[ { "answers": [ "a vocabulary of positive and negative predicates that helps determine the polarity score of an event", "" ], "context": "Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).", "id": 0, "question": "What is the seed lexicon?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO." ], "context": "Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).", "id": 1, "question": "What are the results?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event ", "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity" ], "context": "", "id": 2, "question": "How are relations used to propagate polarity?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus", "The ACP corpus has around 700k events split into positive and negative polarity " ], "context": "", "id": 3, "question": "How big is the Japanese data?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "" ], "context": "Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \\cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. 
As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.", "id": 4, "question": "What are labels available in dataset for supervision?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "3%" ], "context": "The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.", "id": 5, "question": "How big are improvements of supervised learning results trained on small labeled data enhanced with proposed approach compared to basic approach?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity" ], "context": "The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.", "id": 6, "question": "How does their model learn using mostly raw data?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "30 words" ], "context": "The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.", "id": 7, "question": "How big is seed lexicon used for training?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "" ], "context": "Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.", "id": 8, "question": "How large is raw corpus used for training?", "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations" }, { "answers": [ "", "" ], "context": "", "id": 9, "question": "Does the paper report macro F1?", "title": "PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry" }, { "answers": [ "" ], "context": "", "id": 10, "question": "How is the annotation experiment evaluated?", "title": "PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry" }, { "answers": [ "" ], "context": "", "id": 11, "question": "What are the aesthetic emotions formalized?", "title": "PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry" }, { "answers": [ "", "" ], "context": "“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”", "id": 12, "question": "Do they report results only on English data?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. 
There is also a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community: a short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content." ], "context": "A community's identity derives from its members' common interests and shared experiences BIBREF15, BIBREF20. In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.", "id": 13, "question": "How do the various social phenomena examined manifest in different types of communities?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "" ], "context": "In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.", "id": 14, "question": "What patterns do they observe about how user engagement varies with the characteristics of a community?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities where the bulk of the contributions are in a foreign language.", "They collect subreddits from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate the measures, in at least 4 months of the subreddit's history. They compute the measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language." ], "context": "Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. 
We then extend these word-level primitives to characterize entire comments, and the community itself.", "id": 15, "question": "How did they select the 300 Reddit communities for comparison?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "" ], "context": "Having described these word-level measures, we now proceed to establish the primary axes of our typology:", "id": 16, "question": "How do the authors measure how temporally dynamic a community is?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "" ], "context": "We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.", "id": 17, "question": "How do the authors measure how distinctive a community is?", "title": "Community Identity and User Engagement in a Multi-Community Landscape" }, { "answers": [ "", "" ], "context": "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the cut is made during surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems and bio-medical research greatly rely on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous down-stream clinical studies.", "id": 18, "question": "What data is the language model pretrained on?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "", "" ], "context": "Clinical text structuring is a final problem which is highly related to practical applications. Most existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.", "id": 19, "question": "What baselines is the proposed model compared against?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "", "CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text." ], "context": "Recently, some works have focused on pre-trained language representation models to capture language information from text and then utilize the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a shared model for all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language models. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. 
BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes the pretrain-finetune discrepancy that BERT is subject to.", "id": 20, "question": "How is the clinical text structuring task defined?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "", "" ], "context": "Given a sequence of paragraph text $X = \langle x_1, x_2, \ldots, x_n \rangle $, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.", "id": 21, "question": "What are the specific tasks being unified?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins, so it does include additional sentences " ], "context": "In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequences for query text $I_{nq}$ and paragraph text $I_{nt}$ with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to the contextualized representation model, which here is the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.", "id": 22, "question": "Is all text in this dataset a question, or are there unrelated sentences in between questions?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "2,714 " ], "context": "For any clinical free-text paragraph $X$ and query $Q$, the contextualized representation generates the encoded vector of both of them. Here we use the pre-trained language model BERT-base BIBREF26 to capture contextual information.", "id": 23, "question": "How many questions are in the dataset?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "Since BERT is trained on a general corpus, its performance on the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.", "id": 24, "question": "What are the tasks evaluated?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$ or the hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. 
The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.", "id": 25, "question": "Are there privacy concerns with clinical data?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "The final step is to use the integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$, which reduces the dimension to $\left\langle l_s, 2\right\rangle $ where $l_s$ denotes the length of the sequence.", "id": 26, "question": "How do they introduce domain-specific features into the pre-trained language model?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "A two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other one is unfrozen and the entire model is trained at a low learning rate to fetch fine-grained features.", "id": 27, "question": "How big is QA-CTS task dataset?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "In this section, we experimentally evaluate our proposed task and approach. The best results in tables are in bold.", "id": 28, "question": "How big is dataset of pathology reports collected from Ruijin Hospital?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "" ], "context": "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.", "id": 29, "question": "What are strong baseline models in specific tasks?", "title": "Question Answering based Clinical Text Structuring Using Pre-trained Language Model" }, { "answers": [ "Quality measured using perplexity and recall, and performance measured using latency and energy usage. " ], "context": "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2. 
The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3, BIBREF4.", "id": 30, "question": "What aspects have been compared between various language models?", "title": "Progress and Tradeoffs in Neural Language Models" }, { "answers": [ "" ], "context": "BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6. Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimization techniques in all of our models.", "id": 31, "question": "what classic language models are mentioned in the paper?", "title": "Progress and Tradeoffs in Neural Language Models" }, { "answers": [ "", "" ], "context": "We conducted our experiments on Penn Treebank (PTB; BIBREF12) and WikiText-103 (WT103; BIBREF13). Preprocessed by BIBREF14, PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.", "id": 32, "question": "What is a commonly used evaluation metric for language models?", "title": "Progress and Tradeoffs in Neural Language Models" }, { "answers": [ "", "" ], "context": "Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1) to generate fake reviews, and conclude that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2. Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet “garlic knots for breakfast” to “garlic knots for sushi”.", "id": 33, "question": "Which dataset do they use as a starting point in generating fake reviews?", "title": "Stay On-Topic: Generating Context-specific Fake Restaurant Reviews" }, { "answers": [ "", "" ], "context": "Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. 
User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on the outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects business revenue by 5–9% on yelp.com BIBREF6.", "id": 34, "question": "Do they use a pretrained NMT model to help generate reviews?", "title": "Stay On-Topic: Generating Context-specific Fake Restaurant Reviews" }, { "answers": [ "" ], "context": "We discuss the attack model, our generative machine learning method and controlling the generative process in this section.", "id": 35, "question": "How does using NMT ensure generated reviews stay on topic?", "title": "Stay On-Topic: Generating Context-specific Fake Restaurant Reviews" }, { "answers": [ "" ], "context": "Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.", "id": 36, "question": "What kind of model do they use for detection?", "title": "Stay On-Topic: Generating Context-specific Fake Restaurant Reviews" }, { "answers": [ "" ], "context": "We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high degree of customization during production time, e.g. introduction of specific waiter or food item names into reviews.", "id": 37, "question": "Does their detection tool work better than human detection?", "title": "Stay On-Topic: Generating Context-specific Fake Restaurant Reviews" }, { "answers": [ "", "" ], "context": "Ever since the LIME algorithm BIBREF0, "explanation" techniques focusing on finding the importance of input features in regard to a specific prediction have soared, and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). In this paper, we are interested in the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1. We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.", "id": 38, "question": "Which baselines did they compare?", "title": "Saliency Maps Generation for Automatic Text Summarization" }, { "answers": [ "one" ], "context": "We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.", "id": 39, "question": "How many attention layers are there in their model?", "title": "Saliency Maps Generation for Automatic Text Summarization" }, { "answers": [ "" ], "context": "The CNN/Daily Mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13. 
It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights” of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287,000 training pairs and 11,500 test pairs. Similarly to See et al. See2017, during training and prediction we limit the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50,000, thus recreating the same parameters as See et al. See2017.", "id": 40, "question": "Is the explanation from saliency map correct?", "title": "Saliency Maps Generation for Automatic Text Summarization" }, { "answers": [ "", "" ], "context": "Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.", "id": 41, "question": "How is embedding quality assessed?", "title": "Probabilistic Bias Mitigation in Word Embeddings" }, { "answers": [ "RIPA, Neighborhood Metric, WEAT" ], "context": "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$ where the subspace $B$ is defined by $k$ orthogonal unit vectors $B = \lbrace b_1,...,b_k\rbrace $.", "id": 42, "question": "What are the three measures of bias which are reduced in experiments?", "title": "Probabilistic Bias Mitigation in Word Embeddings" }, { "answers": [ "" ], "context": "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:", "id": 43, "question": "What are the probabilistic observations which contribute to the more robust algorithm?", "title": "Probabilistic Bias Mitigation in Word Embeddings" }, { "answers": [ "", "" ], "context": "In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. 
All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario: languages with tons of data for training that generate high-quality models.", "id": 44, "question": "What turns out to be more important: high volume or high quality data?", "title": "Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi" }, { "answers": [ "" ], "context": "The large amount of freely available text on the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah's Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.", "id": 45, "question": "How much is model improved by massive data and how much by quality?", "title": "Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi" }, { "answers": [ "" ], "context": "is a language of West Africa with over 50 million speakers. It is spoken among other languages in Nigeria, the Republic of Togo, Benin Republic, Ghana and Sierra Leone. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.", "id": 46, "question": "What two architectures are used?", "title": "Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi" }, { "answers": [ "", "" ], "context": "Recently, the transformative potential of machine learning (ML) has propelled ML into the forefront of mainstream media. In Brazil, the use of such techniques has become widespread. Thus, ML is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0, and can be applied as a form of aid in several areas of everyday life.", "id": 47, "question": "Does this paper target European or Brazilian Portuguese?", "title": "Is there Gender bias and stereotype in Portuguese Word Embeddings?" }, { "answers": [ "" ], "context": "There is a wide range of techniques that provide interesting results in the context of ML algorithms geared to the classification of data without discrimination; these techniques range from the pre-processing of data BIBREF4 to the use of bias removal techniques BIBREF5. 
Approaches linked to the data pre-processing step usually consist of methods based on improving the quality of the dataset, after which the usual classification tools can be used to train a classifier. Thus, classification starts from a baseline already established by the pre-processing itself. On the other side of the spectrum, there are unsupervised and semi-supervised learning techniques, which are attractive because they do not imply the cost of corpus annotation BIBREF6, BIBREF7, BIBREF8, BIBREF9.", "id": 48, "question": "What were the word embeddings trained on?", "title": "Is there Gender bias and stereotype in Portuguese Word Embeddings?" }, { "answers": [ "" ], "context": "In BIBREF13, the quality of the representation of words through vectors in several models is discussed. According to the authors, the ability to train high-quality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words, such as Word2Vec. Word embeddings have been used to provide meaningful representations for words in an efficient way.", "id": 49, "question": "Which word embeddings are analysed?", "title": "Is there Gender bias and stereotype in Portuguese Word Embeddings?" }, { "answers": [ "", "" ], "context": "Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies of the changing nature of courts in transforming countries.", "id": 50, "question": "Did they experiment on this dataset?", "title": "Citation Data of Czech Apex Courts" }, { "answers": [ "" ], "context": "Legal citation analysis is an emerging phenomenon in the fields of legal theory and legal empirical research. Legal citation analysis employs tools provided by the field of network analysis.", "id": 51, "question": "How is quality of the citation measured?", "title": "Citation Data of Czech Apex Courts" }, { "answers": [ "903,019 references" ], "context": "The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.", "id": 52, "question": "How big is the dataset?", "title": "Citation Data of Czech Apex Courts" }, { "answers": [ "" ], "context": "Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veterans Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. 
Consensus prevalence estimates suggest that PTSD impacts 15-20% of the veteran population, and it is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by different veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent surveys on advanced evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments administered by professionals, which is another significant symptom of war-returning veterans with PTSD. Several existing studies showed that twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time, before their condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied either on black-box machine learning methods or on language-model-based sentiment extraction of posted texts, which failed to obtain the acceptability and trust of clinicians due to their lack of explainability.", "id": 53, "question": "Do they evaluate only on English datasets?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "" ], "context": "Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using twitter posts of war veterans; (ii) design real surveys from the popular symptom-based mental disease assessment surveys; (iii) define a single category and create a PTSD Linguistic Dictionary for each survey question and multiple aspects/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores, and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to their contributions to achieving separation among categories associated with different $\alpha $-scores and $s$-scores, and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected-features-based classification for filling out surveys based on classified categories, i.e., PTSD assessment which is trustworthy among the psychiatry community.", "id": 54, "question": "Do the authors mention any possible confounds in this study?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "Given we have four intensity levels, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD, with scores of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error.", "defined into four categories from high risk, moderate risk, to low risk" ], "context": "Twitter activity based mental health assessment has been of utmost importance to Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. 
BIBREF9 used an n-gram language model (CLM) based s-score measure, setting up some user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and (iii) one from the LIWC categories $\alpha $-scores, and found that the last one gives higher accuracy than the others. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns (via LIWC), is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).", "id": 55, "question": "How is the intensity of the PTSD established?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "", "" ], "context": "There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individuals' daily activities, such as the Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess PTSD among war veterans, and considers the rest of them irrelevant to PTSD. The details of the dryhootch-chosen survey scale are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in a critical situation in terms of risk-taking symptoms of PTSD. 
Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS).", "id": 56, "question": "How is LIWC incorporated into this system?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "" ], "context": "To develop an explainable model, we first need to develop a twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.", "id": 57, "question": "How many twitter users are surveyed using the clinically validated survey?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "" ], "context": "We use automated regular expression based searching to find potential veterans with PTSD on twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet post. To search veterans, we mostly visit different twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits" etc. We define the inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and has at least 25 tweets in the last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in the last week. After filtering (as above), in total 2,423 users remain, whose tweets are used as negative examples, yielding a dataset of the entire week's twitter posts of 2,728 users, of whom 305 are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (305 users are self-claimed PTSD sufferers and the rest are randomly chosen from the previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by at least one of the three surveys and the remaining 118 users were diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by the PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans, where 305 users claimed to have been diagnosed with PTSD. 
After distributing the Dryhootch surveys, we have a dataset of 210 veteran twitter users, among whom 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD is estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to the dryhootch manual BIBREF18, BIBREF19.", "id": 58, "question": "Which clinically validated survey tools are used?", "title": "LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment" }, { "answers": [ "" ], "context": "Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University's Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.", "id": 59, "question": "Did they experiment with the dataset?", "title": "Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision" }, { "answers": [ "", "" ], "context": "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.", "id": 60, "question": "What is the size of this dataset?", "title": "Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision" }, { "answers": [ "" ], "context": "CORD-19-NER annotation is a combination of four sources with different NER methods:", "id": 61, "question": "Do they list all the named entity types present?", "title": "Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision" }, { "answers": [ "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as measures of quality.", "" ], "context": "Sentiment classification is an important task which requires either word level or document level sentiment annotations. Such resources are available for at most 136 languages BIBREF0, preventing accurate sentiment classification in a low resource setup. Recent research efforts on cross-lingual transfer learning make it possible to train models in high resource languages and transfer this information into other, low resource languages using minimal bilingual supervision BIBREF1, BIBREF2, BIBREF3. Besides that, little effort has been spent on the creation of sentiment lexica for low resource languages (e.g., BIBREF0, BIBREF4, BIBREF5). We create and release UniSent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of UniSent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. 
However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language, and does not include any vocabulary expansion or domain adaptation and does not use the embedding graphs.", "id": 62, "question": "how is quality measured?", "title": "UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages" }, { "answers": [ "" ], "context": "Our method, Adapted Sentiment Pivot, requires a sentiment lexicon in one language (e.g. English) as well as a massively parallel corpus. The following steps are performed on this input.", "id": 63, "question": "how many languages exactly is the sentiment lexica for?", "title": "UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages" }, { "answers": [ "" ], "context": "Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for low resource languages. We do this in several steps.", "id": 64, "question": "what sentiment sources do they compare with?", "title": "UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages" }, { "answers": [ "", "" ], "context": "", "id": 65, "question": "Is the method described in this work a clustering-based method?", "title": "Word Sense Disambiguation for 158 Languages using Word Embeddings Only" }, { "answers": [ "" ], "context": "", "id": 66, "question": "How are the different senses annotated/labeled? ", "title": "Word Sense Disambiguation for 158 Languages using Word Embeddings Only" }, { "answers": [ "" ], "context": "", "id": 67, "question": "Was any extrinsic evaluation carried out?", "title": "Word Sense Disambiguation for 158 Languages using Word Embeddings Only" }, { "answers": [ "" ], "context": "Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.", "id": 68, "question": "Does the model use both spectrogram images and raw waveforms as features?", "title": "Spoken Language Identification using ConvNets" }, { "answers": [ "", "" ], "context": "Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays, most attempts at spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.", "id": 69, "question": "Is the performance compared against a baseline model?", "title": "Spoken Language Identification using ConvNets" }, { "answers": [ "Answer with content missing: (Table 1)\nPrevious state-of-the-art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)" ], "context": "Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio as features BIBREF19. 
Convolutional Neural Networks have demonstrated excellent performance gains over other machine learning techniques in the classification of these features BIBREF20, BIBREF21. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention, since this approach hasn't been applied to the task of language identification before.", "id": 70, "question": "What is the accuracy reported by state-of-the-art methods?", "title": "Spoken Language Identification using ConvNets" }, { "answers": [ "" ], "context": "The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0, multi-lingual sentiment analysis BIBREF1, machine translation BIBREF2 and so on. Although building bilingual lexicons has achieved success with parallel sentences in resource-rich languages BIBREF2, the parallel data is insufficient or even unavailable especially for resource-scarce languages, and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicons from these non-parallel data without supervision.", "id": 71, "question": "Which vision-based approaches does this approach outperform?", "title": "Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data" }, { "answers": [ "" ], "context": "The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicons. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.", "id": 72, "question": "What baseline is used for the experimental setup?", "title": "Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data" }, { "answers": [ "", "" ], "context": "Our goal is to induce a bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model, whose objectives for bilingual lexicon induction are two-fold: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visually grounded information, which are complementary for bilingual lexicon induction.", "id": 73, "question": "Which languages are used in the multi-lingual caption model?", "title": "Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data" }, { "answers": [ "" ], "context": "The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. 
The wide host of languages, language varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing.", "id": 74, "question": "Did they experiment on all the tasks?", "title": "AraNet: A Deep Learning Toolkit for Arabic Social Media" }, { "answers": [ "" ], "context": "For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, there is a general rarity of resources for other tasks such as gender and age detection. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.", "id": 75, "question": "What models did they compare to?", "title": "AraNet: A Deep Learning Toolkit for Arabic Social Media" }, { "answers": [ "", "" ], "context": "Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15 dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16 with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.", "id": 76, "question": "What datasets are used in training?", "title": "AraNet: A Deep Learning Toolkit for Arabic Social Media" }, { "answers": [ "", "" ], "context": "Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones. In a GAN model, there are two sub-models trained simultaneously: a generative model $G$ from which artificial data examples can be sampled, and a discriminative model $D$ which classifies real data examples and artificial ones from $G$. By training $G$ to maximize its generation power, and training $D$ to minimize the generation power of $G$, so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al.
2016).", "id": 77, "question": "Which GAN do they use?", "title": "Generative Adversarial Nets for Multiple Text Corpora" }, { "answers": [ "" ], "context": "In a GAN model, we assume that the data examples $x$ are drawn from a distribution $p(x)$, and the artificial data examples $G(z)$ are transformed from the noise distribution $z\sim p_{z}(z)$. The binary classifier $D$ outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem: $\min _{G}\max _{D}\;\mathbb {E}_{x\sim p(x)}[\log D(x)]+\mathbb {E}_{z\sim p_{z}(z)}[\log (1-D(G(z)))]$", "id": 78, "question": "Do they evaluate grammaticality of generated text?", "title": "Generative Adversarial Nets for Multiple Text Corpora" }, { "answers": [ "" ], "context": "Suppose we have a number of different corpora INLINEFORM0, which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1, INLINEFORM2, where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4. We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”", "id": 79, "question": "Which corpora do they use?", "title": "Generative Adversarial Nets for Multiple Text Corpora" }, { "answers": [ "" ], "context": "Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.", "id": 80, "question": "Do they report results only on English datasets?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "typos in spellings or ungrammatical words" ], "context": "We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words.
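Referring back to the GAN minimax objective above, the following is a minimal PyTorch sketch of one training step; the toy generator and discriminator sizes and the learning rates are assumptions, and the generator uses the common non-saturating variant rather than the literal minimax loss.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))  # toy G
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))   # toy D
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):                        # real: (batch, 2) samples from p(x)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    fake = G(torch.randn(real.size(0), 64))                     # G(z), z ~ p_z
    # discriminator step: label real as 1, generated as 0
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: non-saturating loss, maximize log D(G(z))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```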
By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.", "id": 81, "question": "How do the authors define or exemplify 'incorrect words'?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "" ], "context": "In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor-quality texts obtained from Twitter, called tweets, are therefore ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.", "id": 82, "question": "How many vanilla transformers do they use after applying an embedding layer?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "", "" ], "context": "In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Since the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.", "id": 83, "question": "Do they test their approach on a dataset without incomplete data?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "", "" ], "context": "Besides the already mentioned BERT, the following baseline models are also used for comparison.", "id": 84, "question": "Should their approach be applied only when dealing with incomplete data?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average" ], "context": "We focus on the three following services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spaCy and TensorFlow backends).", "id": 85, "question": "By how much do they outperform other models in the sentiment and intent classification tasks?", "title": "Stacked DeBERT: All Attention in Incomplete Data for Text Classification" }, { "answers": [ "", "" ], "context": "The Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems.
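As one plausible reading of the reconstruction idea behind DeBERT described above (not the authors' exact architecture), a denoising layer can be trained to map hidden embeddings of a corrupted sentence back to those of its complete counterpart; all dimensions below are assumptions.

```python
import torch.nn as nn

class DenoisingHead(nn.Module):
    """Reconstruct clean token embeddings from a noisy sentence's embeddings."""
    def __init__(self, dim=768, bottleneck=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                                 nn.Linear(bottleneck, dim))
    def forward(self, noisy_hidden):       # (batch, seq, dim)
        return self.net(noisy_hidden)

# training signal (sketch): mse(head(h_incomplete), h_complete.detach()),
# where h_* are hidden embeddings of the incomplete and complete sentences
```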
Our system, Gunrock BIBREF1, addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4, including inconsistency and difficulty in complex sentence understanding (e.g., long utterances), and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).", "id": 86, "question": "What is the sample size of people used to measure user satisfaction?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "", "" ], "context": "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6, which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we mark up the synthesized responses and return them to the users through text-to-speech (TTS) (tts). While we provide an overview of the system in the following sections, please see the technical report BIBREF1 for detailed implementation information.", "id": 87, "question": "What are all the metrics to measure user engagement?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "" ], "context": "Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors without contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using the Double Metaphone search algorithm BIBREF8 for noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching.
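The phonetic code matching just described can be sketched as follows, assuming the open-source `metaphone` package; the knowledge-base tokens and the misrecognized input are invented for illustration, and matching is done word by word in this toy version.

```python
from metaphone import doublemetaphone   # returns (primary, secondary) codes

kb_tokens = ["bradley", "cooper", "gaga", "shallow"]   # hypothetical KB words
kb_by_code = {}
for word in kb_tokens:
    primary, secondary = doublemetaphone(word)
    kb_by_code.setdefault(primary, word)
    if secondary:
        kb_by_code.setdefault(secondary, word)

def suggest_fix(asr_token):
    """Suggest a KB word whose phonetic code matches the ASR token."""
    primary, secondary = doublemetaphone(asr_token)
    return kb_by_code.get(primary) or kb_by_code.get(secondary)

print(suggest_fix("kooper"))   # 'kooper' and 'cooper' share the code 'KPR'
```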
An example can be seen in User_3 and Gunrock_3 in Table TABREF2.", "id": 88, "question": "What are the system designs introduced?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "" ], "context": "Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g., “i like the movie a star is born\"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him\" in the last segment in User_5 is replaced with “bradley cooper\" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including the Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.", "id": 89, "question": "Do they specify the model they use for Gunrock?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "" ], "context": "We implemented a hierarchical dialog manager, consisting of high-level and low-level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter,\" triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.", "id": 90, "question": "Do they gather explicit user satisfaction data on Gunrock?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "" ], "context": "All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location, from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2).
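As a rough stand-in for the noun-phrase extraction step above (the system itself uses a constituency parser, whereas this sketch uses spaCy's dependency-based noun chunks), with the model name and example sentence being assumptions:

```python
import spacy

nlp = spacy.load("en_core_web_sm")           # assumed English model
segment = nlp("i like the movie a star is born")
noun_phrases = [chunk.text for chunk in segment.noun_chunks]
print(noun_phrases)                          # candidate phrases for KB matching
```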
We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and interested in science and technology.", "id": 91, "question": "How do they correlate user backstory queries to user satisfaction?", "title": "Gunrock: A Social Bot for Complex and Engaging Long Conversations" }, { "answers": [ "" ], "context": "In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculations BIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\%$ of Americans believe that news sources do not report the news objectively, thus implying the prevalence of such bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.", "id": 92, "question": "Do the authors report only on English?", "title": "Towards Detection of Subjective Bias using Contextualized Word Embeddings" }, { "answers": [ "", "" ], "context": "In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.", "id": 93, "question": "What is the baseline for the experiments?", "title": "Towards Detection of Subjective Bias using Contextualized Word Embeddings" }, { "answers": [ "They used BERT-based models to detect subjective language in the WNC corpus" ], "context": "FastText BIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "id": 94, "question": "Which experiments are performed?", "title": "Towards Detection of Subjective Bias using Contextualized Word Embeddings" }, { "answers": [ "", "No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU." ], "context": "Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper—is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0. Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2. As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.", "id": 95, "question": "Is ROUGE their only baseline?", "title": "Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!
}, { "answers": [ "" ], "context": "Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10, BIBREF11: a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12.", "id": 96, "question": "what language models do they use?", "title": "Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!" }, { "answers": [ "" ], "context": "In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two.", "id": 97, "question": "what questions do they ask human judges?", "title": "Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!" }, { "answers": [ "", "" ], "context": "In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0, BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2. For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best to represent images.", "id": 98, "question": "What misbehavior is identified?", "title": "An empirical study on the effectiveness of images in Multimodal Neural Machine Translation" }, { "answers": [ "" ], "context": "In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2). We follow by explaining the conditional GRU layer (§ SECREF8) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13).", "id": 99, "question": "What is the baseline used?", "title": "An empirical study on the effectiveness of images in Multimodal Neural Machine Translation" }, { "answers": [ "" ], "context": "Given a source sentence $x=(x_{1},\ldots ,x_{n})$, the neural network directly models the conditional probability $p(y\mid x)$ of its translation $y=(y_{1},\ldots ,y_{m})$. The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation $c$ for each source sentence and the decoder generates one target word at a time by decomposing the following conditional probability: $\log p(y\mid x)=\sum _{t=1}^{m}\log p(y_{t}\mid y_{<t},c)$", "id": 100, "question": "Which attention mechanisms do they compare?", "title": "An empirical study on the effectiveness of images in Multimodal Neural Machine Translation" }, { "answers": [ "", "" ], "context": "Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans.
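For reference, the SLOR score mentioned above is defined in prior work (e.g., Pauls and Klein, 2012; Lau et al., 2015) as the length-normalized language model log-probability of a sentence with its unigram log-probability subtracted out; a sketch of that standard definition, with notation ours:

```latex
\mathrm{SLOR}(S) = \frac{\ln p_{M}(S) - \ln p_{u}(S)}{|S|},
\qquad p_{u}(S) = \prod_{w \in S} p(w)
```

where $p_{M}$ is the sentence probability under the language model and $p(w)$ is the unigram probability of word $w$.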
It provides more challenges because commenting requires the ability to comprehend the article, summarize the main ideas, mine the opinions, and generate natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.", "id": 101, "question": "Which paired corpora did they use in the other experiment?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20. \nUnder the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044, better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029", "Proposed model is better than both lexicon-based models by a significant margin in all metrics: BLEU 0.261 vs 0.250, ROUGE 0.162 vs 0.155, etc." ], "context": "In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.", "id": 102, "question": "By how much does their system outperform the lexicon-based models?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "" ], "context": "Here, we first introduce the challenges of building a well-performing machine commenting system.", "id": 103, "question": "Which lexicon-based models did they compare with?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "" ], "context": "Facing the above challenges, we provide three solutions to the problems.", "id": 104, "question": "How many comments were used?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "" ], "context": "We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and notation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantages of both supervised and unsupervised learning.", "id": 105, "question": "How many articles did they have?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "" ], "context": "Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1. The comment pool is formed from a large set of candidate comments INLINEFORM2, where INLINEFORM3 is the number of unique comments in the pool.
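A minimal sketch of the retrieval step just described: rank every candidate comment by cosine similarity between an article representation and precomputed comment representations. How the vectors are produced (e.g., by the topic model) is left abstract, and all names are illustrative.

```python
import numpy as np

def retrieve_comments(article_vec, comment_vecs, comments, k=5):
    """Return the k candidate comments most similar to the article."""
    a = article_vec / np.linalg.norm(article_vec)
    c = comment_vecs / np.linalg.norm(comment_vecs, axis=1, keepdims=True)
    scores = c @ a                      # cosine similarity to each candidate
    top = np.argsort(-scores)[:k]
    return [(comments[i], float(scores[i])) for i in top]
```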
In this work, we have 4.5 million human comments in the candidate set, and the comments are varied, covering topics ranging from pets to sports.", "id": 106, "question": "What news comment dataset was used?", "title": "Unsupervised Machine Commenting with Neural Variational Topic Model" }, { "answers": [ "" ], "context": "With ever-increasing amounts of data available, there is an increasing need to offer tooling to speed up processing and, eventually, make sense of this data. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals who have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and, depending on type or category, needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task.", "id": 107, "question": "By how much do they outperform standard BERT?", "title": "Enriching BERT with Knowledge Graph Embeddings for Document Classification" }, { "answers": [ "", "" ], "context": "A central challenge in work on genre classification is the definition of a mode of representation that is both rigid (for theoretical purposes) and flexible (for practical purposes) and able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.", "id": 108, "question": "What dataset do they use?", "title": "Enriching BERT with Knowledge Graph Embeddings for Document Classification" }, { "answers": [ "" ], "context": "Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:", "id": 109, "question": "How do they combine text representations with the knowledge graph embeddings?", "title": "Enriching BERT with Knowledge Graph Embeddings for Document Classification" }, { "answers": [ "" ], "context": "The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’.
However, the average length of these texts is around 150 tokens, which rules out most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign-language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work.", "id": 110, "question": "What is the algorithm used for the classification tasks?", "title": "Diachronic Topics in New High German Poetry" }, { "answers": [ "" ], "context": "We approach diachronic variation of poetry from two perspectives. First, as a distant reading task to visualize the development of clearly interpretable topics over time. Second, as a downstream task, i.e. a supervised machine learning task to determine the year (the time-slot) of publication for a given poem. We infer topic distributions over documents as features and pit them against a simple style baseline.", "id": 111, "question": "Is the outcome of the LDA analysis evaluated in any way?", "title": "Diachronic Topics in New High German Poetry" }, { "answers": [ "", "" ], "context": "We retrieve the most important (likely) words for all 100 topics and interpret these (sorted) word lists as aggregated topics, e.g. topic 27 (figure 2) contains: Tugend (virtue), Kunst (art), Ruhm (fame), Geist (spirit), Verstand (mind) and Lob (praise). This topic as a whole describes the concept of ’artistic virtue’.", "id": 112, "question": "What is the corpus used in the study?", "title": "Diachronic Topics in New High German Poetry" }, { "answers": [ "", "" ], "context": "Knowledge graphs (KG) have been around for several years, and their most prominent application is in web search; for example, Google search triggers a certain entity card when a user's query matches or mentions an entity, based on some statistical model. The core potential of a knowledge graph lies in its capability for reasoning and inference, and we have not seen a revolutionary breakthrough in such areas yet. One main obstacle is obviously the lack of sufficient knowledge graph data, including entities, entities' descriptions, entities' attributes, and relationships between entities. A fully functional knowledge graph supporting general-purpose reasoning and inference might still require long years of the community's innovation and hard work. On the other hand, many less demanding applications have great potential to benefit from the availability of information from the knowledge graph, such as query understanding and document understanding in information retrieval/search engines, simple inference in question answering systems, and easy reasoning in domain-limited decision support tools. Not only academia but also industry has been heavily investing in knowledge graphs, such as Google's knowledge graph, Amazon's product graph, Facebook's Graph API, IBM's Watson, and Microsoft's Satori, etc.", "id": 113, "question": "What are the traditional methods for identifying important attributes?", "title": "Important Attribute Identification in Knowledge Graph" }, { "answers": [ "" ], "context": "Many proposed approaches formulate the entity attribute ranking problem as a post-processing step of automated attribute-value extraction. In BIBREF0, BIBREF1, BIBREF2, Pasca et al.
first extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3, Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech (POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4, Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge bases, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.", "id": 114, "question": "What do you use to calculate word/sub-word embeddings", "title": "Important Attribute Identification in Knowledge Graph" }, { "answers": [ "" ], "context": "There has been broad research on entity detection, relationship extraction, and also missing relationship prediction. For example, BIBREF13, BIBREF14 and BIBREF15 explained how to construct a knowledge graph and how to perform representation learning on knowledge graphs. Some research has been performed on attribute extraction, such as BIBREF16 and BIBREF4; the latter is special in that it also simultaneously computes the attribute importance. As for modeling attribute importance for an existing knowledge graph with completed attribute extraction, we found only a few existing studies, all of which used simple co-occurrences to rank entity attributes. In reality, many knowledge graphs do not contain attribute importance information; for example, in the most famous Wikidata, a large number of entities have many attributes, and it is difficult to know which attributes are significant and deserve more attention. In this research we focus on identifying important attributes in existing knowledge graphs. Specifically, we propose a new method of using an extra user-generated data source for evaluating the attribute importance, and we use recently proposed state-of-the-art word/sub-word embedding techniques to match the external data with the attribute definitions and values from entities in knowledge graphs. We then use the statistics obtained from the matching to compare the attribute importance. Our method is generally extensible to any knowledge graph without attribute importance information. Whenever an external textual data source can be found, our proposed method will work, even if the external data does not exactly match the attribute textual data, since the vector embedding performs semantic matching and does not require exact string matching.", "id": 115, "question": "What user generated text data do you use?", "title": "Important Attribute Identification in Knowledge Graph" }, { "answers": [ "" ], "context": "Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data.
The most prominent example is descriptive statistics, which summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.", "id": 116, "question": "Did they propose other metrics?", "title": "Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections" }, { "answers": [ "", "" ], "context": "A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is the n-gram, with dimension equal to the size of the vocabulary. More advanced methods learn a relatively low-dimensional latent space that represents each word or token as a continuous semantic vector, such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector, such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.", "id": 117, "question": "Which real-world datasets did they use?", "title": "Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections" }, { "answers": [ "" ], "context": "We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.", "id": 118, "question": "How did they obtain human intuitions?", "title": "Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections" }, { "answers": [ "" ], "context": "Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0.", "id": 119, "question": "What are the country-specific drivers of international development rhetoric?", "title": "What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016" }, { "answers": [ "", "" ], "context": "In the analysis, we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4. This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and a time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.", "id": 120, "question": "Is the dataset multilingual?", "title": "What Drives the International Development Agenda?
An NLP Analysis of the United Nations General Debate 1970-2016" }, { "answers": [ "They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence." ], "context": "We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose a semantic coherence measure, which is closely related to the point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.", "id": 121, "question": "How are the main international development topics that states raise identified?", "title": "What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016" }, { "answers": [ "" ], "context": "QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all the NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them in enriching the KB; we call this feature active learning. Our system also allows users to add Multi-Turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Services and currently runs using the Microsoft Azure Stack.", "id": 122, "question": "What experiments do the authors present to validate their system?", "title": "QnAMaker: Data to Bot in 2 Minutes" }, { "answers": [ "" ], "context": "As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers, who want to create a bot using the data they have, and End-Users, who will chat with the bot(s) created by bot-developers. The components involved in the process are:", "id": 123, "question": "How does the conversation layer work?", "title": "QnAMaker: Data to Bot in 2 Minutes" }, { "answers": [ "", "" ], "context": "Creating a bot is a 3-step process for a bot developer:", "id": 124, "question": "What components is the QnAMaker composed of?", "title": "QnAMaker: Data to Bot in 2 Minutes" }, { "answers": [ "", "" ], "context": "Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT).
Though MERT performs better with improved search algorithms BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work well when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.", "id": 125, "question": "How do they measure robustness in experiments?", "title": "A simple discriminative training method for machine translation with large-scale features" }, { "answers": [ "" ], "context": "Plackett-Luce was first proposed to predict ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ on their abilities to win a game, and a rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position.", "id": 126, "question": "Is the new method inferior in terms of robustness to MIRAs in experiments?", "title": "A simple discriminative training method for machine translation with large-scale features" }, { "answers": [ "" ], "context": "In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ denote target hypotheses. A set of features is defined on both the source and target sides. We refer to $h(e_{i,*})$ as the feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.", "id": 127, "question": "What experiments with large-scale features are performed?", "title": "A simple discriminative training method for machine translation with large-scale features" }, { "answers": [ "" ], "context": "Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable the understanding of the semantics of the input speech. When there is incoming speech, the ASR module picks it up and attempts to transcribe it. An ASR model can generate multiple interpretations for most speech, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually passed on to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, NLU modules will first classify the utterance as one of many possible domains, and the further analysis on intent classification and slot tagging will be domain-specific.", "id": 128, "question": "Which ASR system(s) is used in this work?", "title": "Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses" }, { "answers": [ "" ], "context": "The preliminary architecture is shown in Fig. FIGREF4. A given transcribed utterance is first encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm that splits words into fundamental subword units (pairs of bytes, or BPs) and reduces the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder, and the output state of the BiLSTM is regarded as a vector representation for this utterance.
Finally, a fully connected feed-forward neural network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.", "id": 129, "question": "What are the series of simple models?", "title": "Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses" }, { "answers": [ "", "" ], "context": "Besides the Baseline and Oracle, where only the ASR 1-best hypothesis is considered, we also perform experiments utilizing the ASR $n$-best hypotheses during evaluation. The models evaluated with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):", "id": 130, "question": "Over which datasets/corpora is this work evaluated?", "title": "Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses" }, { "answers": [ "Yes, Open IE", "" ], "context": "We developed a syntactic text simplification (TS) approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence (AI) tasks, such as Machine Translation, Information Extraction (IE) or Text Summarization. Since shorter sentences are generally better processed by natural language processing (NLP) systems BIBREF0, the goal of our approach is to break down a complex source sentence into a set of minimal propositions, i.e. a sequence of sound, self-contained utterances, with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1.", "id": 131, "question": "Is the semantic hierarchy representation used for any task?", "title": "DisSim: A Discourse-Aware Syntactic Text Simplification Framework for English and German" }, { "answers": [ "" ], "context": "We present DisSim, a discourse-aware sentence splitting approach for English and German that creates a semantic hierarchy of simplified sentences. It takes a sentence as input and performs a recursive transformation process that is based upon a small set of 35 hand-crafted grammar rules for the English version and 29 rules for the German approach. These patterns were heuristically determined in a comprehensive linguistic analysis and encode syntactic and lexical features that can be derived from a sentence's parse tree. Each rule specifies (1) how to split up and rephrase the input into structurally simplified sentences and (2) how to set up a semantic hierarchy between them. They are recursively applied to a given source sentence in a top-down fashion. When no further rule matches, the algorithm stops and returns the generated discourse tree.", "id": 132, "question": "What are the corpora used for the task?", "title": "DisSim: A Discourse-Aware Syntactic Text Simplification Framework for English and German" }, { "answers": [ "the English version is evaluated. The German version evaluation is in progress" ], "context": "In a first step, source sentences that present a complex linguistic form are turned into clean, compact structures by decomposing clausal and phrasal components.
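Tying together the BPE-encode, BiLSTM-encode, and MLP-classify pipeline described a few paragraphs above for the SLU baseline, here is a minimal PyTorch sketch; layer sizes are assumptions, and BPE tokenization is presumed to happen upstream.

```python
import torch
import torch.nn as nn

class UtteranceClassifier(nn.Module):
    """BiLSTM over BPE token ids; final states -> MLP -> domain/intent logits."""
    def __init__(self, vocab_size, n_classes, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.enc = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))
    def forward(self, bpe_ids):              # (batch, seq) of BPE ids
        x = self.emb(bpe_ids)
        _, (h, _) = self.enc(x)              # h: (2, batch, hidden)
        u = torch.cat([h[0], h[1]], dim=-1)  # utterance vector
        return self.mlp(u)                   # softmax applied at loss time
```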
For this purpose, the transformation rules encode both the splitting points and the rephrasing procedure for reconstructing proper sentences.", "id": 133, "question": "Is the model evaluated?", "title": "DisSim: A Discourse-Aware Syntactic Text Simplification Framework for English and German" }, { "answers": [ "" ], "context": "Word embeddings have great practical importance since they can be used as pre-computed high-density features for ML models, significantly reducing the amount of training data required in a variety of NLP tasks. However, there are several inter-related challenges with computing and consistently distributing word embeddings concerning the:", "id": 134, "question": "What new metrics are suggested to track progress?", "title": "Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects" }, { "answers": [ "", "" ], "context": "There are several approaches to generating word embeddings. One can build models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe BIBREF1, BIBREF2, or one can extract such embeddings as by-products of more general models, which implicitly compute such word embeddings in the process of solving other language tasks.", "id": 135, "question": "What intrinsic evaluation metrics are used?", "title": "Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects" }, { "answers": [ "" ], "context": "The neural word embedding model we use in our experiments is heavily inspired by the one described in BIBREF4, but ours is one layer deeper and is set to solve a slightly different word prediction task. Given a sequence of 5 words $w_{t-2}\;w_{t-1}\;w_{t}\;w_{t+1}\;w_{t+2}$, the task the model tries to perform is that of predicting the middle word, $w_{t}$, based on the two words on the left ($w_{t-2}$, $w_{t-1}$) and the two words on the right ($w_{t+1}$, $w_{t+2}$): $p(w_{t}\mid w_{t-2},w_{t-1},w_{t+1},w_{t+2})$. This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in “close” regions of the embedding hyper-space.", "id": 136, "question": "What experimental results suggest that using less than 50% of the available training examples might result in overfitting?", "title": "Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects" }, { "answers": [ "", "" ], "context": "A great deal of commonsense knowledge about the world we live in is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) is very hard for machines, as it demands modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste.
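The middle-word prediction task described above can be sketched as the following network; the embedding and hidden sizes are assumptions, and the extra hidden layer reflects the "one layer deeper" remark.

```python
import torch
import torch.nn as nn

class MiddleWordModel(nn.Module):
    """Predict w_t from (w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}); the embedding
    table is the by-product of interest."""
    def __init__(self, vocab_size, dim=200, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.ff = nn.Sequential(nn.Linear(4 * dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, vocab_size))
    def forward(self, ctx):                  # ctx: (batch, 4) context word ids
        e = self.emb(ctx).flatten(1)         # concatenate 4 context embeddings
        return self.ff(e)                    # logits over the vocabulary
```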
From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different.", "id": 137, "question": "What multimodality is available in the dataset?", "title": "Procedural Reasoning Networks for Understanding Multimodal Procedures" }, { "answers": [ "" ], "context": "In our study, we particularly focus on the visual reasoning tasks of RecipeQA, namely visual cloze, visual coherence, and visual ordering tasks, each of which examines a different reasoning skill. We briefly describe these tasks below.", "id": 138, "question": "What are previously reported models?", "title": "Procedural Reasoning Networks for Understanding Multimodal Procedures" }, { "answers": [ "Average accuracy of proposed model vs best previous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59" ], "context": "In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: an input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images.", "id": 139, "question": "How much better is the accuracy of the new model compared to previously reported models?", "title": "Procedural Reasoning Networks for Understanding Multimodal Procedures" }, { "answers": [ "", "" ], "context": "Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0. By analyzing EHRs, much useful information closely related to patients can be discovered BIBREF1. Since Chinese EHRs are recorded without explicit word delimiters (e.g., “糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs contain many medical terminologies, such as “高血压性心脏病” (hypertensive heart disease) and “罗氏芬” (Rocephin), so only annotators with medical backgrounds are qualified to label EHRs. Second, EHRs may involve the personal privacy of patients. Therefore, they cannot be openly published on a large scale for labeling. The above two problems lead to high annotation costs and an insufficient training corpus for research on CWS in medical text.", "id": 140, "question": "How does the scoring model work?", "title": "Active Learning for Chinese Word Segmentation in Medical Text" }, { "answers": [ "Active learning methods have a learning engine (mainly used for training classification models) and a selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to the training set for the classifier to re-train, thus continuously improving the accuracy of the classifier.
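The train/select/relabel cycle described in this answer can be sketched generically as below; `train`, `confidence`, and `annotate` are hypothetical callables standing in for the learning engine, the scoring model, and the human annotators, respectively.

```python
def active_learning(labeled, unlabeled, train, confidence, annotate,
                    rounds=10, k=100):
    """Train, pick the k least-confident unlabeled samples, relabel, repeat."""
    for _ in range(rounds):
        model = train(labeled)                              # learning engine
        ranked = sorted(unlabeled, key=lambda x: confidence(model, x))
        batch, unlabeled = ranked[:k], ranked[k:]           # selection engine
        labeled = labeled + [annotate(x) for x in batch]    # human relabeling
    return train(labeled)
```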
In this paper, a CRF-based segmenter and a scoring model are employed as the learning engine and the selection engine, respectively." ], "context": "In past decades, research on CWS, an important task for Chinese NLP BIBREF7, has had a long history, and various methods have been proposed BIBREF13, BIBREF14, BIBREF15. These methods mainly fall into two categories: supervised learning and deep learning BIBREF2.", "id": 141, "question": "How does the active learning model work?", "title": "Active Learning for Chinese Word Segmentation in Medical Text" }, { "answers": [ "" ], "context": "Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23. The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24, query-by-committee BIBREF25 and information gain BIBREF26. Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27.", "id": 142, "question": "Which neural network architectures are employed?", "title": "Active Learning for Chinese Word Segmentation in Medical Text" }, { "answers": [ "" ], "context": "A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0. Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.", "id": 143, "question": "What are the key points in the role of script knowledge that can be studied?", "title": "InScript: Narrative texts annotated with script information" }, { "answers": [ "For event types and participant types, there was a moderate to substantial level of agreement using Fleiss' Kappa. For coreference chain annotation, there was average agreement of 90.5%.", "Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information." ], "context": "We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010, VanDerMeer2009, and the OMICS corpus BIBREF1), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2.", "id": 144, "question": "Did the annotators agree and how much?", "title": "InScript: Narrative texts annotated with script information" }, { "answers": [ "" ], "context": "Statistics for the corpus are given in Table 2. On average, each story has a length of 12 sentences and 217 words, with 98 word types. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, each story has on average 32 verbs, out of which 58% denote events related to the respective scenario.
As can be seen in Table 2, there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from this scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.", "id": 145, "question": "How many subjects have been used to create the annotations?", "title": "InScript: Narrative texts annotated with script information" }, { "answers": [ "Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs", "" ], "context": "Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0, BIBREF1, BIBREF2, algebraic operations BIBREF3, BIBREF4, BIBREF5, multiple embeddings BIBREF6, BIBREF7, BIBREF8, BIBREF9, and complex neural models BIBREF10, BIBREF11. However, there are only a few studies BIBREF12, BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE). First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princess Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1, identifies the existing fact that Ferdinand Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princess Henriette's child. We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 is depicted in Figure 1.
Such perturbations to the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14, BIBREF15, BIBREF16, BIBREF17.", "id": 146, "question": "What datasets are used to evaluate this approach?", "title": "Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications" }, { "answers": [ "" ], "context": "In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle$, where $s,o\in \xi$, the set of entities, and $r\in \mathcal{R}$, the set of relations. To model the KG, a scoring function $\psi :\xi \times \mathcal{R} \times \xi \rightarrow \mathbb{R}$ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = f(\mathbf{e}_s, \mathbf{e}_r) \cdot \mathbf{e}_o$, where $\mathbf{e}_s, \mathbf{e}_r, \mathbf{e}_o \in \mathbb{R}^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $f(\mathbf{e}_s, \mathbf{e}_r) = \mathbf{e}_s \odot \mathbf{e}_r$, where $\odot$ is the element-wise multiplication operator. Similarly, in ConvE, $f(\mathbf{e}_s, \mathbf{e}_r)$ is computed by a convolution on the concatenation of $\mathbf{e}_s$ and $\mathbf{e}_r$.", "id": 147, "question": "How is this approach used to detect incorrect facts?", "title": "Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications" }, { "answers": [ "" ], "context": "For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle$, we constrain the possible triples that we can remove (or inject) to be in the form of $\langle s^{\prime }, r^{\prime }, o\rangle$, i.e., $s^{\prime }$ and $r^{\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\langle s, r^{\prime }, o^{\prime }\rangle$ and $\langle s, r^{\prime }, o\rangle$ in the appendices “Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle$” and “Modifications of the Form $\langle s, r^{\prime }, o \rangle$”, and leave empirical evaluation of these modifications for future work.", "id": 148, "question": "Can this adversarial approach be used to directly improve model accuracy?", "title": "Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications" }, { "answers": [ "" ], "context": "Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0.
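Circling back to the scoring function reconstructed above (id 147), a minimal sketch of the DistMult score in PyTorch; the batching convention is an assumption:

```python
import torch

def distmult_score(e_s: torch.Tensor, e_r: torch.Tensor, e_o: torch.Tensor) -> torch.Tensor:
    """DistMult: psi(s, r, o) = (e_s * e_r) . e_o, i.e. the element-wise product of
    subject and relation embeddings, dotted with the object embedding."""
    return (e_s * e_r * e_o).sum(dim=-1)

# e.g. a batch of 32 triples with d = 100:
# e_s, e_r, e_o = torch.randn(3, 32, 100).unbind(0)
# scores = distmult_score(e_s, e_r, e_o)  # shape (32,)
```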
Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1, BIBREF2, videos BIBREF3, survey data BIBREF4 or social network data BIBREF5.", "id": 149, "question": "what are the advantages of the proposed model?", "title": "Learning Supervised Topic Models for Classification and Regression from Crowds" }, { "answers": [ "" ], "context": "Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6. By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2, by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.", "id": 150, "question": "what are the state of the art approaches?", "title": "Learning Supervised Topic Models for Classification and Regression from Crowds" }, { "answers": [ "", "" ], "context": "Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19, who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.", "id": 151, "question": "what datasets were used?", "title": "Learning Supervised Topic Models for Classification and Regression from Crowds" }, { "answers": [ "", "They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogues between workers and automatically annotated dialogue acts." ], "context": "Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single-domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.", "id": 152, "question": "How was the dataset collected?", "title": "CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset" }, { "answers": [ "" ], "context": "According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories.
The first one is human-to-human dialogues. One of the earliest and best-known datasets, ATIS BIBREF6, used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires much human effort, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of dialogue data. The third one is machine-to-machine dialogues. It requires building both user and system simulators to generate dialogue outlines, then using templates BIBREF3 to generate dialogues or further employing people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.", "id": 153, "question": "What are the benchmark models?", "title": "CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset" }, { "answers": [ "" ], "context": "Our corpus simulates scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:", "id": 154, "question": "How was the corpus annotated?", "title": "CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset" }, { "answers": [ "Only BERT base and BERT large are compared to the proposed approach." ], "context": "As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.", "id": 155, "question": "What models other than standalone BERT is the new model compared to?", "title": "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance" }, { "answers": [ "" ], "context": "Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.", "id": 156, "question": "How much is representation improved for rare/medium frequency words compared to standalone BERT and previous work?", "title": "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance" }, { "answers": [ "", "" ], "context": "We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model.
Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained, similarly to BIBREF7, by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate", "id": 157, "question": "What are the three downstream task datasets?", "title": "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance" }, { "answers": [ "" ], "context": "To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.", "id": 158, "question": "What is the dataset for the word probing task?", "title": "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance" }, { "answers": [ "" ], "context": "Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information extraction (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.", "id": 159, "question": "How fast is the model compared to baselines?", "title": "Joint Entity Linking with Deep Reinforcement Learning" }, { "answers": [ "Comparing with the highest performing baseline: 1.3 points on the ACE2004 dataset, 0.6 points on the CWEB dataset, and 0.86 points in the average of all scores." ], "context": "The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: the Local Encoder, which encodes local features of mentions and their candidate entities; the Global Encoder, which encodes the global coherence of mentions in a sequence manner; and the Entity Selector, which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are mutually correlated, we train them jointly. Moreover, the Local Encoder, as the basis of the entire framework, will be independently trained before the joint training process starts. In the following, we will introduce the technical details of these modules.", "id": 160, "question": "How big is the performance difference between this method and the baseline?", "title": "Joint Entity Linking with Deep Reinforcement Learning" }, { "answers": [ "", "" ], "context": "Before introducing our model, we first define the entity linking task.
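Returning to the FCM excerpt above (id 157), which cuts off at "combined using a gate": a sketch of that gating step as it is commonly formulated for FCM; the exact parametrization here is an assumption, since the formula itself is missing from the excerpt:

```python
import torch
import torch.nn as nn

class FormContextGate(nn.Module):
    """Sketch of a learned gate mixing form and context embeddings.
    The parametrization is an assumption; the excerpt omits the formula."""
    def __init__(self, d: int):
        super().__init__()
        self.w = nn.Linear(2 * d, 1)  # learned gate parameters

    def forward(self, v_form: torch.Tensor, v_context: torch.Tensor) -> torch.Tensor:
        # alpha in (0, 1) decides how much to trust the surface form vs. the context
        alpha = torch.sigmoid(self.w(torch.cat([v_form, v_context], dim=-1)))
        return alpha * v_form + (1 - alpha) * v_context
```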
Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace$, each mention $m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace$. The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ or return "NIL" if there is no correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.", "id": 161, "question": "What datasets are used for evaluation?", "title": "Joint Entity Linking with Deep Reinforcement Learning" }, { "answers": [ "" ], "context": "Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., e_t^k\rbrace$, we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we first select its $n$ surrounding words, and represent them as word embeddings using a pre-trained lookup table BIBREF11. Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace$ as a fixed-size vector $V_{m_t}$. The description of the entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of the entity, there is much other valuable information in the knowledge base. To make full use of this information, many researchers have trained entity embeddings by combining the description, category, and relationships of entities. As shown in BIBREF0, entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$.", "id": 162, "question": "what are the mentioned cues?", "title": "Joint Entity Linking with Deep Reinforcement Learning" }, { "answers": [ "" ], "context": "The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation.", "id": 163, "question": "How did the author's work rank among other submissions on the challenge?", "title": "Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b" }, { "answers": [ "classification, regression, neural methods", "" ], "context": "The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0. Every year there has been a task about semantic indexing (task a) and another about question answering (task b), and occasionally there have been additional tasks.
The tasks defined for 2019 are:", "id": 164, "question": "What approaches without reinforcement learning have been tried?", "title": "Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b" }, { "answers": [ "" ], "context": "Our past participation in BioASQ BIBREF1, BIBREF2 and this paper focus on extractive approaches to summarisation. Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets have very high ROUGE scores, thus suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1. Our past participating systems used regression approaches using the following framework:", "id": 165, "question": "What classification approaches were experimented with for this task?", "title": "Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b" }, { "answers": [ "" ], "context": "Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.", "id": 166, "question": "Did classification models perform better than the previous regression ones?", "title": "Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b" }, { "answers": [ "" ], "context": "The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2, BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to unify the two resources, permitting a joint leveraging of both the token-level UD treebanks and the type-level UniMorph lookup tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.", "id": 167, "question": "What are the main sources of recall errors in the mapping?", "title": "Marrying Universal Dependencies and Universal Morphology" }, { "answers": [ "" ], "context": "Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes.
The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.", "id": 168, "question": "Do they look for inconsistencies between different languages' annotations in UniMorph?", "title": "Marrying Universal Dependencies and Universal Morphology" }, { "answers": [ "" ], "context": "Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.", "id": 169, "question": "Do they look for inconsistencies between different UD treebanks?", "title": "Marrying Universal Dependencies and Universal Morphology" }, { "answers": [ "Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur", "" ], "context": "The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included in the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.", "id": 170, "question": "Which languages do they validate on?", "title": "Marrying Universal Dependencies and Universal Morphology" }, { "answers": [ "" ], "context": "Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions often follows the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.", "id": 171, "question": "Does the paper evaluate any adjustment to improve the prediction accuracy of face and audio features?", "title": "Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning" }, { "answers": [ "" ], "context": "A common approach to encoding emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducibility of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to performing the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences.
kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.", "id": 172, "question": "How is face and audio data analysis evaluated?", "title": "Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning" }, { "answers": [ "For emotion recognition from text they use the described neural network as baseline.\nFor audio and face there is no baseline." ], "context": "Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied; however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts on applying deep learning have increased for acoustic speech processing. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.", "id": 173, "question": "What is the baseline method for the task?", "title": "Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning" }, { "answers": [ "", "" ], "context": "Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, Blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).", "id": 174, "question": "What are the emotion detection tools used for audio and face input?", "title": "Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning" }, { "answers": [ "Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development", "" ], "context": "While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0, BIBREF1, BIBREF2, recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3, BIBREF4. In this paper, we re-assess the validity of these results, arguing that they are the result of a lack of system adaptation to low-resource settings.
Our main contributions are as follows:", "id": 175, "question": "what amounts of size were used on german-english?", "title": "Revisiting Low-Resource Neural Machine Translation: A Case Study" }, { "answers": [ "" ], "context": "Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions.", "id": 176, "question": "what were their experimental results in the low-resource dataset?", "title": "Revisiting Low-Resource Neural Machine Translation: A Case Study" }, { "answers": [ "" ], "context": "The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5, BIBREF6, BIBREF7, an autoencoding objective BIBREF8, BIBREF9, or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6, BIBREF10, BIBREF11. As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12, BIBREF13, BIBREF14, BIBREF4. Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21.", "id": 177, "question": "what are the methods they compare with in the korean-english dataset?", "title": "Revisiting Low-Resource Neural Machine Translation: A Case Study" }, { "answers": [ "" ], "context": "We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25, label smoothing BIBREF26, dropout BIBREF27, word dropout BIBREF28, layer normalization BIBREF29 and tied embeddings BIBREF30.", "id": 178, "question": "what pitfalls are mentioned in the paper?", "title": "Revisiting Low-Resource Neural Machine Translation: A Case Study" }, { "answers": [ "", "" ], "context": "Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have", "id": 179, "question": "Does the paper report the results of previous models applied to the same tasks?", "title": "Facilitating on-line opinion dynamics by mining expressions of causation.
The case of climate change debates on The Guardian" }, { "answers": [ "" ], "context": "The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.", "id": 180, "question": "How is the quality of the discussion evaluated?", "title": "Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian" }, { "answers": [ "" ], "context": "In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate change-related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection of the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice to study opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.", "id": 181, "question": "What is the technique used for text analysis and mining?", "title": "Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian" }, { "answers": [ "" ], "context": "In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.", "id": 182, "question": "What are the causal mapping methods employed?", "title": "Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian" }, { "answers": [ "" ], "context": "Hinglish is a linguistic blend of Hindi (a very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixed with English words or phrases. This makes analyzing the language very interesting.
Its rampant use on social media like Twitter, Facebook, online blogs and reviews has also led to its use in delivering hate speech and abuse on these platforms. We aim to find such content in social media, focusing on tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before they reach the public. This would be a great application of AI to a social cause and is thus motivating. An example of a simple, non-offensive message written in Hinglish could be:", "id": 183, "question": "What is the previous work's model?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "", "" ], "context": "From the modeling perspective there are a couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows a largely fuzzy set of rules which evolve and depend upon the users' preference. It doesn't have any formal definitions and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall the challenges posed by this problem are:", "id": 184, "question": "What dataset is used?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "", "The resulting dataset had 7934 messages for train and 700 messages for test." ], "context": "Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising 3 layers of 1D convolutions with filter sizes of 15, 12 and 10 and a kernel size of 3, followed by 2 dense fully connected layers of size 64 and 3. The first dense FC layer has ReLU activation while the last dense layer has Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve an accuracy of 83.9%, precision of 80.2%, and recall of 69.8%.", "id": 185, "question": "How big is the dataset?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "" ], "context": "In another localized setting of the Vietnamese language, Nguyen et al. in 2017 proposed a hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese using a CNN to capture short-term dependencies and an LSTM to capture long-term dependencies, and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture.", "id": 186, "question": "How is the dataset collected?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "" ], "context": "We used the HEOT dataset, obtained from a past study done by Mathur et al., where they annotated a set of cleaned tweets obtained from Twitter for the conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task since the amount of labeled data was very small.
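A sketch of the Ternary Trans-CNN architecture described above (id 185) in Keras; reading "filter sizes of 15, 12 and 10" as filter counts, as well as the embedding layer and the pooling step, are assumptions not stated in the excerpt:

```python
from tensorflow.keras import layers, models

def ternary_trans_cnn(seq_len: int, vocab_size: int, embed_dim: int = 100):
    # Sketch only: embedding layer, pooling step, and the reading of
    # "filter sizes of 15, 12 and 10" as filter counts are assumptions.
    return models.Sequential([
        layers.Embedding(vocab_size, embed_dim, input_length=seq_len),
        layers.Conv1D(15, 3, activation="relu"),
        layers.Conv1D(12, 3, activation="relu"),
        layers.Conv1D(10, 3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),    # first dense FC layer, ReLU
        layers.Dense(3, activation="softmax"),  # ternary output layer
    ])

# model = ternary_trans_cnn(seq_len=50, vocab_size=20000)
```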
A basic summary and examples of the data from the dataset are below:", "id": 187, "question": "Was each text augmentation technique experimented with individually?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "" ], "context": "The obtained dataset had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and the processes that were applied are stated below:", "id": 188, "question": "What models do previous work use?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "" ], "context": "We tested the performance of various model architectures by running our experiment over 100 times on a CPU-based compute, which was later migrated to a GPU-based compute to overcome the slow learning progress. Our universal metric for minimization was the validation loss, and we employed various operational techniques for optimizing the learning process. These processes and their implementation details will be discussed later; they were learning rate decay, early stopping, model checkpointing and reducing the learning rate on plateau.", "id": 189, "question": "Does the dataset contain content from various social media platforms?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "" ], "context": "For the loss function we chose categorical cross-entropy loss for finding the optimal weights/parameters of the model. Formally this loss function for the model is defined as below:", "id": 190, "question": "What dataset is used?", "title": "\"Hinglish\" Language -- Modeling a Messy Code-Mixed Language" }, { "answers": [ "", "" ], "context": "Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.", "id": 191, "question": "How do they demonstrate that the language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment?", "title": "How Language-Neutral is Multilingual BERT?" }, { "answers": [ "" ], "context": "Since the publication of mBERT BIBREF0, many positive experimental results were published.", "id": 192, "question": "Are language-specific and language-neutral components disjunctive?", "title": "How Language-Neutral is Multilingual BERT?" }, { "answers": [ "" ], "context": "Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.", "id": 193, "question": "How do they show that mBERT representations can be split into a language-specific component and a language-neutral component?", "title": "How Language-Neutral is Multilingual BERT?" }, { "answers": [ "" ], "context": "We employ five probing tasks to evaluate the language neutrality of the representations.", "id": 194, "question": "What challenges does this work present that must be solved to build better language-neutral representations?", "title": "How Language-Neutral is Multilingual BERT?"
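The categorical cross-entropy excerpt above (id 190) stops at "defined as below:" with the formula itself lost in extraction; the standard definition it presumably refers to, for $N$ samples and $C$ classes with one-hot targets $y$ and predicted probabilities $\hat{y}$, is:

```latex
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, \log \hat{y}_{i,c}
```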
}, { "answers": [ "" ], "context": "Empathetic chatbots are conversational agents that can understand user emotions and respond appropriately. Incorporating empathy into the dialogue system is essential to achieve better human-robot interaction because, naturally, humans express and perceive emotion in natural language to increase their sense of social bonding. In the early development stage of such conversational systems, most of the effort was put into developing hand-crafted rules of engagement. Recently, a modularized empathetic dialogue system, XiaoIce BIBREF0, achieved an impressive number of conversational turns per session, which was even higher than that of average conversations between humans. Despite the promising results of XiaoIce, this system is designed using a complex architecture with hundreds of independent components, such as Natural Language Understanding and Response Generation modules, using a tremendous amount of labeled data for training each of them.", "id": 195, "question": "What is the performance of their system?", "title": "CAiRE: An End-to-End Empathetic Chatbot" }, { "answers": [ "" ], "context": "As shown in Figure FIGREF4, our user interface is based solely on text inputs. Users can type anything in the input box and get a response immediately from the server. A report button is added at the bottom to allow users to report unethical dialogues, which will then be marked and saved in our back-end server separately. To facilitate the need for teaching our chatbot how to respond properly, we add an edit button next to the response. When the user clicks it, a new input box will appear, and the user can type in the appropriate response they think the chatbot should have replied with.", "id": 196, "question": "What evaluation metrics are used?", "title": "CAiRE: An End-to-End Empathetic Chatbot" }, { "answers": [ "" ], "context": "Due to the high demand for GPU computations during response generation, the computation cost needs to be well distributed across different GPUs to support multiple users. We adopt several approaches to maximize the utility of GPUs without crashing the system. Firstly, we set up two independent processes on each GTX 1080Ti, where we found GPU utilization to be around 90% at the highest, with both processes working stably. Secondly, we employ a load-balancing module to distribute the requests to idle processes based on their working loads. During stress testing, we simulated users sending requests every 2 seconds, and using 8 GPUs, we were able to support more than 50 concurrent requests.", "id": 197, "question": "What is the source of the dialogues?", "title": "CAiRE: An End-to-End Empathetic Chatbot" }, { "answers": [ "", "" ], "context": "We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. BooksCorpus contains over 7,000 unique unpublished books from a variety of genres. Pre-training on such a large contiguous text corpus enables the model to capture long-range dialogue context information. Furthermore, as the existing EmpatheticDialogue dataset BIBREF4 is relatively small, fine-tuning only on such a dataset will limit the chitchat topics of the model. Hence, we first integrate persona into CAiRE, and pre-train the model on PersonaChat BIBREF3, following a previous transfer-learning strategy BIBREF1.
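A minimal sketch of this transfer-learning recipe with the HuggingFace transformers library; the checkpoint name and the bare-bones training step are illustrative assumptions, not CAiRE's released code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: load the pre-trained GPT checkpoint, then continue
# language-model training on dialogue text (PersonaChat first, then
# EmpatheticDialogue), as in the two-stage recipe described above.
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-gpt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def fine_tune_step(dialogue_text: str) -> float:
    batch = tokenizer(dialogue_text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # next-token prediction loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```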
This pre-training procedure allows CAiRE to have a more consistent persona, thus improving the engagement and consistency of the model. We refer interested readers to the code repository recently released by HuggingFace. Finally, in order to optimize empathy in CAiRE, we fine-tune this pre-trained model using the EmpatheticDialogue dataset to help CAiRE understand users' feelings.", "id": 198, "question": "What pretrained LM is used?", "title": "CAiRE: An End-to-End Empathetic Chatbot" }, { "answers": [ "" ], "context": "Fueled by recent advances in deep learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.", "id": 199, "question": "What approaches do they propose?", "title": "Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?" }, { "answers": [ "" ], "context": "There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.", "id": 200, "question": "What faithfulness criteria do they propose?", "title": "Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?" }, { "answers": [ "", "" ], "context": "A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.", "id": 201, "question": "Which are the three assumptions in current approaches for defining faithfulness?", "title": "Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?" }, { "answers": [ "" ], "context": "While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.", "id": 202, "question": "Which are the key points in the guidelines for faithfulness evaluation?", "title": "Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?" }, { "answers": [ "" ], "context": "Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human-understandable features, the behavior of deep learning models is much harder to interpret.
Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.", "id": 203, "question": "Did they use the state-of-the-art model to analyze the attention?", "title": "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference" }, { "answers": [ "", "" ], "context": "In NLI BIBREF4, we are given two sentences, a premise and a hypothesis; the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.", "id": 204, "question": "What is the performance of their model?", "title": "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference" }, { "answers": [ "" ], "context": "In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.", "id": 205, "question": "How many layers are there in their model?", "title": "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference" }, { "answers": [ "" ], "context": "Attention has been widely used in many NLP tasks BIBREF12, BIBREF13, BIBREF14 and is probably one of the most critical parts that affect the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5, BIBREF15. Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of the Appendix). Unfortunately, the similarities are often the same regardless of the decision.", "id": 206, "question": "Did they compare with gradient-based methods?", "title": "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference" }, { "answers": [ "machine comprehension" ], "context": "Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.", "id": 207, "question": "What does MC stand for?", "title": "Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering" }, { "answers": [ "" ], "context": "Recent advances in reading comprehension and question answering have been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for cloze-style machine comprehension, in which only entities are removed and tested for comprehension. The Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is a more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents.
In this paper, we use SQuAD to evaluate our models.", "id": 208, "question": "how much of an improvement can the adaptation model get?", "title": "Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering" }, { "answers": [ "", "" ], "context": "Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more detail.", "id": 209, "question": "what is the architecture of the baseline model?", "title": "Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering" }, { "answers": [ "" ], "context": "The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information into question representation with TreeLSTM BIBREF13, BIBREF14. In general, a TreeLSTM can perform semantic composition over given syntactic structures.", "id": 210, "question": "What is the exact performance on SQuAD?", "title": "Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering" }, { "answers": [ "High correlation results range from 0.472 to 0.936" ], "context": "Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.", "id": 211, "question": "What are their correlation results?", "title": "SUM-QE: a BERT-based Summary Quality Estimation Model" }, { "answers": [ "" ], "context": "Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.", "id": 212, "question": "What dataset do they use?", "title": "SUM-QE: a BERT-based Summary Quality Estimation Model" }, { "answers": [ "", "BiGRUs with attention, ROUGE, Language model, and next sentence prediction" ], "context": "We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question.
DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).", "id": 213, "question": "What simpler models do they look at?", "title": "SUM-QE: a BERT-based Summary Quality Estimation Model" }, { "answers": [ "Grammaticality, non-redundancy, referential clarity, focus, structure & coherence" ], "context": "In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\\mathcal {R}$ predicts a quality score $S_{\\mathcal {Q}}$ as an affine transformation of $h$: $S_{\\mathcal {Q}} = \\mathbf {W} h + b$.", "id": 214, "question": "What linguistic quality aspects are addressed?", "title": "SUM-QE: a BERT-based Summary Quality Estimation Model" }, { "answers": [ "", "" ], "context": "Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievements of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.", "id": 215, "question": "What benchmark datasets are used for the link prediction task?", "title": "Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction" }, { "answers": [ "" ], "context": "In this section, we will describe the related work and the key differences between it and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs.", "id": 216, "question": "What are state-of-the art models for this task?", "title": "Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction" }, { "answers": [ "" ], "context": "Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.", "id": 217, "question": "How much better does the HAKE model perform than state-of-the-art methods?", "title": "Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction" }, { "answers": [ "" ], "context": "Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and design models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 proposed TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.", "id": 218, "question": "How are entities mapped onto the polar coordinate system?", "title": "Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction" }, { "answers": [ "", "" ], "context": "Removing the computer-human language barrier is an inevitable advancement that researchers have been striving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of a traditional programming language. On the naturalness of computer programming, D.
Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.” BIBREF0. Unfortunately, learning a programming language is still necessary to instruct a computer. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exist to solve this challenge (e.g., inter-conversion of different programming languages to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such a tool due to the following three reasons:", "id": 219, "question": "What additional techniques are incorporated?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "A parallel corpus where the source is an English expression of code and the target is Python code.", "" ], "context": "Code repositories (e.g., Git, SVN) flourished in the last decade, producing large volumes of code that allow data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during the software engineering and development process BIBREF1. This paper discusses the factors restricting the development of such a text-to-code conversion method and the problems that need to be solved:", "id": 220, "question": "What dataset do they use?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these languages. These languages were created to achieve different purposes and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because they have little or no abstraction, whereas high-level, or Object-Oriented Programming (OOP), languages are more diversified in syntax and expression, which makes them challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remain a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert code from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges in supporting the different features of both languages.", "id": 221, "question": "Do they compare to other models?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "One of the motivations behind this paper is that, as long as the topic is programming, there is a small, finite set of expressions used in human vocabulary. For instance, programmers express a for-loop in only a few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all code is executable, its textual human representation may not be, due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to describe them.
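As a concrete, invented illustration of this point, several distinct English expressions can align to the same small piece of Python in a text-code parallel corpus (all entries here are hypothetical):

```python
# Hypothetical parallel-corpus entries: different natural-language source
# expressions aligned to the same Python target code.
parallel_examples = [
    ("print the numbers from 0 to 9",           "for i in range(10):\n    print(i)"),
    ("loop over 0 through 9 and show each one", "for i in range(10):\n    print(i)"),
    ("assign 5 to the variable x",              "x = 5"),
]

for source_text, target_code in parallel_examples:
    print(f"{source_text!r} -> {target_code!r}")
```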
For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence, the challenge remains in processing human language accurately enough to understand it properly, which brings us to the next problem:", "id": 222, "question": "What is the architecture of the system?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expression plays an important role in this information extraction. For instance, in the case of a loop, what is the initial value? What is the step value? When will the loop terminate?", "id": 223, "question": "How long are expressions in layman's language?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code BIBREF9. A programming language is just like a language with a smaller vocabulary than a typical human language. For instance, the code vocabulary of the training dataset was 8,814 (including variable, function, and class names), whereas the English vocabulary to express the same code was 13,659 in total. Here, the programming language is treated just like another human language, and widely used SMT techniques are applied.", "id": 224, "question": "What additional techniques could be incorporated to further improve accuracy?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months or years of work that would otherwise require significant collaboration between bilingual linguists. Here, a neural-network-based machine translation model is used to translate regular text into programming code.", "id": 225, "question": "What programming language is the target language?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "" ], "context": "SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus contains 18,805 aligned pairs. In the source data, each line of code is expressed in the English language.
In the target data, the code is written in the Python programming language.", "id": 226, "question": "What dataset is used to measure accuracy?", "title": "Machine Translation from Natural Language to Code using Long-Short Term Memory" }, { "answers": [ "", "" ], "context": "“(GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)", "id": 227, "question": "Is text-to-image synthesis trained in a supervised or unsupervised manner?", "title": "A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis" }, { "answers": [ "" ], "context": "In the early stages of research, text-to-image synthesis was mainly carried out through a combined search and supervised learning process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use the correlation between keywords (or keyphrases) and images to identify informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple key artificial intelligence components, including natural language processing, computer vision, computer graphics, and machine learning.", "id": 228, "question": "What challenges remain unresolved?", "title": "A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis" }, { "answers": [ "" ], "context": "Although generative-model-based text-to-image synthesis provides much more realistic image synthesis results, image generation is still conditioned on limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.", "id": 229, "question": "What is the conclusion of the comparison of the proposed solutions?", "title": "A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis" }, { "answers": [ "Semantic Enhancement GANs: DC-GANs, MC-GAN\nResolution Enhancement GANs: StackGANs, AttnGAN, HDGAN\nDiversity Enhancement GANs: AC-GAN, TAC-GAN etc.\nMotion Enhancement GANs: T2S, T2V, StoryGAN" ], "context": "With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are an emerging research topic, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g.
using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.", "id": 230, "question": "What is the typical GAN architecture for each text-to-image synthesis group?", "title": "A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis" }, { "answers": [ "" ], "context": "Incorporating sub-word structures like substrings, morphemes and characters into the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .", "id": 231, "question": "Where do they employ feature-wise sigmoid gating?", "title": "Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study" }, { "answers": [ "" ], "context": "We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.", "id": 232, "question": "Which model architecture do they use to obtain representations?", "title": "Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study" }, { "answers": [ "" ], "context": "The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.", "id": 233, "question": "Which downstream sentence-level tasks do they evaluate on?", "title": "Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study" }, { "answers": [ "", "" ], "context": "We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as a feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.", "id": 234, "question": "Which similarity datasets do they use?", "title": "Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study" }, { "answers": [ "" ], "context": "Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.", "id": 235, "question": "Are there datasets with relation tuples annotated, how big are datasets available?", "title": "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction" }, { "answers": [ "" ], "context": "A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them.
Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both the EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.", "id": 236, "question": "Which one of two proposed approaches performed better in experiments?", "title": "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction" }, { "answers": [ "" ], "context": "In this task, the input to the system is a sequence of words, and the output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at the appropriate times. Our experiments show that the encoder-decoder models can achieve this quite effectively.", "id": 237, "question": "What is the previous work the authors refer to?", "title": "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction" }, { "answers": [ "", "" ], "context": "We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors, and (2) character embedding-based feature vectors. We use a word embedding layer $\\mathbf {E}_w \\in \\mathbb {R}^{\\vert V \\vert \\times d_w}$ and a character embedding layer $\\mathbf {E}_c \\in \\mathbb {R}^{\\vert A \\vert \\times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7, we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word.
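A minimal sketch of such a character-level feature extractor (a convolution over character embeddings followed by max-pooling over time), written in PyTorch; the dimensions and names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Extracts a d_f-dimensional feature vector per word from its characters."""
    def __init__(self, num_chars, d_c=32, d_f=50, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, d_c)
        self.conv = nn.Conv1d(d_c, d_f, kernel_size, padding=kernel_size // 2)

    def forward(self, char_ids):             # (batch, max_word_len)
        x = self.char_emb(char_ids)          # (batch, max_word_len, d_c)
        x = self.conv(x.transpose(1, 2))     # (batch, d_f, max_word_len)
        return torch.relu(x).max(dim=2).values  # max-pool over character positions

# Example: features for a batch of 4 words, each padded to 12 characters.
features = CharCNN(num_chars=100)(torch.randint(0, 100, (4, 12)))
print(features.shape)  # torch.Size([4, 50])
```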
Word embeddings and character embedding-based feature vectors are concatenated ($\\Vert $) to obtain the representation of the input tokens.", "id": 238, "question": "How much higher are the F1 scores compared to previous work?", "title": "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction" }, { "answers": [ "", "" ], "context": "[block]I.1em", "id": 239, "question": "what were the baselines?", "title": "Learning to Rank Scientific Documents from the Crowd" }, { "answers": [ "" ], "context": "The number of biomedical research papers published has increased dramatically in recent years. As of October 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone. It has become impossible for any one person to actually read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find articles that are most related to another. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.", "id": 240, "question": "what is the supervised model they developed?", "title": "Learning to Rank Scientific Documents from the Crowd" }, { "answers": [ "" ], "context": "In order to develop and evaluate ranking algorithms, we need a benchmark dataset. However, to the best of our knowledge, no openly available benchmark dataset for bibliographic query-by-document systems exists. We therefore created such a benchmark dataset.", "id": 241, "question": "what is the size of this built corpus?", "title": "Learning to Rank Scientific Documents from the Crowd" }, { "answers": [ "" ], "context": "Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 and sketched in code below.", "id": 242, "question": "what crowdsourcing platform is used?", "title": "Learning to Rank Scientific Documents from the Crowd" }, { "answers": [ "", "" ], "context": "In recent years, social media, forums, blogs and other forms of online communication tools have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinions about a company's brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 . Product reviews are important for business owners, who can make business decisions accordingly by automatically classifying users' opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews but can be applied to different fields such as news, politics, sports, etc. For example, in online political debates, sentiment analysis can be used to identify people's opinions on a certain election candidate or political parties BIBREF1 BIBREF2 BIBREF3 .
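The retrieve-then-rerank pipeline just described can be sketched in a few lines; the scoring function here is a stand-in for any learned ranker, not a specific model from the paper:

```python
def rerank(query, candidates, score_fn):
    """Reorder retrieved candidates so the most relevant appear first.

    `score_fn(query, doc)` is any learned relevance model (a stand-in here).
    """
    return sorted(candidates, key=lambda doc: score_fn(query, doc), reverse=True)

# Toy usage: score candidates by word overlap with the query.
docs = ["ranking methods for search", "a paper about proteins", "search engine ranking"]
overlap = lambda q, d: len(set(q.split()) & set(d.split()))
print(rerank("search ranking", docs, overlap))
```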
In this context, sentiment analysis has been widely used in different languages by using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.", "id": 243, "question": "Which deep learning model performed better?", "title": "Exploiting Deep Learning for Persian Sentiment Analysis" }, { "answers": [ "" ], "context": "In the literature, extensive research has been carried out to develop novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising autoencoders: high computational cost and the lack of scalability to high-dimensional features. Their experimental results showed the effectiveness of the proposed autoencoders, achieving accuracy of up to 87%. Zhai et al. BIBREF11 proposed a five-layer autoencoder for learning the specific representation of textual data. The autoencoders are generalised using a loss function, with a discriminative loss function derived from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving an accuracy of up to 85%. Sun et al. BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Naïve Bayes models, reporting accuracies of 83.1%, 63.9% and 67.8%, respectively.", "id": 244, "question": "By how much did the results improve?", "title": "Exploiting Deep Learning for Persian Sentiment Analysis" }, { "answers": [ "" ], "context": "The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of the dataset (60%) was used to train the neural network, and the rest of the data (40%) was used to test and validate the performance of the trained neural network (testing set (30%), validation set (10%)); a code sketch of this split follows below. There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.", "id": 245, "question": "What was their performance on the dataset?", "title": "Exploiting Deep Learning for Persian Sentiment Analysis" }, { "answers": [ "" ], "context": "Sentiment analysis has been used extensively for a wide range of real-world applications, ranging from product reviews and survey feedback to business intelligence and operational improvements. However, the majority of research efforts are devoted to English only, while information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for the Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with the state-of-the-art shallow MLP-based machine learning model. Simulation results revealed the outperformance of our proposed CNN model over autoencoders and MLP.
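A sketch of the 60/30/10 train/test/validation split described above, using scikit-learn; this is a plausible implementation, not the authors' code:

```python
from sklearn.model_selection import train_test_split

def split_60_30_10(reviews, labels, seed=42):
    # First hold out 40% of the data, then divide that 40% into
    # 30% test and 10% validation (0.25 of 40% = 10% of the whole).
    x_train, x_rest, y_train, y_rest = train_test_split(
        reviews, labels, test_size=0.4, random_state=seed, stratify=labels)
    x_test, x_val, y_test, y_val = train_test_split(
        x_rest, y_rest, test_size=0.25, random_state=seed, stratify=y_rest)
    return (x_train, y_train), (x_test, y_test), (x_val, y_val)
```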
In the future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) networks and LSTM-CNNs to further evaluate performance on our novel Persian dataset.", "id": 246, "question": "How large is the dataset?", "title": "Exploiting Deep Learning for Persian Sentiment Analysis" }, { "answers": [ "", "" ], "context": "0pt0.03.03 *", "id": 247, "question": "Did the authors use crowdsourcing platforms?", "title": "Talk the Walk: Navigating New York City through Grounded Dialogue" }, { "answers": [ "" ], "context": "As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.", "id": 248, "question": "How was the dataset collected?", "title": "Talk the Walk: Navigating New York City through Grounded Dialogue" }, { "answers": [ "English" ], "context": "We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.", "id": 249, "question": "What language do the agents talk in?", "title": "Talk the Walk: Navigating New York City through Grounded Dialogue" }, { "answers": [ "" ], "context": "For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random (see the code sketch below).", "id": 250, "question": "What evaluation metrics did the authors look at?", "title": "Talk the Walk: Navigating New York City through Grounded Dialogue" }, { "answers": [ "" ], "context": "We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 .
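A minimal sketch of the environment-sampling step referenced above (subgrid boundaries, random start pose, random target); all names and the orientation encoding are illustrative assumptions:

```python
import random

def sample_task(grid_width=10, grid_height=10, sub_size=4):
    # Pick the top-left corner of a 4x4 subgrid, which fixes the boundaries.
    left = random.randrange(grid_width - sub_size + 1)
    top = random.randrange(grid_height - sub_size + 1)
    # Sample a start location + orientation and a target location at random.
    start = (random.randrange(left, left + sub_size),
             random.randrange(top, top + sub_size),
             random.choice(["N", "E", "S", "W"]))
    target = (random.randrange(left, left + sub_size),
              random.randrange(top, top + sub_size))
    boundaries = (left, top, left + sub_size - 1, top + sub_size - 1)
    return boundaries, start, target

boundaries, start, target = sample_task()
print(boundaries, start, target)
```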
We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.", "id": 251, "question": "What data did they use?", "title": "Talk the Walk: Navigating New York City through Grounded Dialogue" }, { "answers": [ "", "" ], "context": "In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.", "id": 252, "question": "Do the authors report results only on English data?", "title": "Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks" }, { "answers": [ "" ], "context": "It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.", "id": 253, "question": "How is the accuracy of the system measured?", "title": "Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks" }, { "answers": [ "" ], "context": "It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.", "id": 254, "question": "How is an incoming claim used to retrieve similar factchecked claims?", "title": "Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks" }, { "answers": [ "" ], "context": "In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To evaluate the embeddings, we computed the Euclidean distance between the two questions under various embeddings, in order to study the distance between semantically similar and dissimilar questions.", "id": 255, "question": "What existing corpus is used for comparison in these experiments?", "title": "Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks" }, { "answers": [ "" ], "context": "We decided to follow a methodology based upon the DBScan method of clustering BIBREF24 . DBScan considers all distances between pairs of points. If a distance is under INLINEFORM0 , the two points are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because, unlike other methods such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster (see the code sketch below).", "id": 256, "question": "What are the components in the factchecking algorithm?", "title": "Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks" }, { "answers": [ "" ], "context": "Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.", "id": 257, "question": "What is the baseline?", "title": "RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension" }, { "answers": [ "" ], "context": "We formally define RC-QED as follows:", "id": 258, "question": "What dataset was used in the experiment?", "title": "RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension" }, { "answers": [ "", "" ], "context": "This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts the problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.", "id": 259, "question": "Did they use any crowdsourcing platform?", "title": "RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension" }, { "answers": [ "" ], "context": "To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.", "id": 260, "question": "How was the dataset annotated?", "title": "RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension" }, { "answers": [ "" ], "context": "Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).", "id": 261, "question": "What is the source of the proposed dataset?", "title": "RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension" }, { "answers": [ "" ], "context": "Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information.
This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets\"). With over 400 million tweets per day on Twitter, microblog users generate large amounts of data, which cover very rich topics ranging from politics and sports to celebrity gossip. Because the user-generated content on microblogs covers rich topics and expresses the sentiment/opinions of the masses, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze people's sentiments and opinions over social networks.", "id": 262, "question": "How many label options are there in the multi-label task?", "title": "Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds" }, { "answers": [ "" ], "context": "Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.", "id": 263, "question": "What is the interannotator agreement of the crowd-sourced users?", "title": "Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds" }, { "answers": [ "", "" ], "context": "Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shortened words, etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@\" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.", "id": 264, "question": "Who are the experts?", "title": "Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds" }, { "answers": [ "" ], "context": "As noted earlier, tweets are generally noisy and thus require some preprocessing before use. Several filters were applied to the tweets, such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify them by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We replace the link address with the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word for emphasis. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences.
We replace them with 3 occurrences rather than 2 so that we can distinguish exaggerated usage from regular usage. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as \"I hate Donald Trump, but I will vote for him\". This is done so that there is no ambiguity. (5) Retweets: On Twitter, tweets of a person are often copied and posted by another user. This is known as retweeting, and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet; a code sketch of these filters follows below.", "id": 265, "question": "Who is the crowd in these experiments?", "title": "Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds" }, { "answers": [ "" ], "context": "Our analysis of the debates is 3-fold, including sentiment analysis, outcome prediction, and trend analysis.", "id": 266, "question": "How do you establish the ground truth of who won a debate?", "title": "Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds" }, { "answers": [ "Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively." ], "context": "In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2, have been built to represent complex human knowledge about the real world in machine-readable format. The facts in KGs are usually encoded in the form of triples $(\\textit {head entity}, relation, \\textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g., $(\\textit {Donald Trump}, Born In, \\textit {New York City})$. Figure FIGREF2 shows a subgraph of a knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\\textit {Born}$ and $\\textit {Abstract}$ in Figure FIGREF2, and others indicate relations between entities (where both the head and tail entities are real-world entities). Hence, the relationships in a KG can be divided into relations and attributes, and correspondingly into two types of triples, namely relation triples and attribute triples BIBREF3. A relation triple in a KG represents a relationship between entities, e.g., $(\\textit {Donald Trump}, Father of, \\textit {Ivanka Trump})$, while an attribute triple denotes a literal attribute value of an entity, e.g., $(\\textit {Donald Trump}, Born, \\textit {\"June 14, 1946\"})$.", "id": 267, "question": "How much better is performance of proposed method than state-of-the-art methods in experiments?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "In recent years, there have been many efforts in knowledge graph embedding, aiming to encode entities and relations into a continuous low-dimensional embedding space. Knowledge graph embedding provides a very simple and effective way to apply KGs in various artificial intelligence applications. Hence, knowledge graph embedding has attracted much research attention in recent years.
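Returning to the tweet-preprocessing filters above, here is a minimal sketch of filters (1)-(3) using regular expressions; this is a plausible implementation, not the authors' code:

```python
import re

def preprocess(tweet):
    tweet = re.sub(r"@\w+", "USER", tweet)          # (1) usernames -> USER
    tweet = re.sub(r"https?://\S+", "URL", tweet)   # (2) links -> URL
    tweet = re.sub(r"(.)\1{2,}", r"\1\1\1", tweet)  # (3) >2 repeats -> exactly 3
    return tweet

print(preprocess("@michael loooooool see http://t.co/x"))
# -> "USER loool see URL"
```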
The general methodology is to define a score function $f_r(h,t)$ for the triples, which implies some type of transformation on $\\textbf {h}$ and $\\textbf {t}$, and then learn the representations of entities and relations by minimizing a loss function based on it. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes the embedding $\\textbf {t}$ of the tail entity should be close to the head entity's embedding $\\textbf {h}$ plus the relation vector $\\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section “Introduction\". Hence, TransE defines the following loss function: $f_r(h,t) = \\Vert \\textbf {h} + \\textbf {r} - \\textbf {t} \\Vert $ (see also the code sketch below).", "id": 268, "question": "What further analysis is done?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Since both relation and attribute triples denote important information about an entity, we will take both of them into consideration in the task of learning representations of entities. We let $I$ denote the set of IRIs (Internationalized Resource Identifiers), $B$ the set of blank nodes, and $L$ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:", "id": 269, "question": "What seven state-of-the-art methods are used for comparison?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "", "" ], "context": "In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embedding of entities, relations and values in KGs, the design of embedding propagation layers based on graph attention networks, and the loss functions for the link prediction and entity classification tasks, respectively.", "id": 270, "question": "What three datasets are used to measure performance?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, KANE takes all triples of the knowledge graph as input. The task of the attribute embedding layer is to embed every value in the attribute triples into a continuous vector space while preserving the semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks, link prediction and entity classification.", "id": 271, "question": "How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "The value in attribute triples is usually a sentence or a word.
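A sketch of the TransE score just given, together with the standard margin-based ranking objective used to train it, in NumPy; the margin form is the usual TransE formulation and the variable names are illustrative:

```python
import numpy as np

def transe_score(h, r, t):
    # f_r(h, t) = ||h + r - t||: small when (h, r, t) is a plausible triple.
    return np.linalg.norm(h + r - t)

def margin_loss(pos, neg, gamma=1.0):
    # Standard TransE ranking loss: push corrupted triples at least
    # gamma further away than true triples.
    h, r, t = pos
    h_neg, r_neg, t_neg = neg
    return max(0.0, gamma + transe_score(h, r, t) - transe_score(h_neg, r_neg, t_neg))

rng = np.random.default_rng(0)
h, r, t, t_bad = (rng.normal(size=50) for _ in range(4))
print(margin_loss((h, r, t), (h, r, t_bad)))
```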
To encode the representation of a value from its sentence or word, we need to encode the variable-length sentence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.", "id": 272, "question": "What are recent works on knowledge graph embeddings that the authors mention?", "title": "Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.", "id": 273, "question": "Do they report results only on English data?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "" ], "context": "Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 , crime BIBREF1 , suicide attempts BIBREF2 , drunk driving BIBREF3 , and risky sexual behaviour BIBREF4 . The authors of BIBREF2 state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.", "id": 274, "question": "Do the authors mention any confounds to their study?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "Human evaluators" ], "context": "Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:", "id": 275, "question": "What baseline model is used?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count) and Sentiment Ratio", "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio." ], "context": "We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using the Twitter API (https://dev.twitter.com/). We remove non-Unicode characters and eliminate tweets that contain hyperlinks, as well as tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request.
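A sketch of this hashtag-supervision pipeline (filtering plus label extraction); the hashtag sets here are illustrative placeholders, not the paper's actual label hashtags:

```python
import re

DRUNK_TAGS, SOBER_TAGS = {"#drunk", "#drank"}, {"#sober"}  # illustrative label hashtags

def label_and_clean(tweet):
    """Return (cleaned_tweet, label) or None if the tweet is filtered out."""
    if "http" in tweet:                       # drop tweets containing hyperlinks
        return None
    tags = {t.lower() for t in re.findall(r"#\w+", tweet)}
    label = "drunk" if tags & DRUNK_TAGS else "sober" if tags & SOBER_TAGS else None
    if label is None:
        return None
    for tag in DRUNK_TAGS | SOBER_TAGS:       # remove label hashtags: labels, not features
        tweet = re.sub(re.escape(tag), "", tweet, flags=re.IGNORECASE)
    if len(tweet.split()) < 6:                # drop tweets shorter than 6 words
        return None
    return tweet.strip(), label
```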
As a result, we create three datasets, each using a different strategy for sober tweets, as follows:", "id": 276, "question": "What stylistic features are used to detect drunk texts?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "" ], "context": "The complete set of features is shown in Table TABREF7 . There are two sets of features: (a) N-gram features, and (b) Stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.", "id": 277, "question": "Is the data acquired under distant supervision verified by humans at any stage?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "" ], "context": "Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR, respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.", "id": 278, "question": "What hashtags are used for distant supervision?", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "" ], "context": "Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In the case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows the top stylistic features when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.", "id": 279, "question": "Do the authors equate drunk tweeting with drunk texting? ", "title": "A Computational Approach to Automatic Prediction of Drunk Texting" }, { "answers": [ "", "" ], "context": "Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .", "id": 280, "question": "What corpus was the source of the OpenIE extractions?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge.
47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge" ], "context": "We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).", "id": 281, "question": "What is the accuracy of the proposed technique?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.", "id": 282, "question": "Is an entity linking process used?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).", "id": 283, "question": "Are the OpenIE extractions all triples?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.", "id": 284, "question": "What method was used to generate the OpenIE extractions?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \\cup T^{\\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .", "id": 285, "question": "Can the method answer multi-hop questions?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .", "id": 286, "question": "What was the textual source to which OpenIE was applied?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. 
The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.", "id": 287, "question": "What OpenIE method was used to generate the extractions?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "We describe four classes of failures that we observed, and the future work they suggest.", "id": 288, "question": "Is their method capable of multi-hop reasoning?", "title": "Answering Complex Questions Using Open Information Extraction" }, { "answers": [ "" ], "context": "Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.", "id": 289, "question": "Do the authors offer any hypothesis about why the dense mode outperformed the sparse one?", "title": "An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages" }, { "answers": [ "" ], "context": "Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.", "id": 290, "question": "What evaluation is conducted?", "title": "An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages" }, { "answers": [ "" ], "context": "Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications.", "id": 291, "question": "Which corpus of synsets are used?", "title": "An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages" }, { "answers": [ "" ], "context": "A sentence is represented as a list of spans. A span is a quadruple: INLINEFORM0 , where INLINEFORM1 is the word or the token, INLINEFORM2 is the part of speech tag, INLINEFORM3 is the lemma, INLINEFORM4 is the position of the word in the sentence. These data are provided by tokenizer, part-of-speech tagger, and lemmatizer that are specific for the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.", "id": 292, "question": "What measure of semantic similarity is used?", "title": "An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages" }, { "answers": [ "The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system." ], "context": "Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. 
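A small sketch of the span representation described above: each span is a (token, POS tag, lemma, position) quadruple, and WSD output is a map from spans to sense identifiers. The sense IDs below are made-up placeholders, not entries from any real inventory.

```python
from typing import Dict, NamedTuple

class Span(NamedTuple):
    token: str
    pos: str
    lemma: str
    position: int

sentence = [Span("banks", "NOUN", "bank", 0), Span("lend", "VERB", "lend", 1)]

def disambiguate(spans) -> Dict[Span, str]:
    # A real system would score candidate synsets for each lemma; here we
    # just attach a dummy identifier to every content word.
    return {s: f"synset:{s.lemma}#1" for s in spans if s.pos in {"NOUN", "VERB"}}

print(disambiguate(sentence))
```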
Structured sources, including Knowledge Bases (KBs) such as Freebase BIBREF1 , are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete BIBREF2 , BIBREF3 , and hence can only answer a limited subset of all possible factoid questions.", "id": 293, "question": "Which retrieval system was used for baselines?", "title": "Quasar: Datasets for Question Answering by Search and Reading" }, { "answers": [ "" ], "context": "Named Entity Recognition (NER) is one of information extraction subtasks that is responsible for detecting entity elements from raw text and can determine the category in which the element belongs, these categories include the names of persons, organizations, locations, expressions of times, quantities, monetary values and percentages.", "id": 294, "question": "What word embeddings were used?", "title": "Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models" }, { "answers": [ "" ], "context": "Previously publicly available NER systems do not use DNN, for example, the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as Maximum Entropy Markov Model (MEMM), Support Vector Machine (SVM) and Conditional Random Field (CRF). In particular, most of the toolkits for NER task attempted to use MEMM BIBREF4, and CRF BIBREF5 to solve this problem.", "id": 295, "question": "What type of errors were produced by the BLSTM-CNN-CRF system?", "title": "Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models" }, { "answers": [ "Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF " ], "context": "The results of our analysis experiments are reported in precision and recall over all labels (name of person, location, organization and miscellaneous). The process of analyzing errors has 2 steps:", "id": 296, "question": "How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?", "title": "Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models" }, { "answers": [ "Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question" ], "context": "Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. 
It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.", "id": 297, "question": "What supplemental tasks are used for multitask learning?", "title": "Recurrent Neural Network Encoder with Attention for Community Question Answering" }, { "answers": [ "" ], "context": "Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as answer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment.", "id": 298, "question": "Is the improvement actually coming from using an RNN?", "title": "Recurrent Neural Network Encoder with Attention for Community Question Answering" }, { "answers": [ "0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C" ], "context": "In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.", "id": 299, "question": "How large is the performance gap between their approach and the strong handcrafted method?", "title": "Recurrent Neural Network Encoder with Attention for Community Question Answering" }, { "answers": [ "" ], "context": "LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows: $i_t = \sigma (W_i x_t + U_i h_{t-1} + b_i)$ , $f_t = \sigma (W_f x_t + U_f h_{t-1} + b_f)$ , $o_t = \sigma (W_o x_t + U_o h_{t-1} + b_o)$ , $\tilde{c}_t = \tanh (W_c x_t + U_c h_{t-1} + b_c)$ , $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$ , $h_t = o_t \odot \tanh (c_t)$ , where $\sigma $ is the logistic sigmoid and $\odot $ denotes element-wise multiplication.", "id": 300, "question": "What is a strong feature-based method?", "title": "Recurrent Neural Network Encoder with Attention for Community Question Answering" }, { "answers": [ "" ], "context": "A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.", "id": 301, "question": "Did they experiment in other languages?", "title": "Recurrent Neural Network Encoder with Attention for Community Question Answering" }, { "answers": [ "" ], "context": "Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. 
For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.", "id": 302, "question": "Do they use multi-attention heads?", "title": "Attentional Encoder Network for Targeted Sentiment Classification" }, { "answers": [ "Proposed model has 1.16 million parameters and 11.04 MB." ], "context": "Research approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.", "id": 303, "question": "How big is their model?", "title": "Attentional Encoder Network for Targeted Sentiment Classification" }, { "answers": [ "" ], "context": "Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 , the goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .", "id": 304, "question": "How is their model different from BERT?", "title": "Attentional Encoder Network for Targeted Sentiment Classification" }, { "answers": [ "" ], "context": "Opinion mining BIBREF0 is a huge field that covers many NLP tasks, including sentiment analysis BIBREF1 , aspect extraction BIBREF2 , and opinion summarization BIBREF3 , among others. Despite the vast literature on opinion mining, the task of suggestion mining has received little attention. Suggestion mining BIBREF4 is the task of collecting and categorizing suggestions about a certain product. This is important because while opinions indirectly give hints on how to improve a product (e.g. analyzing reviews), suggestions are direct improvement requests (e.g. tips, advice, recommendations) from people who have used the product.", "id": 305, "question": "What datasets were used?", "title": "ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples" }, { "answers": [ "" ], "context": "We present our model JESSI, which stands for Joint Encoders for Stable Suggestion Inference, shown in Figure FIGREF4 . Given a sentence INLINEFORM0 , JESSI returns a binary suggestion label INLINEFORM1 . JESSI consists of four important components: (1) A BERT-based encoder that leverages general knowledge acquired from a large pre-trained language model, (2) A CNN-based encoder that learns task-specific sentence representations, (3) an MLP classifier that predicts the label given the joint encodings, and (4) a domain adversarial training module that prevents the model from distinguishing between the two domains.", "id": 306, "question": "How did they do compared to other teams?", "title": "ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples" }, { "answers": [ "" ], "context": "Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.", "id": 307, "question": "Which tested technique was the worst performer?", "title": "DENS: A Dataset for Multi-class Emotion Analysis" }, { "answers": [ "9" ], "context": "Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. 
Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.", "id": 308, "question": "How many emotions do they look at?", "title": "DENS: A Dataset for Multi-class Emotion Analysis" }, { "answers": [ "" ], "context": "In this section, we describe the process used to collect and annotate the dataset.", "id": 309, "question": "What are the baseline benchmarks?", "title": "DENS: A Dataset for Multi-class Emotion Analysis" }, { "answers": [ "" ], "context": "The dataset is annotated based on a modified Plutchik’s wheel of emotions.", "id": 310, "question": "What is the size of this dataset?", "title": "DENS: A Dataset for Multi-class Emotion Analysis" }, { "answers": [ "" ], "context": "We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.", "id": 311, "question": "How many annotators were there?", "title": "DENS: A Dataset for Multi-class Emotion Analysis" }, { "answers": [ "" ], "context": "State-of-the-art speech recognition accuracy has significantly improved over the past few years since the application of deep neural networks BIBREF0 , BIBREF1 . Recently, it has been shown that with the application of both neural network acoustic model and language model, an automatic speech recognizer can approach human-level accuracy on the Switchboard conversational speech recognition benchmark using around 2,000 hours of transcribed data BIBREF2 . While progress is mainly driven by well engineered neural network architectures and a large amount of training data, the hidden Markov model (HMM) that has been the backbone for speech recognition for decades is still playing a central role. Though tremendously successful for the problem of speech recognition, the HMM-based pipeline factorizes the whole system into several components, and building these components separately may be less computationally efficient when developing a large-scale system from thousands to hundred of thousands of examples BIBREF3 .", "id": 312, "question": "Can SCRF be used to pretrain the model?", "title": "Multitask Learning with CTC and Segmental CRF for Speech Recognition" }, { "answers": [ "" ], "context": "A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. 
Without additional context, this choice must be made arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.", "id": 313, "question": "What conclusions are drawn from the syntactic analysis?", "title": "Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection" }, { "answers": [ "" ], "context": "Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.", "id": 314, "question": "What type of syntactic analysis is performed?", "title": "Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection" }, { "answers": [ "" ], "context": "Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge into the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure, as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.", "id": 315, "question": "How is it demonstrated that the correct gender and number information is injected using this system?", "title": "Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection" }, { "answers": [ "" ], "context": "To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all spoken by a female speaker addressing a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. 
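To make the injection concrete, a minimal sketch; the exact prefix wording and the prefix-removal step are illustrative (the experiments compare several prefixes), and `translate` is a stand-in for the Cloud API call:

```python
# A sketch of black-box context injection: prepend a short phrase that fixes
# the speaker's gender and the audience's number, translate, then strip the
# translated prefix from the output.
def add_context(sentence: str, speaker: str = "she", audience: str = "them") -> str:
    return f'{speaker} said to {audience}: "{sentence}"'

def translate(text: str) -> str:
    return text  # placeholder: call the MT system here

def translate_with_context(sentence: str) -> str:
    output = translate(add_context(sentence))
    # The injected prefix is translated too; remove it from the output,
    # e.g. by splitting on the first colon (language-specific in practice).
    return output.split(":", 1)[-1].strip().strip('"')

print(translate_with_context("I love language"))
```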
We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.", "id": 316, "question": "Which neural machine translation system is used?", "title": "Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection" }, { "answers": [ "" ], "context": "We compare the different conditions using BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . The table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said”, which cues the model to generate more “spoken” language that matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.", "id": 317, "question": "What are the components of the black-box context injection system?", "title": "Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection" }, { "answers": [ "" ], "context": "Although development of the first speech recognition systems began half a century ago, there has been a significant increase in the accuracy of ASR systems and in the number of their applications over the past ten years, even for low-resource languages BIBREF0 , BIBREF1 .", "id": 318, "question": "What normalization techniques are mentioned?", "title": "Exploring End-to-End Techniques for Low-Resource Speech Recognition" }, { "answers": [ "" ], "context": "Development of CTC-based systems originates from the paper BIBREF3 where CTC loss was introduced. This loss is the total probability of a label sequence given an observation sequence, taking into account all possible alignments induced by a given word sequence.", "id": 319, "question": "What features do they experiment with?", "title": "Exploring End-to-End Techniques for Low-Resource Speech Recognition" }, { "answers": [ "" ], "context": "For all experiments we used conversational speech from IARPA Babel Turkish Language Pack (LDC2016S10). This corpus contains about 80 hours of transcribed speech for training and 10 hours for development. The dataset is rather small compared to widely used benchmarks for conversational speech: English Switchboard corpus (300 hours, LDC97S62) and Fisher dataset (2000 hours, LDC2004S13 and LDC2005S13).", "id": 320, "question": "Which architecture is their best model?", "title": "Exploring End-to-End Techniques for Low-Resource Speech Recognition" }, { "answers": [ "" ], "context": "We explored the behavior of different neural network architectures in the case when only rather small amounts of data are available. 
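Before turning to the specific architectures, a minimal PyTorch sketch of the CTC objective described above; the shapes, label inventory, and random inputs are hypothetical:

```python
# nn.CTCLoss sums probabilities over all alignments via the forward-backward
# algorithm, matching the "total probability over alignments" description.
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 29, 12   # frames, batch, labels (0 = blank), target length
logits = torch.randn(T, N, C, requires_grad=True)
log_probs = logits.log_softmax(dim=-1)

targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())
```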
We used multi-layer bidirectional LSTM networks, tried a fully-convolutional architecture similar to Wav2Letter BIBREF8 , and explored a DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14 .", "id": 321, "question": "What kind of spontaneous speech is used?", "title": "Exploring End-to-End Techniques for Low-Resource Speech Recognition" }, { "answers": [ "Only MTMSN specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first, train a dedicated categorical variable to predict the number of spans to extract, and second, generalize the single-span head method of extracting a span" ], "context": "The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.", "id": 322, "question": "What approach did previous models use for multi-span questions?", "title": "Tag-based Multi-Span Extraction in Reading Comprehension" }, { "answers": [ "" ], "context": "Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".", "id": 323, "question": "How do they use sequence tagging to answer multi-span questions?", "title": "Tag-based Multi-Span Extraction in Reading Comprehension" }, { "answers": [ "For single-span questions, the proposed LARGE-SQUAD improves performance over the MTMSNlarge baseline by 2.1 EM and 1.55 F1.\nFor number type questions, the MTMSNlarge baseline improves over LARGE-SQUAD by 3.11 EM and 2.98 F1. \nFor date questions, LARGE-SQUAD improves by 2.02 EM but MTMSNlarge improves by 4.39 F1." ], "context": "Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, presumably arrived at by performing arithmetic reasoning on the input. 
We want to estimate $p(y;x^P,x^Q)$.", "id": 324, "question": "What is the difference in performance between the proposed model and the state of the art on other question types?", "title": "Tag-based Multi-Span Extraction in Reading Comprehension" }, { "answers": [ "The proposed model achieves EM 77.63 and F1 80.73 on the test set and EM 76.95 and F1 80.25 on the dev set" ], "context": "Assume there are $K$ answer heads in the model, with their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is $p(y;x^P,x^Q) = \sum _{z=1}^{K} p(z;x^P,x^Q)\,p(y|z;x^P,x^Q)$.", "id": 325, "question": "What is the performance of proposed model on entire DROP dataset?", "title": "Tag-based Multi-Span Extraction in Reading Comprehension" }, { "answers": [ "" ], "context": "Before going over the answer heads, two additional components should be introduced: the summary vectors and the head predictor.", "id": 326, "question": "What is the previous model that attempted to tackle multi-span questions as a part of its design?", "title": "Tag-based Multi-Span Extraction in Reading Comprehension" }, { "answers": [ "" ], "context": "Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which have become a prominent tool in many data-driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is a special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.", "id": 327, "question": "How much more data does the model trained using XR loss have access to, compared to the fully supervised model?", "title": "Transfer Learning Between Related Tasks Using Expected Label Proportions" }, { "answers": [ "" ], "context": "An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of feature-label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested using distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. 
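To make the label-regularization/XR objective described above concrete, a small sketch; the proportions and batch are toy values, and this is one common way to write the objective (cross-entropy between given proportions and the batch-averaged predicted distribution), not the exact implementation from any of the cited papers:

```python
# Push the model's average predicted label distribution on unlabeled data
# toward known label proportions.
import torch

def xr_loss(logits: torch.Tensor, proportions: torch.Tensor) -> torch.Tensor:
    # logits: (batch, num_labels); proportions: (num_labels,), sums to 1.
    q = torch.softmax(logits, dim=-1).mean(dim=0)        # predicted proportions
    return -(proportions * torch.log(q + 1e-12)).sum()   # cross-entropy(p, q)

logits = torch.randn(32, 3, requires_grad=True)
p = torch.tensor([0.6, 0.3, 0.1])  # e.g., hypothetical Pos/Neg/Neu proportions
loss = xr_loss(logits, p)
loss.backward()
print(loss.item())
```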
BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner.", "id": 328, "question": "Does the system trained only using XR loss outperform the fully supervised neural system?", "title": "Transfer Learning Between Related Tasks Using Expected Label Proportions" }, { "answers": [ "BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n" ], "context": "Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).", "id": 329, "question": "How accurate is the aspect based sentiment classifier trained only using the XR loss?", "title": "Transfer Learning Between Related Tasks Using Expected Label Proportions" }, { "answers": [ "" ], "context": "In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence “Excellent food, although the interior could use some help.“ has two aspects: food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 , may contain 0 or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.", "id": 330, "question": "How is the expectation regularization loss defined?", "title": "Transfer Learning Between Related Tasks Using Expected Label Proportions" }, { "answers": [ "The Lemming model in BIBREF17" ], "context": "While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.", "id": 331, "question": "What were the non-neural baselines used for the task?", "title": "The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection" }, { "answers": [ "" ], "context": "Research in Conversational AI (also known as Spoken Dialogue Systems) has applications ranging from home devices to robotics, and has a growing presence in industry. A key problem in real-world Dialogue Systems is Natural Language Understanding (NLU) – the process of extracting structured representations of meaning from user utterances. In fact, the effective extraction of semantics is an essential feature, being the entry point of any Natural Language interaction system. 
Apart from challenges given by the inherent complexity and ambiguity of human language, other challenges arise whenever the NLU has to operate over multiple domains. In fact, interaction patterns, domain, and language vary depending on the device the user is interacting with. For example, chit-chatting and instruction-giving for executing an action are different processes in terms of language, domain, syntax and interaction schemes involved. And what if the user combines two interaction domains: “play some music, but first what's the weather tomorrow”?", "id": 332, "question": "Which publicly available NLU dataset is used?", "title": "Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU" }, { "answers": [ "" ], "context": "A cross-domain dialogue agent must be able to handle heterogeneous types of conversation, such as chit-chatting, giving directions, entertaining, and triggering domain/task actions. A domain-independent and rich meaning representation is thus required to properly capture the intent of the user. Meaning is modelled here through three layers of knowledge: dialogue acts, frames, and frame arguments. Frames and arguments can be in turn mapped to domain-dependent intents and slots, or to Frame Semantics' BIBREF0 structures (i.e. semantic frames and frame elements, respectively), which allow handling of heterogeneous domains and language.", "id": 333, "question": "What metrics other than entity tagging are compared?", "title": "Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU" }, { "answers": [ "" ], "context": "Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.", "id": 334, "question": "Do they provide decision sequences as supervision while training models?", "title": "Interactive Machine Comprehension with Information Seeking Agents" }, { "answers": [ "They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)" ], "context": "Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. 
Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.", "id": 335, "question": "What are the models evaluated on?", "title": "Interactive Machine Comprehension with Information Seeking Agents" }, { "answers": [ "" ], "context": "We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.", "id": 336, "question": "How do they train models in this setup?", "title": "Interactive Machine Comprehension with Information Seeking Agents" }, { "answers": [ "" ], "context": "As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \\Omega , O, R, \\gamma )$, where $\\gamma \\in [0, 1]$ is the discount factor and the other elements are described in detail below.", "id": 337, "question": "What commands does their setup provide to models seeking information?", "title": "Interactive Machine Comprehension with Information Seeking Agents" }, { "answers": [ "" ], "context": "Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.", "id": 338, "question": "What models do they propose?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. 
Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:", "id": 339, "question": "Are all tweets in English?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and a RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the latter model stage, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one dimensional vector, which losses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. 
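As an illustrative sketch of the question-guided kernel idea just described, where convolution kernels are derived from a text vector and applied to the visual feature map; all dimensions are made up, and the published models are considerably more elaborate:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

text_dim, vis_ch, n_kernels = 150, 64, 8
text_vec = torch.randn(1, text_dim)          # e.g., an LSTM hidden state
vis_map = torch.randn(1, vis_ch, 14, 14)     # CNN feature map

to_kernels = nn.Linear(text_dim, n_kernels * vis_ch)   # learned projection
kernels = to_kernels(text_vec).view(n_kernels, vis_ch, 1, 1)

# Convolve the visual features with kernels derived from the text.
fused = F.conv2d(vis_map, kernels)           # -> (1, n_kernels, 14, 14)
print(fused.shape)
```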
We draw inspiration from these recent feature fusion works to build our models for hate speech detection.", "id": 340, "question": "How large is the dataset?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 " ], "context": "Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exist. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, a significant number of their hate tweets are no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K and have made it available online. In this section, we explain the dataset creation steps.", "id": 341, "question": "What are the results of multimodal compared to unimodal models?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are most common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing fewer than three words and tweets containing porn-related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter.", "id": 342, "question": "What is the author's opinion on why current multimodal models cannot outperform models analyzing only text?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel-wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\%$ of the collected tweets.", "id": 343, "question": "What metrics are used to benchmark the results?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it into one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. 
Each one of the $150,000$ tweets is labeled by 3 different workers to mitigate discrepancies among workers.", "id": 344, "question": "How is the data collected, via manual collection or the Twitter API?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "All images are resized such that their shortest side is 500 pixels. During training, online data augmentation is applied as random cropping of $299\times 299$ patches and mirroring. We use a CNN as the image feature extractor, namely an Imagenet BIBREF24 pre-trained Google Inception v3 architecture BIBREF25. The fine-tuning process of the Inception v3 layers aims to modify its weights to extract the features that, combined with the textual information, are optimal for hate speech detection.", "id": 345, "question": "How many tweets does MMHS150K contain, 150,000?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained on two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used on Twitter. To process the tweet text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word.", "id": 346, "question": "What unimodal detection models were used?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "The text in the image can also contain important information to decide if a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use the Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so it might learn different relations between them and between them and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word.", "id": 347, "question": "What different models for multimodal detection were proposed?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. 
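As an illustrative sketch (not the paper's exact architecture) of fusing these signals by plain feature concatenation, with made-up dimensions for the image features and the two LSTM-encoded texts:

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=150, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + 2 * txt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))  # hate / not hate logit

    def forward(self, img_feat, tweet_text_feat, img_text_feat):
        # Join image features, tweet-text encoding, and image-text encoding.
        x = torch.cat([img_feat, tweet_text_feat, img_text_feat], dim=-1)
        return self.mlp(x)

model = ConcatFusion()
logit = model(torch.randn(2, 2048), torch.randn(2, 150), torch.randn(2, 150))
print(logit.shape)  # -> (2, 1)
```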
To study how the multimodal context can boost the performance compared to an unimodal context, we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any).", "id": 348, "question": "What annotations are available in the dataset - whether the tweet uses hate speech or not?", "title": "Exploring Hate Speech Detection in Multimodal Publications" }, { "answers": [ "" ], "context": "Short text clustering is of great importance due to its various applications, such as user profiling BIBREF0 and recommendation BIBREF1 , given the social media datasets that emerge day by day. However, short text clustering has the data sparsity problem and most words only occur once in each short text BIBREF2 . As a result, the Term Frequency-Inverse Document Frequency (TF-IDF) measure cannot work well in the short text setting. In order to address this problem, some researchers work on expanding and enriching the context of data from Wikipedia BIBREF3 or an ontology BIBREF4 . However, these methods involve solid Natural Language Processing (NLP) knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another way to overcome these issues is to explore some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Yet how to design an effective model is an open question, and most of these methods, trained directly on Bag-of-Words (BoW) representations, are shallow structures which cannot preserve accurate semantic similarities.", "id": 349, "question": "What were the evaluation metrics used?", "title": "Self-Taught Convolutional Neural Networks for Short Text Clustering" }, { "answers": [ "On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%" ], "context": "In this section, we review the related work from the following two perspectives: short text clustering and deep neural networks.", "id": 350, "question": "What were their performance results?", "title": "Self-Taught Convolutional Neural Networks for Short Text Clustering" }, { "answers": [ "on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI" ], "context": "There have been several studies that attempted to overcome the sparseness of short text representation. One way is to expand and enrich the context of data. For example, Banerjee et al. BIBREF3 proposed a method of improving the accuracy of short text clustering by enriching their representation with additional features from Wikipedia, and Fodeh et al. BIBREF4 incorporate semantic knowledge from an ontology into text clustering. However, these works need solid NLP knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another direction is to map the original features into reduced space, such as Latent Semantic Analysis (LSA) BIBREF17 , Laplacian Eigenmaps (LE) BIBREF18 , and Locality Preserving Indexing (LPI) BIBREF19 . Some researchers have even explored sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. 
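The clustering results quoted above are reported in accuracy (ACC) and normalized mutual information (NMI); a small sketch of computing both on toy label vectors, assuming scikit-learn and scipy are available (ACC uses Hungarian matching of clusters to classes):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)  # best cluster-to-class mapping
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print(clustering_accuracy(y_true, y_pred))            # -> 1.0
print(normalized_mutual_info_score(y_true, y_pred))   # -> 1.0
```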
Moreover, some studies address both of the above directions. For example, Tang et al. BIBREF20 proposed a novel framework which enriches the text features by employing machine translation and simultaneously reduces the original features through matrix factorization techniques.", "id": 351, "question": "By how much did they outperform the other methods?", "title": "Self-Taught Convolutional Neural Networks for Short Text Clustering" }, { "answers": [ "" ], "context": "Recently, there has been a revival of interest in DNNs, and many researchers have concentrated on using Deep Learning to learn features. Hinton and Salakhutdinov BIBREF21 use DAEs to learn text representations. During the fine-tuning procedure, they use backpropagation to find codes that are good at reconstructing the word-count vector.", "id": 352, "question": "Which popular clustering methods did they experiment with?", "title": "Self-Taught Convolutional Neural Networks for Short Text Clustering" }, { "answers": [ "" ], "context": "Assume that we are given a dataset of INLINEFORM0 training texts denoted as: INLINEFORM1 , where INLINEFORM2 is the dimensionality of the original BoW representation. Denote its tag set as INLINEFORM3 and the pre-trained word embedding set as INLINEFORM4 , where INLINEFORM5 is the dimensionality of word vectors and INLINEFORM6 is the vocabulary size. In order to learn the INLINEFORM7 -dimensional deep feature representation INLINEFORM8 from CNN in an unsupervised manner, some unsupervised dimensionality reduction methods INLINEFORM9 are employed to guide the learning of the CNN model. Our goal is to cluster these texts INLINEFORM10 into clusters INLINEFORM11 based on the learned deep feature representation while preserving the semantic consistency.", "id": 353, "question": "What datasets did they use?", "title": "Self-Taught Convolutional Neural Networks for Short Text Clustering" }, { "answers": [ "" ], "context": "Students are exposed to simple arithmetic word problems starting in elementary school, and most become proficient in solving them at a young age. Automatic solvers of such problems could potentially help educators, as well as become an integral part of general question answering services. However, it has been challenging to write programs to solve even such elementary school level problems well.", "id": 354, "question": "Does pre-training on general text corpus improve performance?", "title": "Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations" }, { "answers": [ "" ], "context": "Past strategies have used rules and templates to match sentences to arithmetic expressions. Some such approaches seemed to solve problems impressively within a narrow domain, but performed poorly when out of domain, lacking generality BIBREF6, BIBREF7, BIBREF8, BIBREF9. Kushman et al. BIBREF3 used feature extraction and template-based categorization by representing equations as expression forests and finding a near match. Such methods required human intervention in the form of feature engineering and development of templates and rules, which is not desirable for expandability and adaptability. Hosseini et al. 
BIBREF2 performed statistical similarity analysis to obtain acceptable results, but did not perform well with texts that were dissimilar to training examples.", "id": 355, "question": "What neural configurations are explored?", "title": "Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations" }, { "answers": [ "" ], "context": "We view math word problem solving as a sequence-to-sequence translation problem. RNNs have excelled in sequence-to-sequence problems such as translation and question answering. The recent introduction of attention mechanisms has improved the performance of RNN models. Vaswani et al. BIBREF0 introduced the Transformer network, which uses stacks of attention layers instead of recurrence. Applications of Transformers have achieved state-of-the-art performance in many NLP tasks. We use this architecture to produce character sequences that are arithmetic expressions. The models we experiment with are easy and efficient to train, allowing us to test several configurations for a comprehensive comparison. We use several configurations of Transformer networks to learn the prefix, postfix, and infix notations of MWP equations independently.", "id": 356, "question": "Are the Transformers masked?", "title": "Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations" }, { "answers": [ "" ], "context": "We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.", "id": 357, "question": "How is this problem evaluated?", "title": "Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations" }, { "answers": [ "" ], "context": "We take a simple approach to convert infix expressions found in the MWPs to the other two representations. Two stacks are filled by iterating through string characters, one with operators found in the equation and the other with the operands. From these stacks, we form a binary tree structure. Traversing an expression tree in pre-order results in a prefix conversion. Post-order traversal gives us a postfix expression. Three versions of our training and testing data are created to correspond to each type of expression. By training on different representations, we expect our test results to change.", "id": 358, "question": "What datasets do they use?", "title": "Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations" }, { "answers": [ "" ], "context": "Voice-controlled virtual assistants (VVA) such as Siri and Alexa have experienced an exponential growth in terms of number of users and provided capabilities. They are used by millions for a variety of tasks including shopping, playing music, and even telling jokes. Arguably, their success is due in part to the emotional and personalized experience they provide. One important aspect of this emotional interaction is humor, a fundamental element of communication. Not only can it create in the user a sense of personality, but also be used as fallback technique for out-of-domain queries BIBREF0. Usually, a VVA's humorous responses are invoked by users with the phrase \"Tell me a joke\". In order to improve the joke experience and overall user satisfaction with a VVA, we propose to personalize the response to each request. To achieve this, a method should be able to recognize and evaluate humor, a challenging task that has been the focus of extensive work. 
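The two-stack, expression-tree conversion described in the arithmetic-word-problem entry above can be sketched as follows; the tokenization and precedence table are simplifying assumptions, and parentheses are omitted for brevity.

```python
# Build a binary expression tree from infix tokens with an operand stack and
# an operator stack, then emit prefix/postfix via pre-/post-order traversal.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def infix_to_tree(tokens):
    operands, operators = [], []   # the two stacks described in the text
    def reduce():
        op = operators.pop()
        right, left = operands.pop(), operands.pop()
        operands.append(Node(op, left, right))
    for tok in tokens:
        if tok in PREC:
            while operators and PREC[operators[-1]] >= PREC[tok]:
                reduce()
            operators.append(tok)
        else:
            operands.append(Node(tok))
    while operators:
        reduce()
    return operands[0]

def prefix(n):  return [] if n is None else [n.value] + prefix(n.left) + prefix(n.right)
def postfix(n): return [] if n is None else postfix(n.left) + postfix(n.right) + [n.value]

tree = infix_to_tree(['3', '+', '4', '*', '2'])
print(' '.join(prefix(tree)))   # + 3 * 4 2
print(' '.join(postfix(tree)))  # 3 4 2 * +
```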
Some authors have applied traditional NLP techniques BIBREF1, while others deep learning models BIBREF2. Moreover, BIBREF3 follows a semantic-based approach, while BIBREF4 and BIBREF5 tackle the challenge from a cognitive and linguistic perspective respectively.", "id": 359, "question": "What evaluation metrics were used?", "title": "What Do You Mean I'm Funny? Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant" }, { "answers": [ "" ], "context": "Generating labels for this VVA skill is challenging. Label generation through explicit user feedback is unavailable since asking users for feedback creates friction and degrade the user experience. In addition, available humor datasets such as BIBREF3, BIBREF11 only contain jokes and corresponding labels, but not the additional features we need to personalize the jokes.", "id": 360, "question": "Where did the real production data come from?", "title": "What Do You Mean I'm Funny? Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant" }, { "answers": [ "" ], "context": "All models have access to the same raw features, which we conceptually separate into user, item and contextual features. Examples of features in each of these categories are shown in Table TABREF4. Some of these are used directly by the models, while others need to be pre-processed. The manner in which each model consumes them is explained next.", "id": 361, "question": "What feedback labels are used?", "title": "What Do You Mean I'm Funny? Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant" }, { "answers": [ "" ], "context": "Over the past few years, the term big data has become an important key point for research into data mining and information retrieval. Through the years, the quantity of data managed across enterprises has evolved from a simple and imperceptible task to an extent to which it has become the central performance improvement problem. In other words, it evolved to be the next frontier for innovation, competition and productivity BIBREF0. Extracting knowledge from data is now a very competitive environment. Many companies process vast amounts of customer/user data in order to improve the quality of experience (QoE) of their customers. For instance, a typical use-case scenario would be a book seller that performs an automatic extraction of the content of the books a customer has bought, and subsequently extracts knowledge of what customers prefer to read. The knowledge extracted could then be used to recommend other books. Book recommending systems are typical examples where data mining techniques should be considered as the primary tool for making future decisions BIBREF1.", "id": 362, "question": "What representations for textual documents do they use?", "title": "A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient" }, { "answers": [ "" ], "context": "In this section we provide a brief background of vector space representation of TDs and existing similarity measures that have been widely used in statistical text analysis. To begin with, we consider the representation of documents.", "id": 363, "question": "Which dataset(s) do they use?", "title": "A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient" }, { "answers": [ "" ], "context": "A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\\dots ,t_n)$. 
A general idea is to associate weight to each term $t_i$ within $d$, such that", "id": 364, "question": "How do they evaluate knowledge extraction performance?", "title": "A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient" }, { "answers": [ "" ], "context": "Pretrained word representations have a long history in Natural Language Processing (NLP), from non-neural methods BIBREF0, BIBREF1, BIBREF2 to neural word embeddings BIBREF3, BIBREF4 and to contextualised representations BIBREF5, BIBREF6. Approaches shifted more recently from using these representations as an input to task-specific architectures to replacing these architectures with large pretrained language models. These models are then fine-tuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7, BIBREF8, BIBREF9, BIBREF10.", "id": 365, "question": "What is CamemBERT trained on?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "The first neural word vector representations were non-contextualised word embeddings, most notably word2vec BIBREF3, GloVe BIBREF4 and fastText BIBREF14, which were designed to be used as input to task-specific neural architectures. Contextualised word representations such as ELMo BIBREF5 and flair BIBREF6, improved the expressivity of word embeddings by taking context into account. They improved the performance of downstream tasks when they replaced traditional word representations. This paved the way towards larger contextualised models that replaced downstream architectures in most tasks. These approaches, trained with language modeling objectives, range from LSTM-based architectures such as ULMFiT BIBREF15 to the successful transformer-based architectures such as GPT2 BIBREF8, BERT BIBREF7, RoBERTa BIBREF9 and more recently ALBERT BIBREF16 and T5 BIBREF10.", "id": 366, "question": "Which tasks does CamemBERT not improve on?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "POS and DP task: CONLL 2018\nNER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF\nNLI task: mBERT or XLM (not clear from text)" ], "context": "Since the introduction of word2vec BIBREF3, many attempts have been made to create monolingual models for a wide range of languages. For non-contextual word embeddings, the first two attempts were by BIBREF17 and BIBREF18 who created word embeddings for a large number of languages using Wikipedia. Later BIBREF19 trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia.", "id": 367, "question": "What is the state of the art?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "Following the success of large pretrained language models, they were extended to the multilingual setting with multilingual BERT , a single multilingual model for 104 different languages trained on Wikipedia data, and later XLM BIBREF12, which greatly improved unsupervised machine translation. 
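Returning to the term-weighting idea in the similarity-measure entries above, the sketch below assigns TF-IDF weights to each term and compares two documents with Spearman's rank correlation, the measure the paper's title refers to; both concrete choices (TF-IDF as the weight function, the toy documents) are illustrative assumptions rather than the paper's exact setup.

```python
# Weight terms with TF-IDF, then compare the two weight vectors with
# Spearman's rank correlation coefficient.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.stats import spearmanr

docs = ["the cat sat on the mat",
        "the cat lay on the rug"]
weights = TfidfVectorizer().fit_transform(docs).toarray()

rho, _ = spearmanr(weights[0], weights[1])
print(f"Spearman similarity: {rho:.3f}")
```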
A few monolingual models have been released: ELMo models for Japanese, Portuguese, German and Basque and BERT for Simplified and Traditional Chinese and German.", "id": 368, "question": "How much better was results of CamemBERT than previous results on these tasks?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "Our approach is based on RoBERTa BIBREF9, which replicates and improves the initial BERT by identifying key hyper-parameters for more robust performance.", "id": 369, "question": "Was CamemBERT compared against multilingual BERT on these tasks?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "Similar to RoBERTa and BERT, CamemBERT is a multi-layer bidirectional Transformer BIBREF21. Given the widespread usage of Transformers, we do not describe them in detail here and refer the reader to BIBREF21. CamemBERT uses the original BERT $_{\\small \\textsc {BASE}}$ configuration: 12 layers, 768 hidden dimensions, 12 attention heads, which amounts to 110M parameters.", "id": 370, "question": "How long was CamemBERT trained?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "We train our model on the Masked Language Modeling (MLM) task. Given an input text sequence composed of $N$ tokens $x_1, ..., x_N$, we select $15\\%$ of tokens for possible replacement. Among those selected tokens, 80% are replaced with the special $<$mask$>$ token, 10% are left unchanged and 10% are replaced by a random token. The model is then trained to predict the initial masked tokens using cross-entropy loss.", "id": 371, "question": "What data is used for training CamemBERT?", "title": "CamemBERT: a Tasty French Language Model" }, { "answers": [ "" ], "context": "Controversy is a phenomenom with a high impact at various levels. It has been broadly studied from the perspective of different disciplines, ranging from the seminal analysis of the conflicts within the members of a karate club BIBREF0 to political issues in modern times BIBREF1, BIBREF2. The irruption of digital social networks BIBREF3 gave raise to new ways of intentionally intervening on them for taking some advantage BIBREF4, BIBREF5. Moreover highly contrasting points of view in some groups tend to provoke conflicts that lead to attacks from one community to the other by harassing, “brigading”, or “trolling” it BIBREF6. The existing literature shows different issues that controversy brings up such as splitting of communities, biased information, hateful discussions and attacks between groups, generally proposing ways to solve them. For example, Kumar, Srijan, et al. BIBREF6 analyze many techniques to defend us from attacks in Reddit while Stewart, et al. BIBREF4 insinuate that there was external interference in Twitter during the 2016 US presidential elections to benefit one candidate. Also, as shown in BIBREF7, detecting controversy could provide the basis to improve the “news diet\" of readers, offering the possibility to connect users with different points of views by recommending them new content to read BIBREF8.", "id": 372, "question": "What are the state of the art measures?", "title": "Vocabulary-based Method for Quantifying Controversy in Social Media" }, { "answers": [ "" ], "context": "Many previous works are dedicated to quantifying the polarization observed in online social networks and social media BIBREF1, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23. 
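The 15% selection with the 80/10/10 replacement scheme described in the CamemBERT entry above can be sketched over integer token ids as follows; special-token handling and whole-word masking are omitted for brevity, and the concrete ids are illustrative.

```python
# MLM corruption: select 15% of positions; of those, 80% -> <mask>,
# 10% -> random token, 10% left unchanged. Labels keep the original ids.
import numpy as np

def mlm_mask(token_ids, mask_id, vocab_size, rng):
    ids = np.array(token_ids)
    labels = np.full_like(ids, -100)            # -100 = ignored by the loss
    selected = rng.random(ids.shape) < 0.15     # choose 15% of positions
    labels[selected] = ids[selected]            # predict the original tokens
    r = rng.random(ids.shape)
    ids[selected & (r < 0.8)] = mask_id                  # 80% -> <mask>
    random_pos = selected & (r >= 0.8) & (r < 0.9)       # 10% -> random token
    ids[random_pos] = rng.integers(0, vocab_size, random_pos.sum())
    return ids, labels                          # remaining 10% left unchanged

rng = np.random.default_rng(0)
ids, labels = mlm_mask(list(range(100, 120)), mask_id=4, vocab_size=32000, rng=rng)
```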
The main characteristic of those works is that the proposed measures are based on the structural characteristics of the underlying graph. Among them, we highlight the work of Garimella et al. BIBREF23, which presents an extensive comparison of controversy measures, different graph-building approaches, and data sources, achieving the best performance of all. In their research they propose different metrics to measure polarization on Twitter. Their techniques based on the structure of the endorsement graph can successfully detect whether a discussion (represented by a set of tweets) is controversial or not, regardless of the context and, most importantly, without the need for any domain expertise. They also consider two different methods to measure controversy based on the analysis of the posts' contents, but both fail when used to create a measure of controversy.", "id": 373, "question": "What controversial topics are experimented with?", "title": "Vocabulary-based Method for Quantifying Controversy in Social Media" }, { "answers": [ "" ], "context": "Our approach to measuring controversy is based on a systematic way of characterizing social media activity through its content. We employ a pipeline with five stages, namely graph building, community identification, model training, predicting and controversy measure. The final output of the pipeline is a value that measures how controversial a topic is, with higher values corresponding to higher degrees of controversy. The method is based on analysing post content through Fasttext BIBREF34, a library for efficient learning of word representations and sentence classification developed by the Facebook Research team. In short, our method works as follows: through Fasttext we train a language-agnostic model which can predict the community of many users by their jargon. Then we take these predictions and compute a score based on the physics notion of the dipole moment, using a language approach to identify core or characteristic users and setting the polarity through them. We provide a detailed description of each stage in the following.", "id": 374, "question": "What datasets did they use?", "title": "Vocabulary-based Method for Quantifying Controversy in Social Media" }, { "answers": [ "" ], "context": "In this section we report the results obtained by running the proposed method over different discussions.", "id": 375, "question": "What social media platform is observed?", "title": "Vocabulary-based Method for Quantifying Controversy in Social Media" }, { "answers": [ "" ], "context": "In the literature, a topic is often defined by a single hashtag. However, this might be too restrictive in many cases. In our approach, a topic is operationalized as specific hashtags or keywords. Sometimes a discussion at a particular moment may not have a defined hashtag, but it may revolve around a certain keyword, i.e., a word or expression that is not specifically a hashtag but is widely used in the topic.
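A hedged sketch of the model-training stage in the controversy pipeline above: a fastText supervised classifier that predicts a user's community from the text they produce. The file name and community labels are illustrative; each training line follows fastText's `__label__<community> <text>` convention.

```python
# Train a supervised fastText classifier on per-user text and predict the
# community of an unseen piece of text. "users.txt" is an assumed input file.
import fasttext

# users.txt, one line per user, e.g.:
# __label__community_a vamos juntos por el cambio ...
# __label__community_b no al ajuste nunca mas ...
model = fasttext.train_supervised(input="users.txt", epoch=10, wordNgrams=2)

labels, probs = model.predict("este gobierno nos representa", k=2)
print(labels, probs)  # predicted communities and their confidences
```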
For example, during the Brazilian presidential elections in 2018, we captured the discussion through mentions of the word Bolsonaro, the surname of the leading candidate.", "id": 376, "question": "How many languages do they experiment with?", "title": "Vocabulary-based Method for Quantifying Controversy in Social Media" }, { "answers": [ "" ], "context": "Microblog sentiment analysis; Twitter opinion mining", "id": 377, "question": "What is the current SOTA for sentiment analysis on Twitter at the time of writing?", "title": "Semantic Sentiment Analysis of Twitter Data" }, { "answers": [ "Tweets' noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text" ], "context": "Sentiment Analysis: This is text analysis aiming to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a piece of text.", "id": 378, "question": "What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains?", "title": "Semantic Sentiment Analysis of Twitter Data" }, { "answers": [ "" ], "context": "Sentiment analysis on Twitter is the use of natural language processing techniques to identify and categorize opinions expressed in a tweet, in order to determine the author's attitude toward a particular topic or in general. Typically, discrete labels such as positive, negative, neutral, and objective are used for this purpose, but it is also possible to use labels on an ordinal scale, or even continuous numerical values.", "id": 379, "question": "What are the metrics to evaluate sentiment analysis on Twitter?", "title": "Semantic Sentiment Analysis of Twitter Data" }, { "answers": [ "27.41 transformations per seed sentence are available in the dataset, on average." ], "context": "Vector representations are becoming truly essential in the majority of natural language processing tasks. Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1, and their properties have been analyzed at length from various aspects.", "id": 380, "question": "How many sentence transformations on average are available per unique sentence in the dataset?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "For each source sentence, transformation sentences that are transformed according to some criteria (paraphrase, minimal change etc.)" ], "context": "As hinted above, there are many methods of converting a sequence of words into a vector in a high-dimensional space. To name a few: BiLSTM with max-pooling trained for natural language inference BIBREF13, masked language modeling and next sentence prediction using a bidirectional Transformer BIBREF14, max-pooling over the last states of neural machine translation among many languages BIBREF15, or the encoder final state in attentionless neural machine translation BIBREF16.", "id": 381, "question": "What annotations are available in the dataset?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "Yes, as new sentences." ], "context": "We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round.
The first and second rounds of annotation can be broadly described as collecting ideas and collecting data, respectively.", "id": 382, "question": "How are possible sentence transformations represented in the dataset, as new sentences?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past" ], "context": "We manually selected 15 newspaper headlines. Eleven annotators were asked to modify each headline up to 20 times and describe the modification with a short name. They were given an example sentence and several of its possible alternations, see tab:firstroundexamples.", "id": 383, "question": "What are all 15 types of modifications illustrated in the dataset?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "We selected 15 modification types to collect COSTRA 1.0. They are presented in annotationinstructions.", "id": 384, "question": "Is this dataset publicly available?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "The source sentences for annotation were selected from the Czech data of Global Voices BIBREF24 and OpenSubtitles BIBREF25. We used two sources in order to have different styles of seed sentences, both journalistic and common spoken language. We considered only sentences with more than 5 and fewer than 15 words, and we manually selected 150 of them for further annotation. This step was necessary to remove sentences that are:", "id": 385, "question": "Are some baseline models trained on this dataset?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "The annotation is a challenging task and the annotators naturally make mistakes. Unfortunately, a single typo can significantly influence the resulting embedding BIBREF26. After collecting all the sentence variations, we applied the statistical spellchecker and grammar checker Korektor BIBREF27 in order to minimize the influence of typos on the performance of embedding methods. We manually inspected 519 errors identified by Korektor and fixed the 129 that were correctly identified.", "id": 386, "question": "Do they do any analysis of how the modifications changed the starting set of sentences?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics.", "id": 387, "question": "How do they introduce language variation?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019.
Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that, visually, the LASER space does not seem to exhibit any of the desired topological properties discussed above; see fig:pca for one example.", "id": 388, "question": "Do they use external resources to make modifications to sentences?", "title": "COSTRA 1.0: A Dataset of Complex Sentence Transformations" }, { "answers": [ "" ], "context": "Today's increasing flood of information on the web creates a need for automated multi-document summarization systems that produce high quality summaries. However, producing summaries in a multi-document setting is difficult, as the language used to display the same information in a sentence can vary significantly, making it difficult for summarization models to capture. Given the complexity of the task and the lack of datasets, most researchers use extractive summarization, where the final summary is composed of existing sentences in the input documents. More specifically, extractive summarization systems output summaries in two steps: via sentence ranking, where an importance score is assigned to each sentence, and via the subsequent sentence selection, where the most appropriate sentence is chosen, by considering 1) their importance and 2) their frequency among all documents. Due to data sparsity, models heavily rely on well-designed features at the word level BIBREF0, BIBREF1, BIBREF2, BIBREF3 or take advantage of other large, manually annotated datasets and then apply transfer learning BIBREF4. Additionally, most of the time, all sentences in the same collection of documents are processed independently and, therefore, their relationships are lost.", "id": 389, "question": "How big is the dataset the domain-specific embeddings are trained on?", "title": "Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization" }, { "answers": [ "" ], "context": "Let $C$ denote a collection of related documents composed of a set of documents $\lbrace D_i|i \in [1,N]\rbrace $ where $N$ is the number of documents. Moreover, each document $D_i$ consists of a set of sentences $\lbrace S_{i,j}|j \in [1,M]\rbrace $, $M$ being the number of sentences in $D_i$. Given a collection of related documents $C$, our goal is to produce a summary $Sum$ using a subset of these sentences in the input documents, ordered in some way, such that $Sum = (S_{i_1,j_1},S_{i_2,j_2},...,S_{i_n,j_m})$.", "id": 390, "question": "How big is the unrelated corpus the universal embedding is trained on?", "title": "Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization" }, { "answers": [ "" ], "context": "We model the semantic relationship among sentences using a graph representation. In this graph, each vertex is a sentence $S_{i,j}$ (the $j$-th sentence of document $D_i$) from the collection of documents $C$, and an undirected edge between $S_{i_u,j_u}$ and $S_{i_v,j_v}$ indicates their degree of similarity. In order to compute the semantic similarity, we use the model of BIBREF6 trained on the English Wikipedia corpus. In this manner, we incorporate general knowledge (i.e. not domain-specific) that will complete the specialized sentence embeddings obtained during training (see Section SECREF5). We process sentences by their model and compute the cosine similarity between every sentence pair, resulting in a complete graph.
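The kind of 2D inspection mentioned in the COSTRA entry above can be reproduced as follows; the random matrix stands in for LASER sentence embeddings (which are 1024-dimensional), so the plot itself is only illustrative.

```python
# Project sentence embeddings to 2D with PCA and scatter-plot them.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

embeddings = np.random.default_rng(0).normal(size=(300, 1024))  # LASER is 1024-d
xy = PCA(n_components=2).fit_transform(embeddings)

plt.scatter(xy[:, 0], xy[:, 1], s=5)
plt.title("PCA projection of sentence embeddings")
plt.show()
```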
However, having a complete graph alone does not allow the model to leverage the semantic structure across sentences significantly, as every sentence pair is connected; likewise, a sparse graph does not contain enough information to exploit semantic similarities. Furthermore, all edges have a weight above zero, since it is very unlikely that two sentence embeddings are completely orthogonal. To overcome this problem, we introduce an edge-removal method, where every edge below a certain threshold $t_{sim}^g$ is removed in order to emphasize high sentence similarity. Nonetheless, $t_{sim}^g$ should not be too large, as we otherwise found the model to be prone to overfitting. After removing edges below $t_{sim}^g$, our sentence semantic relation graph is used as the adjacency matrix $A$. The impact of different values of $t_{sim}^g$ is shown in Section SECREF26.", "id": 391, "question": "How much better are state-of-the-art results than this model?", "title": "Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization" }, { "answers": [ "accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR" ], "context": "Twitter sentiment classification has been intensively researched in recent years BIBREF0 BIBREF1. Different approaches were developed for Twitter sentiment classification using machine learning, such as Support Vector Machines (SVM) with rule-based features BIBREF2 and the combination of SVMs and Naive Bayes (NB) BIBREF3. In addition, hybrid approaches combining lexicon-based and machine learning methods also achieved high performance, as described in BIBREF4. However, a problem of traditional machine learning is how to define a feature extractor for a specific domain in order to extract important features.", "id": 392, "question": "What were their results on the three datasets?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "" ], "context": "Our proposed model consists of a deep learning classifier and a tweet processor. The deep learning classifier is a combination of DeepCNN and Bi-LSTM. The tweet processor standardizes tweets and then applies semantic rules to the datasets. We construct a framework that treats the deep learning classifier and the tweet processor as two distinct components. We believe that standardizing data is an important step towards achieving high accuracy. To formulate our problem of increasing the accuracy of the classifier, we illustrate our model in Figure FIGREF4 as follows:", "id": 393, "question": "What was the baseline?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "" ], "context": "Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0. BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13.", "id": 394, "question": "Which datasets did they use?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "" ], "context": "We first take unique properties of Twitter into account in order to reduce the feature space, such as Username, Usage of links, None, URLs and Repeated Letters. We then process retweets, stop words, links, URLs, mentions, punctuation and accentuation.
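A minimal sketch of the graph construction just described: cosine similarities between all sentence-embedding pairs form a complete graph, and edges below the threshold are zeroed out to obtain the adjacency matrix $A$; the threshold value and embedding dimensions here are illustrative.

```python
# Cosine-similarity graph with edge removal below a threshold t_sim.
import numpy as np

def build_adjacency(embeddings, t_sim=0.5):
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T                      # complete graph of cosine scores
    np.fill_diagonal(sim, 0.0)               # no self-loops
    return np.where(sim >= t_sim, sim, 0.0)  # drop edges below the threshold

A = build_adjacency(np.random.default_rng(1).normal(size=(12, 600)))
print((A > 0).sum() // 2, "edges kept")
```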
For emoticons, BIBREF0 revealed that the training process treats emoticons as noisy labels, and they stripped the emoticons out of their training dataset because they believed that taking emoticons into account would negatively affect the accuracy of classifiers. In addition, removing emoticons makes the classifiers learn from other features (e.g. unigrams and bi-grams) present in tweets, so the classifiers use only these non-emoticon features to predict the sentiment of tweets. A problem arises, however, if the test set contains emoticons: they do not influence the classifiers, because emoticon features are not contained in the training data. This is a limitation of BIBREF0, because emoticon features would be useful when classifying test data. Therefore, we keep emoticon features in the datasets, because deep learning models can capture more information from emoticon features and thereby increase classification accuracy.", "id": 395, "question": "Are results reported only on English datasets?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "" ], "context": "In Twitter social networking, people express their opinions in sub-sentences. These sub-sentences, introduced by specific PoS particles (conjunctions and conjunctive adverbs) like "but, while, however, despite", have different polarities. However, the overall sentiment of tweets often focuses on certain sub-sentences. For example:", "id": 396, "question": "Which three Twitter sentiment classification datasets are used for experiments?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "rules that compute polarity of words after POS tagging or parsing steps" ], "context": "To construct embedding inputs for our model, we use a fixed-sized word vocabulary INLINEFORM0 and a fixed-sized character vocabulary INLINEFORM1. Given a word INLINEFORM2 composed of characters INLINEFORM3, the character-level embeddings are encoded by column vectors INLINEFORM4 in the embedding matrix INLINEFORM5, where INLINEFORM6 is the size of the character vocabulary. For the word-level embedding INLINEFORM7, we use a pre-trained word-level embedding with dimension 200 or 300. A pre-trained word-level embedding can capture the syntactic and semantic information of words BIBREF17. We build every word INLINEFORM8 into an embedding INLINEFORM9 which is constructed from two sub-vectors: the word-level embedding INLINEFORM10 and the character fixed-size feature vector INLINEFORM11 of INLINEFORM12, where INLINEFORM13 is the length of the filter of wide convolutions. We have INLINEFORM14 character fixed-size feature vectors corresponding to word-level embeddings in a sentence.", "id": 397, "question": "What semantic rules are proposed?", "title": "A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking" }, { "answers": [ "" ], "context": "Knowledge graphs (KGs) such as Freebase BIBREF0, DBpedia BIBREF1, and YAGO BIBREF2 play a critical role in various NLP tasks, including question answering BIBREF3, information retrieval BIBREF4, and personalized recommendation BIBREF5. A typical KG consists of numerous facts about a predefined set of entities. Each fact is in the form of a triplet INLINEFORM0 (or INLINEFORM1 for short), where INLINEFORM2 and INLINEFORM3 are two entities and INLINEFORM4 is a relation the fact describes.
Due to the discrete and incomplete natures of KGs, various KG embedding models are proposed to facilitate KG completion tasks, e.g., link prediction and triplet classification. After vectorizing entities and relations in a low-dimensional space, those models predict missing facts by manipulating the involved entity and relation embeddings.", "id": 398, "question": "Which knowledge graph completion tasks do they experiment with?", "title": "Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "In recent years, representation learning problems on KGs have received much attention due to the wide applications of the resultant entity and relation embeddings. Typical KG embedding models include TransE BIBREF11 , Distmult BIBREF12 , Complex BIBREF13 , Analogy BIBREF14 , to name a few. For more explorations, we refer readers to an extensive survey BIBREF15 . However, conventional approaches on KG embedding work in a transductive manner. They require that all entities should be seen during training. Such limitation hinders them from efficiently generalizing to emerging entities.", "id": 399, "question": "Apart from using desired properties, do they evaluate their LAN approach in some other way?", "title": "Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "To relieve the issue of emerging entities, several inductive KG embedding models are proposed, including BIBREF16 xie2016representation, BIBREF6 shi2018open and BIBREF17 xie2016image which use description text or images as inputs. Although the resultant embeddings may be utilized for KG completion, it is not clear whether the embeddings are powerful enough to infer implicit or new facts beyond those expressed in the text/image. Moreover, when domain experts are recruited to introduce new entities via partial facts rather than text or images, those approaches may not help much.", "id": 400, "question": "Do they evaluate existing methods in terms of desired properties?", "title": "Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding" }, { "answers": [ "" ], "context": "It is well known that sentiment annotation or labeling is subjective BIBREF0. Annotators often have many disagreements. This is especially so for crowd-workers who are not well trained. That is why one always feels that there are many errors in an annotated dataset. In this paper, we study whether it is possible to build accurate sentiment classifiers even with noisy-labeled training data. Sentiment classification aims to classify a piece of text according to the polarity of the sentiment expressed in the text, e.g., positive or negative BIBREF1, BIBREF0, BIBREF2. In this work, we focus on sentence-level sentiment classification (SSC) with labeling errors.", "id": 401, "question": "How does the model differ from Generative Adversarial Networks?", "title": "Learning with Noisy Labels for Sentence-level Sentiment Classification" }, { "answers": [ "" ], "context": "Our work is related to sentence sentiment classification (SSC). SSC has been studied extensively BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. None of them can handle noisy labels. Since many social media datasets are noisy, researchers have tried to build robust models BIBREF29, BIBREF30, BIBREF31. 
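For the KG entries above, the translation idea behind TransE, the first model in the list, can be sketched in a few lines: a fact $(h, r, t)$ is scored by how close $h + r$ lands to $t$ in embedding space. The dimensions and toy vectors below are illustrative.

```python
# TransE-style scoring: plausible facts satisfy h + r ~ t.
import numpy as np

def transe_score(h, r, t):
    # higher (less negative) = more plausible
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
h, t = rng.normal(size=50), rng.normal(size=50)
r_true, r_rand = t - h, rng.normal(size=50)
print(transe_score(h, r_true, t))  # ~0.0, a perfect translation
print(transe_score(h, r_rand, t))  # clearly lower score
```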
However, they treat noisy data as additional information and don't specifically handle noisy labels. A noise-aware classification model in BIBREF12 trains using data annotated with multiple labels. BIBREF32 exploited the connection of users and noisy labels of sentiments in social networks. Since the two works use multiple-labeled data or users' information (we only use single-labeled data, and we do not use any additional information), they have different settings than ours.", "id": 402, "question": "What is the dataset used to train the model?", "title": "Learning with Noisy Labels for Sentence-level Sentiment Classification" }, { "answers": [ "Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates\nExperiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases)" ], "context": "Our model builds on CNN BIBREF25. The key idea is to train two CNNs alternately, one for addressing the input noisy labels and the other for predicting `clean' labels. The overall architecture of the proposed model is given in Figure FIGREF2. Before going further, we first introduce a proposition, a property, and an assumption below.", "id": 403, "question": "What is the performance of the model?", "title": "Learning with Noisy Labels for Sentence-level Sentiment Classification" }, { "answers": [ "" ], "context": "In this section, we evaluate the performance of the proposed NetAb model. we conduct two types of experiments. (1) We corrupt clean-labeled datasets to produce noisy-labeled datasets to show the impact of noises on sentiment classification accuracy. (2) We collect some real noisy data and use them to train models to evaluate the performance of NetAb.", "id": 404, "question": "Is the model evaluated against a CNN baseline?", "title": "Learning with Noisy Labels for Sentence-level Sentiment Classification" }, { "answers": [ "" ], "context": "There has been significant research on style transfer, with the goal of changing the style of text while preserving its semantic content. The alternative where semantics are adjusted while keeping style intact, which we call semantic text exchange (STE), has not been investigated to the best of our knowledge. Consider the following example, where the replacement entity defines the new semantic context:", "id": 405, "question": "Does the model proposed beat the baseline models for all the values of the masking parameter tested?", "title": "Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange" }, { "answers": [ "" ], "context": "Word2Vec BIBREF3, BIBREF4 allows for analogy representation through vector arithmetic. We implement a baseline (W2V-STEM) using this technique. The Universal Sentence Encoder (USE) BIBREF5 encodes sentences and is trained on a variety of web sources and the Stanford Natural Language Inference corpus BIBREF6. Flair embeddings BIBREF7 are based on architectures such as BERT BIBREF8. We use USE for SMERTI as it is designed for transfer learning and shows higher performance on textual similarity tasks compared to other models BIBREF9.", "id": 406, "question": "Has STES been previously used in the literature to evaluate similar tasks?", "title": "Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange" }, { "answers": [ "" ], "context": "Text infilling is the task of filling in missing parts of sentences called masks. 
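One common reading of the noise-handling component in the NetAb entry above is a label-noise transition matrix composed with the clean classifier's output; the sketch below shows that view. This is an interpretation under stated assumptions, not the paper's exact implementation, and the matrix values are illustrative.

```python
# Compose clean-label probabilities with a row-stochastic transition matrix T,
# where T[i, j] approximates P(noisy label j | clean label i).
import numpy as np

def noisy_probs(clean_probs, T):
    # clean_probs: (batch, classes); returns probabilities over noisy labels
    return clean_probs @ T

T = np.array([[0.9, 0.1],    # 10% of true-negative sentences mislabeled
              [0.2, 0.8]])   # 20% of true-positive sentences mislabeled
clean = np.array([[0.7, 0.3]])
print(noisy_probs(clean, T))  # distribution over the observed noisy labels
```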
MaskGAN BIBREF10 is restricted to a single word per mask token, while SMERTI is capable of variable length infilling for more flexible output. BIBREF11 uses a transformer-based architecture. They fill in random masks, while SMERTI fills in masks guided by semantic similarity, resulting in more natural infilling and fulfillment of the STE task.", "id": 407, "question": "What are the baseline models mentioned in the paper?", "title": "Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange" }, { "answers": [ "ERR of 19.05 with i-vectors and 15.52 with x-vectors" ], "context": "Speaker recognition including identification and verification, aims to recognize claimed identities of speakers. After decades of research, performance of speaker recognition systems has been vastly improved, and the technique has been deployed to a wide range of practical applications. Nevertheless, the present speaker recognition approaches are still far from reliable in unconstrained conditions where uncertainties within the speech recordings could be arbitrary. These uncertainties might be caused by multiple factors, including free text, multiple channels, environmental noises, speaking styles, and physiological status. These uncertainties make the speaker recognition task highly challenging BIBREF0, BIBREF1.", "id": 408, "question": "What was the performance of both approaches on their dataset?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "" ], "context": "The original purpose of the CN-Celeb dataset is to investigate the true difficulties of speaker recognition techniques in unconstrained conditions, and provide a resource for researchers to build prototype systems and evaluate the performance. Ideally, it can be used as a standalone data source, and can be also used with other datasets together, in particular VoxCeleb which is free and large. For this reason, CN-Celeb tries to be distinguished from but also complementary to VoxCeleb from the beginning of the design. This leads to three features that we have discussed in the previous section: Chinese focused, complex genres, and quality guarantee by human check.", "id": 409, "question": "What kind of settings do the utterances come from?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement" ], "context": "Table TABREF13 summarizes the main difference between CN-Celeb and VoxCeleb. Compared to VoxCeleb, CN-Celeb is a more complex dataset and more challenging for speaker recognition research. More details of these challenges are as follows.", "id": 410, "question": "What genres are covered?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "" ], "context": "CN-Celeb was collected following a two-stage strategy: firstly we used an automated pipeline to extract potential segments of the Person of Interest (POI), and then applied a human check to remove incorrect segments. 
This process is much faster than purely human-based segmentation, and reduces errors caused by a purely automated process.", "id": 411, "question": "Do they experiment with cross-genre setups?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "x-vector" ], "context": "In this section, we present a series of experiments on speaker recognition using VoxCeleb and CN-Celeb, to compare the complexity of the two datasets.", "id": 412, "question": "Which of the two speech recognition models works better overall on CN-Celeb?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb" ], "context": "VoxCeleb: The entire dataset involves two parts: VoxCeleb1 and VoxCeleb2. We used SITW BIBREF21, a subset of VoxCeleb1 as the evaluation set. The rest of VoxCeleb1 was merged with VoxCeleb2 to form the training set (simply denoted by VoxCeleb). The training set involves $1,236,567$ utterances from $7,185$ speakers, and the evaluation set involves $6,445$ utterances from 299 speakers (precisely, this is the Eval. Core set within SITW).", "id": 413, "question": "By how much is performance on CN-Celeb inferior to performance on VoxCeleb?", "title": "CN-CELEB: a challenging Chinese speaker recognition dataset" }, { "answers": [ "" ], "context": "Deep neural network-based models are easy to overfit and result in losing their generalization due to limited size of training data. In order to address the issue, data augmentation methods are often applied to generate more training samples. Recent years have witnessed great success in applying data augmentation in the field of speech area BIBREF0 , BIBREF1 and computer vision BIBREF2 , BIBREF3 , BIBREF4 . Data augmentation in these areas can be easily performed by transformations like resizing, mirroring, random cropping, and color shifting. However, applying these universal transformations to texts is largely randomized and uncontrollable, which makes it impossible to ensure the semantic invariance and label correctness. For example, given a movie review “The actors is good\", by mirroring we get “doog si srotca ehT\", or by random cropping we get “actors is\", both of which are meaningless.", "id": 414, "question": "On what datasets is the new model evaluated on?", "title": "Conditional BERT Contextual Augmentation" }, { "answers": [ "Accuracy across six datasets" ], "context": "Language model pre-training has attracted wide attention and fine-tuning on pre-trained language model has shown to be effective for improving many downstream natural language processing tasks. Dai BIBREF7 pre-trained unlabeled data to improve Sequence Learning with recurrent networks. Howard BIBREF8 proposed a general transfer learning method, Universal Language Model Fine-tuning (ULMFiT), with the key techniques for fine-tuning a language model. Radford BIBREF9 proposed that by generative pre-training of a language model on a diverse corpus of unlabeled text, large gains on a diverse range of tasks could be realized. Radford BIBREF9 achieved large improvements on many sentence-level tasks from the GLUE benchmark BIBREF10 . BERT BIBREF11 obtained new state-of-the-art results on a broad range of diverse tasks. 
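The EER numbers quoted in the speaker-verification entries above are typically computed from trial scores as the operating point where the false-accept and false-reject rates cross; a minimal sketch with toy score distributions:

```python
# Equal error rate from target/non-target trial scores via the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
target_scores = rng.normal(1.0, 1.0, 1000)      # same-speaker trials
nontarget_scores = rng.normal(-1.0, 1.0, 1000)  # different-speaker trials

labels = np.concatenate([np.ones(1000), np.zeros(1000)])
scores = np.concatenate([target_scores, nontarget_scores])
fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]      # point where FAR ~ FRR
print(f"EER ~ {eer:.2%}")
```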
BERT pre-trains deep bidirectional representations by jointly conditioning on both left and right context in all layers, followed by discriminative fine-tuning on each specific task. Unlike previous works that fine-tune a pre-trained language model to perform discriminative tasks, we aim to apply pre-trained BERT to generative tasks by performing the masked language model (MLM) task. To generate sentences that are compatible with given labels, we retrofit BERT to conditional BERT by introducing a conditional masked language model task and fine-tuning BERT on the task.", "id": 415, "question": "How do the authors measure performance?", "title": "Conditional BERT Contextual Augmentation" }, { "answers": [ "" ], "context": "Text data augmentation has been extensively studied in natural language processing. Sample-based methods include downsampling from the majority classes and oversampling from the minority class, both of which perform weakly in practice. Generation-based methods employ deep generative models such as GANs BIBREF12 or VAEs BIBREF13, BIBREF14, trying to generate sentences from a continuous space with desired attributes of sentiment and tense. However, it is very hard to guarantee the quality of sentences generated by these methods, both in label compatibility and sentence readability. In some specific areas BIBREF15, BIBREF16, BIBREF17, word replacement augmentation was applied. Wang BIBREF18 proposed the use of neighboring words in continuous representations to create new instances for every word in a tweet to augment the training dataset. Zhang BIBREF19 extracted all replaceable words from the given text, randomly chose $r$ of them to be replaced, and then substituted the replaceable words with synonyms from WordNet BIBREF5. Kolomiyets BIBREF20 replaced only the headwords under a task-specific assumption that temporal trigger words usually occur as headwords, and selected substitute words with top-$K$ scores given by the Latent Words LM BIBREF21, an LM based on fixed-length contexts. Fadaee BIBREF22 focused on the rare word problem in machine translation, replacing words in a source sentence with only rare words. A word in the translated sentence is also replaced using a word alignment method and a rightward LM. The work most similar to our research is Kobayashi BIBREF6, who used a fill-in-the-blank context for data augmentation by replacing every word in the sentence using a language model. In order to prevent the generated words from reversing the information related to the labels of the sentences, Kobayashi BIBREF6 introduced a conditional constraint to control the replacement of words. Unlike previous works, we adopt a deep bidirectional language model to apply replacement, and the attention mechanism within our model allows a more structured memory for handling long-term dependencies in text, which results in more general and robust improvement on various downstream tasks.", "id": 416, "question": "Does the new objective perform better than the original objective BERT is trained on?", "title": "Conditional BERT Contextual Augmentation" }, { "answers": [ "" ], "context": "In general, the language model (LM) models the probability of generating natural language sentences or documents.
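The WordNet replacement scheme attributed to Zhang et al. in the augmentation survey above can be sketched as follows; POS filtering and the original sampling of $r$ are omitted for brevity, and the helper function is a hypothetical simplification.

```python
# Replace r randomly chosen words with WordNet synonyms.
import random
from nltk.corpus import wordnet  # requires nltk.download('wordnet')

def synonym_augment(tokens, r=2, seed=0):
    rnd = random.Random(seed)
    out = list(tokens)
    candidates = [i for i, w in enumerate(out) if wordnet.synsets(w)]
    for i in rnd.sample(candidates, min(r, len(candidates))):
        lemmas = {l.name().replace('_', ' ')
                  for s in wordnet.synsets(out[i]) for l in s.lemmas()}
        lemmas.discard(out[i])          # do not replace a word with itself
        if lemmas:
            out[i] = rnd.choice(sorted(lemmas))
    return out

print(synonym_augment("the movie was a great success".split()))
```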
Given a sequence $\textbf {\textit {S}}$ of $N$ tokens, $\langle t_1,t_2,...,t_N\rangle $, a forward language model allows us to predict the probability of the sequence as: $p(\textbf {\textit {S}}) = \prod _{k=1}^{N} p(t_k | t_1, t_2, \dots , t_{k-1})$.", "id": 417, "question": "Are other pretrained language models also evaluated for contextual augmentation?", "title": "Conditional BERT Contextual Augmentation" }, { "answers": [ "" ], "context": "As shown in Fig. 1, our conditional BERT shares the same model architecture with the original BERT. The differences are the input representation and the training procedure.", "id": 418, "question": "Do the authors report performance of conditional BERT on tasks without data augmentation?", "title": "Conditional BERT Contextual Augmentation" }, { "answers": [ "" ], "context": "Question Generation (QG) concerns the task of “automatically generating questions from various inputs such as raw text, database, or semantic representation” BIBREF0. People have the ability to ask rich, creative, and revealing questions BIBREF1; e.g., asking Why did Gollum betray his master Frodo Baggins? after reading the fantasy novel The Lord of the Rings. How can machines be endowed with the ability to ask relevant and to-the-point questions, given various inputs? This is a challenging, complementary task to Question Answering (QA). Both QA and QG require an in-depth understanding of the input source and the ability to reason over relevant contexts. But beyond understanding, QG additionally integrates the challenges of Natural Language Generation (NLG), i.e., generating grammatically and semantically correct questions.", "id": 419, "question": "Do they cover data augmentation papers?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "Kim et al. (2019)" ], "context": "For the sake of clean exposition, we first provide a broad overview of QG by conceptualizing the problem from the perspective of the three introduced aspects: (1) its learning paradigm, (2) its input modalities, and (3) the cognitive level it involves. This combines past research with recent trends, providing insights on how NQG connects to traditional QG research.", "id": 420, "question": "What is the latest paper covered by this survey?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "" ], "context": "QG research traditionally considers two fundamental aspects in question asking: “What to ask” and “How to ask”. A typical QG task considers the identification of the important aspects to ask about (“what to ask”), and learning to realize such identified aspects as natural language (“how to ask”). Deciding what to ask is a form of machine understanding: a machine needs to capture important information dependent on the target application, akin to automatic summarization. Learning how to ask, however, focuses on aspects of the language quality such as grammatical correctness, semantic preciseness and language flexibility.", "id": 421, "question": "Do they survey visual question generation work?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "" ], "context": "Question generation is an NLG task for which the input has a wealth of possibilities depending on the application.
While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.", "id": 422, "question": "Do they survey multilingual aspects?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "Considering \"What\" and \"How\" separately versus jointly optimizing for both." ], "context": "Finally, we consider the required cognitive process behind question asking, a distinguishing factor for questions BIBREF32 . A typical framework that attempts to categorize the cognitive levels involved in question asking comes from Bloom's taxonomy BIBREF33 , which has undergone several revisions and currently has six cognitive levels: Remembering, Understanding, Applying, Analyzing, Evaluating and Creating BIBREF32 .", "id": 423, "question": "What learning paradigms do they cover in this survey?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "Textual inputs, knowledge bases, and images." ], "context": "As QG can be regarded as a dual task of QA, in principle any QA dataset can be used for QG as well. However, there are at least two corpus-related factors that affect the difficulty of question generation. The first is the required cognitive level to answer the question, as we discussed in the previous section. Current NQG has achieved promising results on datasets consisting mainly of shallow factoid questions, such as SQuAD BIBREF36 and MS MARCO BIBREF38 . However, the performance drops significantly on deep question datasets, such as LearningQ BIBREF8 , shown in Section \"Generation of Deep Questions\" . The second factor is the answer type, i.e., the expected form of the answer, typically having four settings: (1) the answer is a text span in the passage, which is usually the case for factoid questions, (2) human-generated, abstractive answer that may not appear in the passage, usually the case for deep questions, (3) multiple choice question where question and its distractors should be jointly generated, and (4) no given answer, which requires the model to automatically learn what is worthy to ask. The design of NQG system differs accordingly.", "id": 424, "question": "What are all the input modalities considered in prior work in question generation?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "" ], "context": "Although the datasets are commonly shared between QG and QA, it is not the case for evaluation: it is challenging to define a gold standard of proper questions to ask. Meaningful, syntactically correct, semantically sound and natural are all useful criteria, yet they are hard to quantify. Most QG systems involve human evaluation, commonly by randomly sampling a few hundred generated questions, and asking human annotators to rate them on a 5-point Likert scale. The average rank or the percentage of best-ranked questions are reported and used for quality marks.", "id": 425, "question": "Do they survey non-neural methods for question generation?", "title": "Recent Advances in Neural Question Generation" }, { "answers": [ "" ], "context": "Named Entity Recognition is a major natural language processing task that recognizes the proper labels such as LOC (Location), PER (Person), ORG (Organization), etc. 
Like words or phrases, named entities, being a sort of language constituent, also benefit from better representations for better processing. Continuous word representations, known as word embeddings, well capture semantic and syntactic regularities of words BIBREF0 and perform well in monolingual NE recognition BIBREF1 , BIBREF2 . Word embeddings also exhibit an isomorphic structure across languages BIBREF3 . On account of these characteristics, we attempt to utilize word embeddings to improve NE recognition for resource-poor languages with the help of richer ones. The state-of-the-art cross-lingual NE recognition methods are mainly based on annotation projection methods according to parallel corpora, translations BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and Wikipedia methods BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .", "id": 426, "question": "What is their model?", "title": "Open Named Entity Modeling from Embedding Distribution" }, { "answers": [ "" ], "context": "Seok BIBREF2 proposed that similar words are more likely to occupy close spatial positions, since their word embeddings carry syntactic and semantic clues. For an intuitive understanding, they listed the nearest neighbors of words included in the PER and ORG tags under the cosine similarity metric. To empirically verify this observation and explore the performance of this property in Euclidean space, we list Top-5 nearest neighbors under the Euclidean distance metric in Table 1 and illustrate a standard t-SNE BIBREF12 2-$D$ projection of the embeddings of three entity types with a sample of 500 words for each type.", "id": 427, "question": "Do they evaluate on NER data sets?", "title": "Open Named Entity Modeling from Embedding Distribution" }, { "answers": [ "" ], "context": "A lot of work has been done in the field of Twitter sentiment analysis to date. Sentiment analysis has been handled as a Natural Language Processing task at many levels of granularity. Most of these techniques use Machine Learning algorithms with features such as unigrams, n-grams, and Part-Of-Speech (POS) tags. However, the training datasets are often very large, and hence with such a large number of features, this process requires a lot of computation power and time. The following question arises: What to do if we do not have resources that provide such a great amount of computation power? The existing solution to this problem is to use a smaller sample of the dataset. For sentiment analysis, if we train the model using a smaller randomly chosen sample, then we get low accuracy [16, 17]. In this paper, we propose a novel technique to sample tweets for building a sentiment classification model so that we get higher accuracy than the state-of-the-art baseline method, namely Distant Supervision, using a smaller set of tweets. Our model has lower computation time and higher accuracy compared to the baseline model.", "id": 428, "question": "What previously proposed methods is this method compared against?", "title": "Efficient Twitter Sentiment Classification using Subjective Distant Supervision" }, { "answers": [ "" ], "context": "There has been a large amount of prior research in sentiment analysis of tweets. Read [10] shows that using emoticons as labels for positive and negative sentiment is effective for reducing dependencies in machine learning techniques. Alec Go [1] used Naive Bayes, SVM, and MaxEnt classifiers to train their model. This, as mentioned earlier, is our baseline model. 
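As a concrete illustration of the distant-supervision baseline described in entries 428-429, a minimal sketch assuming scikit-learn; the tweets are made up, and the real baseline trains on millions of emoticon-labelled tweets:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Distant supervision: emoticons act as noisy polarity labels
# and are stripped from the text before training.
raw = [("great game today :)", 1), ("so proud of this team :)", 1),
       ("stuck in traffic again :(", 0), ("lost my wallet :(", 0)]
texts = [t.replace(":)", "").replace(":(", "") for t, _ in raw]
labels = [y for _, y in raw]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
clf.fit(texts, labels)                    # unigram features, as in the baseline
print(clf.predict(["what a great win"]))  # expected: [1]
```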
Our model builds on this and achieves higher accuracy on a much smaller training dataset.", "id": 429, "question": "How is effective word score calculated?", "title": "Efficient Twitter Sentiment Classification using Subjective Distant Supervision" }, { "answers": [ "" ], "context": "Subjectivity refers to how someone's judgment is shaped by personal opinions and feelings instead of outside influences. An objective perspective is one that is not influenced by emotions, opinions, or personal feelings - it is a perspective based in fact, in things quantifiable and measurable. A subjective perspective is one open to greater interpretation based on personal feeling, emotion, aesthetics, etc.", "id": 430, "question": "How is tweet subjectivity measured?", "title": "Efficient Twitter Sentiment Classification using Subjective Distant Supervision" }, { "answers": [ "" ], "context": "Neural network based methods have made tremendous progress in image and text classification BIBREF0 , BIBREF1 . However, only recently has progress been made on more complex tasks that require logical reasoning. This success is based in part on the addition of memory and attention components to complex neural networks. For instance, memory networks BIBREF2 are able to reason over several facts written in natural language or (subject, relation, object) triplets. Attention mechanisms have been successful components in both machine translation BIBREF3 , BIBREF4 and image captioning models BIBREF5 .", "id": 431, "question": "Why is supporting fact supervision necessary for DMN?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "We begin by outlining the DMN for question answering and the modules as presented in BIBREF6 .", "id": 432, "question": "What does supporting fact supervision mean?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "We propose and compare several modeling choices for crucial components: input representation, attention mechanism, and memory update. The final DMN+ model obtains the highest accuracy on the bAbI-10k dataset without supporting facts and the VQA dataset BIBREF8 . Several design choices are motivated by intuition and accuracy improvements on that dataset.", "id": 433, "question": "What changes did they make to the input module?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "In the DMN specified in BIBREF6 , a single GRU is used to process all the words in the story, extracting sentence representations by storing the hidden states produced at the end of sentence markers. The GRU also provides a temporal component by allowing a sentence to know the content of the sentences that came before it. Whilst this input module worked well for bAbI-1k with supporting facts, as reported in BIBREF6 , it did not perform well on bAbI-10k without supporting facts (Sec. \"Model Analysis\" ).", "id": 434, "question": "What improvements did they make to the DMN?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "To apply the DMN to visual question answering, we introduce a new input module for images. The module splits an image into small local regions and considers each region equivalent to a sentence in the input module for text. The input module for VQA is composed of three parts, illustrated in Fig. 
3: local region feature extraction, visual feature embedding, and the input fusion layer introduced in Sec. \"Input Module for Text QA\" .", "id": 435, "question": "How does the model circumvent the lack of supporting facts during training?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "The episodic memory module, as depicted in Fig. 4 , retrieves information from the input facts $\\overleftrightarrow{F} = [\\overleftrightarrow{f_1}, \\hdots , \\overleftrightarrow{f_N}]$ provided to it by focusing attention on a subset of these facts. We implement this attention by associating a single scalar value, the attention gate $g^t_i$ , with each fact $\\overleftrightarrow{f}_i$ during pass $t$ . This is computed by allowing interactions between the fact and both the question representation and the episode memory state. ", "id": 436, "question": "Does the DMN+ model establish state-of-the-art results?", "title": "Dynamic Memory Networks for Visual and Textual Question Answering" }, { "answers": [ "" ], "context": "All text has style, whether it be formal or informal, polite or aggressive, colloquial, persuasive, or even robotic. Despite the success of style transfer in image processing BIBREF0, BIBREF1, there has been limited progress in the text domain, where disentangling style from content is particularly difficult.", "id": 437, "question": "Is this style generator compared to some baseline?", "title": "Low-Level Linguistic Controls for Style Transfer and Content Preservation" }, { "answers": [ "" ], "context": "Following in the footsteps of machine translation, style transfer in text has seen success by using parallel data. BIBREF5 use modern translations of Shakespeare plays to build a modern-to-Shakespearean model. BIBREF6 compile parallel data for formal and informal sentences, allowing them to successfully use various machine translation techniques. While parallel data may work for very specific styles, the difficulty of finding parallel texts dramatically limits this approach.", "id": 438, "question": "How do they perform manual evaluation, and what are the criteria?", "title": "Low-Level Linguistic Controls for Style Transfer and Content Preservation" }, { "answers": [ "" ], "context": "There has been a decent amount of work on this approach in the past few years BIBREF7, BIBREF8, mostly focusing on variations of an encoder-decoder framework in which style is modeled as a monolithic style embedding. The main obstacle is often to disentangle style from content, and this remains a challenging problem.", "id": 439, "question": "What metrics are used for automatic evaluation?", "title": "Low-Level Linguistic Controls for Style Transfer and Content Preservation" }, { "answers": [ "" ], "context": "Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11, BIBREF12. In each of these cases, the diversity in outputs is quite small given the constraints of the meaning representation, style is often constrained to interjections (like “yeah”), and there is no original style from which to transfer.", "id": 440, "question": "How do they know which words are content words?", "title": "Low-Level Linguistic Controls for Style Transfer and Content Preservation" }, { "answers": [ "" ], "context": "Style, in literary research, is anything but a stable concept, but it nonetheless has a long tradition of study in the digital humanities. 
In a remarkably early quantitative study of literature, BIBREF14 charts sentence-level stylistic attributes specific to a number of novelists. Half a century later, BIBREF15 builds on earlier work in information theory by BIBREF16, and defines a literary text as consisting of two “materials”: “the vocabulary, and some structural properties, the style, of its author.”", "id": 441, "question": "How do they model style as a suite of low-level linguistic controls, such as the frequency of pronouns, prepositions, and subordinate clause constructions?", "title": "Low-Level Linguistic Controls for Style Transfer and Content Preservation" }, { "answers": [ "" ], "context": "0pt*0*0", "id": 442, "question": "Do they report results only on English data?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "" ], "context": "Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year. According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.", "id": 443, "question": "What insights into the relationship between demographics and mental health are provided?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "" ], "context": "Mental Health Analysis using Social Media:", "id": 444, "question": "What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "Demographic information is predicted using weighted lexicon of terms." ], "context": "Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies, e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\" (see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a psychologist clinician and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) was verified by two human judges BIBREF46 . 
This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.", "id": 445, "question": "How does this framework facilitate demographic inference from social media?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "" ], "context": "We now provide an in-depth analysis of the visual and textual content of vulnerable users.", "id": 446, "question": "What types of features are used from each data type?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "The data are self-reported by Twitter users and then verified by two human experts." ], "context": "We leverage both the visual and textual content for predicting age and gender.", "id": 447, "question": "How is the data annotated?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "From Twitter profile descriptions of the users." ], "context": "We use the above findings for predicting depressive behavior. Our model exploits the early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of individual modality features. As opposed to the computationally expensive late fusion scheme, where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant ensemble learning method. It adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .", "id": 448, "question": "Where does the information on individual-level demographics come from?", "title": "Fusing Visual, Textual and Connectivity Clues for Studying Mental Health" }, { "answers": [ "" ], "context": "Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word. This task can benefit the compilation of dictionaries, especially dictionaries for Chinese as a foreign language (CFL) learners.", "id": 449, "question": "Is there an online demo of their system?", "title": "Incorporating Sememes into Chinese Definition Modeling" }, { "answers": [ "" ], "context": "The definition modeling task is to generate an explanatory sentence for the interpreted word. For example, given the word “旅馆” (hotel), a model should generate a sentence like this: “给旅行者提供食宿和其他服务的地方” (A place to provide residence and other services for tourists). Since distributed representations of words have been shown to capture lexical syntax and semantics, it is intuitive to employ word embeddings to generate natural language definitions.", "id": 450, "question": "Do they perform manual evaluation?", "title": "Incorporating Sememes into Chinese Definition Modeling" }, { "answers": [ "" ], "context": "The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. 
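The shadow-feature selection loop described in entry 448 can be sketched as follows; this is a simplified single pass with raw importances (the actual procedure iterates and compares Z-scores), assuming scikit-learn and fully synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # 5 candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

# Shadow features: per-column shuffled copies that destroy any relation to y.
shadows = rng.permuted(X, axis=0)
X_ext = np.hstack([X, shadows])

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_ext, y)
imp = forest.feature_importances_
threshold = imp[5:].max()                      # best shadow importance
keep = [i for i in range(5) if imp[i] > threshold]
print("selected features:", keep)              # likely [0, 1]
```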
Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \\dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .", "id": 451, "question": "Do they compare against Noraset et al. 2017?", "title": "Incorporating Sememes into Chinese Definition Modeling" }, { "answers": [ "" ], "context": "Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \\dots , s_N ]$ , we define the probability of generating the definition $y=[y_1, \\dots , y_t ]$ as: ", "id": 452, "question": "What is a sememe?", "title": "Incorporating Sememes into Chinese Definition Modeling" }, { "answers": [ "" ], "context": "The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. While recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) initially dominated the field, recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in so-called transformer models BIBREF0. This latter type of model caused a new revolution in NLP and led to popular language models like GPT-2 BIBREF1, BIBREF2 and ELMo BIBREF3. BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left-to-right or the other way around. This model was later re-implemented, critically evaluated and improved in the RoBERTa model BIBREF5.", "id": 453, "question": "What data did they use?", "title": "RobBERT: a Dutch RoBERTa-based Language Model" }, { "answers": [ "" ], "context": "Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they vastly improved state-of-the-art results for English to German in an efficient manner BIBREF0. This transformer model architecture resulted in a new paradigm in NLP with the migration from sequence-to-sequence recurrent neural networks to transformer-based models by removing the recurrent component and only keeping attention. This cornerstone was used for BERT, a transformer model that obtained state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference BIBREF4. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is word masking (also called the Cloze task BIBREF9 or masked language model (MLM)), where the model has to guess which word is masked in certain position in the text. The second task is next sentence prediction. This is done by predicting if two sentences are subsequent in the corpus, or if they are randomly sampled from the corpus. These tasks allowed the model to create internal representations about a language, which could thereafter be reused for different language tasks. 
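To illustrate the masked-language-model pretraining task described in entry 454, a minimal sketch of how masked training examples can be built; it omits BERT's 80/10/10 replacement details and uses a toy sentence:

```python
import random

random.seed(0)

def mask_tokens(tokens, mask_rate=0.15):
    """Replace ~15% of tokens with [MASK]; the model must recover the originals."""
    inp, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            inp.append("[MASK]")
            targets.append(tok)   # loss is computed at this position
        else:
            inp.append(tok)
            targets.append(None)  # no loss at unmasked positions
    return inp, targets

print(mask_tokens("the cat sat on the mat".split()))
```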
This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a diverse range of tasks and still outperform previous architectures BIBREF4.", "id": 454, "question": "What is the state of the art?", "title": "RobBERT: a Dutch RoBERTa-based Language Model" }, { "answers": [ "" ], "context": "This section describes the data and training regime we used to train our Dutch RoBERTa-based language model called RobBERT.", "id": 455, "question": "What language tasks did they experiment on?", "title": "RobBERT: a Dutch RoBERTa-based Language Model" }, { "answers": [ "Average reward across 5 seeds shows that NLP representations are robust to changes in the environment as well as task-nuisances" ], "context": "“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations.”", "id": 456, "question": "What result from experiments suggests that natural language based agents are more robust?", "title": "Natural Language State Representation for Reinforcement Learning" }, { "answers": [ "" ], "context": "In Reinforcement Learning the goal is to learn a policy $\\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\\mathcal {A}$, with the objective of maximizing a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.", "id": 457, "question": "How much better is the performance of natural language based agents in the experiments?", "title": "Natural Language State Representation for Reinforcement Learning" }, { "answers": [ "" ], "context": "A word embedding is a mapping from a word $w$ to a vector $\\mathbf {w} \\in \\mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\\mathbf {w} \\in \\mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \\ll |D|$ . 
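A toy contrast between the two representations described in entry 458: the 1-hot BoW vector of dictionary size |D| versus a dense vector with d << |D| (the "more efficient methods" the passage continues with); all numbers are illustrative:

```python
import numpy as np

D = ["cat", "dog", "sat", "the"]             # toy dictionary, |D| = 4
one_hot = {w: np.eye(len(D))[i] for i, w in enumerate(D)}  # BoW: unique 1-hot per word

d = 2                                         # dense dimension, d << |D|
rng = np.random.default_rng(0)
dense = {w: rng.normal(size=d) for w in D}    # stand-in for learned embeddings

print(one_hot["cat"])  # [1. 0. 0. 0.]
print(dense["cat"])    # compact 2-d vector
```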
These methods are also known as distributional embeddings.", "id": 458, "question": "How much faster natural language agents converge in performed experiments?", "title": "Natural Language State Representation for Reinforcement Learning" }, { "answers": [ "" ], "context": "Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.", "id": 459, "question": "What experiments authors perform?", "title": "Natural Language State Representation for Reinforcement Learning" }, { "answers": [ "" ], "context": "In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.", "id": 460, "question": "How is state to learn and complete tasks represented via natural language?", "title": "Natural Language State Representation for Reinforcement Learning" }, { "answers": [ "" ], "context": "The development of automatic tools for the summarization of large corpora of documents has attracted a widespread interest in recent years. With fields of application ranging from medical sciences to finance and legal science, these summarization systems considerably reduce the time required for knowledge acquisition and decision making, by identifying and formatting the relevant information from a collection of documents. Since most applications involve large corpora rather than single documents, summarization systems developed recently are meant to produce summaries of multiple documents. Similarly, the interest has shifted from generic towards query-oriented summarization, in which a query expresses the user's needs. Moreover, existing summarizers are generally extractive, namely they produce summaries by extracting relevant sentences from the original corpus.", "id": 461, "question": "How does the model compare with the MMR baseline?", "title": "Query-oriented text summarization based on hypergraph transversals" }, { "answers": [ "" ], "context": "People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. 
Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.", "id": 462, "question": "Does the paper discuss previous models which have been applied to the same task?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "Google N-grams\nCOHA\nMoral Foundations Dictionary (MFD)\n" ], "context": "An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.", "id": 463, "question": "Which datasets are used in the paper?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "" ], "context": "Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.", "id": 464, "question": "How does the parameter-free model work?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence" ], "context": "To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.", "id": 465, "question": "How do they quantify moral relevance?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "" ], "context": "We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. 
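One plausible instantiation of the seed-based posterior inference described in entries 465-466 is a centroid-plus-softmax scheme; the sketch below uses made-up 3-d embeddings and is an assumption, not necessarily the paper's exact model:

```python
import numpy as np

def posterior(query, seeds):
    """Softmax over classes, scored by cosine similarity of the query
    embedding to each class's mean seed embedding."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = {c: cos(query, np.mean(vecs, axis=0)) for c, vecs in seeds.items()}
    z = np.exp(np.array(list(scores.values())))
    return dict(zip(scores, z / z.sum()))

seeds = {"positive": [np.array([1.0, 0.2, 0.0]), np.array([0.9, 0.1, 0.1])],
         "negative": [np.array([-1.0, 0.0, 0.3]), np.array([-0.8, 0.2, 0.2])]}
query = np.array([0.7, 0.15, 0.05])   # embedding of the query concept
print(posterior(query, seeds))        # mass concentrated on "positive"
```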
In each tier, the model receives a query word embedding vector $\\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.", "id": 466, "question": "Which fine-grained moral dimension examples do they showcase?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "" ], "context": "To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.", "id": 467, "question": "Which dataset sources do they use to demonstrate moral sentiment through history?", "title": "Text-based inference of moral sentiment change" }, { "answers": [ "" ], "context": "Interactive fictions—also called text-adventure games or text-based games—are games in which a player interacts with a virtual world purely through textual natural language—receiving descriptions of what they “see” and writing out how they want to act; an example can be seen in Figure FIGREF2. Interactive fiction games are often structured as puzzles, or quests, set within the confines of a given game world. Interactive fictions have been adopted as a test-bed for real-time game playing agents BIBREF0, BIBREF1, BIBREF2. Unlike graphical games, interactive fictions test agents' abilities to infer the state of the world through communication and to indirectly affect change in the world through language. Interactive fictions are typically modeled after real or fantasy worlds; commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3, BIBREF4.", "id": 468, "question": "How well did the system do?", "title": "Bringing Stories Alive: Generating Interactive Fiction Worlds" }, { "answers": [ "" ], "context": "There has been a slew of recent work in developing agents that can play text games BIBREF0, BIBREF5, BIBREF1, BIBREF6. BIBREF7 in particular use knowledge graphs as state representations for game-playing agents. BIBREF8 propose QAit, a set of question answering tasks framed as text-based or interactive fiction games. QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment. These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them.", "id": 469, "question": "How is the information extracted?", "title": "Bringing Stories Alive: Generating Interactive Fiction Worlds" }, { "answers": [ "" ], "context": "Over thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desire for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.", "id": 470, "question": "What are some guidelines in writing input vernacular so that the model can generate ", "title": "Generating Classical Chinese Poems from Vernacular Chinese" }, { "answers": [ "Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBLEU of the best model is 6.57 compared to best baseline 5.50." 
], "context": "Classical Chinese Poem Generation: Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema in which each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There are also studies that focus on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.", "id": 471, "question": "How much better is the proposed model in perplexity and BLEU score than typical UMT models?", "title": "Generating Classical Chinese Poems from Vernacular Chinese" }, { "answers": [ "" ], "context": "We formulate our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:", "id": 472, "question": "What dataset is used for training?", "title": "Generating Classical Chinese Poems from Vernacular Chinese" }, { "answers": [ "" ], "context": "Task-oriented dialogue systems, which help users to achieve specific goals with natural language, are attracting more and more research attention. With the success of the sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works tried to model the task-oriented dialogue as the Seq2Seq generation of responses from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules.", "id": 473, "question": "What were the evaluation metrics?", "title": "Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever" }, { "answers": [ "" ], "context": "In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.", "id": 474, "question": "What were the baseline systems?", "title": "Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever" }, { "answers": [ "" ], "context": "Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\\rbrace $. 
At the $i^{\\text{th}}$ turn of the dialogue, we aggregate the dialogue context, which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$, and use $\\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.", "id": 475, "question": "Which dialog datasets did they experiment with?", "title": "Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever" }, { "answers": [ "" ], "context": "In this paper, we assume access to a relational-database-like KB $B$, which consists of $|\\mathcal {R}|$ rows and $|\\mathcal {C}|$ columns. The value of the entity in the $j^{\\text{th}}$ row and the $i^{\\text{th}}$ column is noted as $v_{j, i}$.", "id": 476, "question": "What KB is used?", "title": "Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever" }, { "answers": [ "" ], "context": "Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language. Some popular multi-modal tasks combining these three modalities, and their differences, are highlighted in Table TABREF1 .", "id": 477, "question": "At which interval do they extract video and audio frames?", "title": "From FiLM to Video: Multi-turn Question Answering with Multi-modal Context" }, { "answers": [ "" ], "context": "With the availability of large conversational corpora from sources like Reddit and Twitter, there has been a lot of recent work on end-to-end modelling of dialogue for open domains. BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response. They observed this to be more challenging than machine translation tasks due to the larger diversity of possible responses. Among approaches that just use the previous utterance to generate the current response, BIBREF13 proposed a response generation model based on the encoder-decoder framework. BIBREF14 also proposed an encoder-decoder based neural network architecture that uses the previous two utterances to generate the current response. Among discriminative methods (i.e. methods that produce a score for utterances from a set and then rank them), BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context. BIBREF16 extended prior work on encoder-decoder-based models to multi-turn conversations. They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance. A higher-level recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings. This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time. 
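A minimal sketch of the hierarchical encoder-decoder (hred) scheme described at the end of entry 478, assuming PyTorch; layer sizes and names are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class HRED(nn.Module):
    """Minimal hierarchical recurrent encoder-decoder (hred) sketch."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)  # encodes each utterance
        self.ctx_enc = nn.GRU(hid_dim, hid_dim, batch_first=True)  # higher-level RNN over utterances
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # generates the response
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, turns, response):
        # turns: (batch, n_turns, seq_len) token ids; response: (batch, resp_len)
        b, n, t = turns.shape
        _, h_utt = self.utt_enc(self.embed(turns.view(b * n, t)))  # one vector per utterance
        utt_vecs = h_utt.squeeze(0).view(b, n, -1)
        _, dialogue_state = self.ctx_enc(utt_vecs)                 # maintains the dialogue state
        dec_out, _ = self.decoder(self.embed(response), dialogue_state)
        return self.out(dec_out)                                   # (batch, resp_len, vocab_size)

model = HRED(vocab_size=1000)
turns = torch.randint(0, 1000, (2, 3, 7))   # 2 dialogues, 3 turns, 7 tokens each
response = torch.randint(0, 1000, (2, 5))
print(model(turns, response).shape)         # torch.Size([2, 5, 1000])
```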
In follow-up work, BIBREF17 used a latent stochastic variable to condition the generation process, which aided their model in producing longer coherent outputs that better retain the context.", "id": 478, "question": "Do they use pretrained word vectors for dialogue context embedding?", "title": "From FiLM to Video: Multi-turn Question Answering with Multi-modal Context" }, { "answers": [ "Answer with content missing: (list missing) \nScheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.\n\nYes." ], "context": "The avsd dataset BIBREF28 consists of dialogues collected via amt. Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video. The Charades dataset has multi-action short videos and it provides text descriptions for these videos, which the avsd challenge also distributes as the caption. The avsd dataset has been collected using a similar methodology to the visdial dataset. In avsd, each dialogue turn consists of a question and answer pair. One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer. The questioner sees three static frames from the video and has to ask questions. The answerer sees the video and answers the questions asked by the questioner. After 10 such qa turns, the questioner wraps up by writing a summary of the video based on the conversation.", "id": 479, "question": "Do they use any training method other than scheduled sampling?", "title": "From FiLM to Video: Multi-turn Question Answering with Multi-modal Context" }, { "answers": [ "" ], "context": "With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. The growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on the micro-blogging website Twitter presents a unique opportunity to scan these posts. These short texts (e.g., \"tweets\") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional web outlets provide emergency phone numbers (e.g., 100, 911), etc., and are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform, i.e., social media, and provides an alternative to these conventional methods. In case of their failure or when such means are busy/occupied, an alternative could prove to be life saving.", "id": 480, "question": "Is the web interface publicly accessible?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 . We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. 
Our work is motivated by the following factors: (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.", "id": 481, "question": "Is the Android application publicly available?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "We propose a software architecture for Emergency detection and visualization as shown in figure FIGREF9 . We collect data using the Twitter API, and perform language pre-processing before applying a classification model. Tweets are labelled manually with <emergency> and <non-emergency> labels, and later classified manually to provide labels according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.", "id": 482, "question": "What classifier is used for emergency categorization?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, special symbols like @ along with the user mentions, hashtags and any associated text. We also replace special symbols by blank spaces, and incorporate the module as shown in figure FIGREF9 .", "id": 483, "question": "What classifier is used for emergency detection?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results and later choose SVM as the stage-one classification model, owing to a better F-score. The training is performed on tweets labeled with the classes <emergency> and <non-emergency>, based on unigrams as features. We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques.", "id": 484, "question": "Do the tweets come from any individual?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which later classifies tweets into appropriate categories in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 .", "id": 485, "question": "How many categories are there?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "We use the Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude. 
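A minimal sketch of the reverse-geocoding lookup described in entry 486, assuming the requests library and a hypothetical API key; the endpoint and the results/formatted_address fields are those of the public Geocoding API:

```python
import requests

API_KEY = "YOUR_API_KEY"      # hypothetical placeholder
lat, lng = 19.0760, 72.8777   # Mumbai

resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"latlng": f"{lat},{lng}", "key": API_KEY},
    timeout=10,
)
results = resp.json().get("results", [])
if results:
    print(results[0]["formatted_address"])  # human-readable location for the map pin
```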
Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquakes, cyclones, fire accidents, etc. Since we currently collect tweets with a location filter for the city of \"Mumbai\", we display its map location on the interface. The possible occurrences of such incidents are displayed on the map as soon as our system is able to detect them.", "id": 486, "question": "What was the baseline?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "We evaluate our system using automated and manual evaluation techniques. We perform 10-fold cross validation to obtain the F-scores for our classification systems. We use the following technique for dataset creation. We test the system in real-time environments, and tweet about fires at random locations in our city, using test accounts. Our system was able to detect such tweets and display them with their locations shown on the map.", "id": 487, "question": "Are the tweets specific to a region?", "title": "Civique: Using Social Media to Detect Urban Emergencies" }, { "answers": [ "" ], "context": "Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .", "id": 488, "question": "Do they release MED?", "title": "Can neural networks understand monotonicity reasoning?" }, { "answers": [ "" ], "context": "As an example of a monotonicity inference, consider the example with the determiner every in ( \"Monotonicity\" ); here the premise $P$ entails the hypothesis $H$ .", "id": 489, "question": "What NLI models do they analyze?", "title": "Can neural networks understand monotonicity reasoning?" }, { "answers": [ "Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific." ], "context": "To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.", "id": 490, "question": "How do they define upward and downward reasoning?", "title": "Can neural networks understand monotonicity reasoning?" }, { "answers": [ "" ], "context": "We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.", "id": 491, "question": "What is monotonicity reasoning?", "title": "Can neural networks understand monotonicity reasoning?" 
}, { "answers": [ "" ], "context": "With the growing demand for human-computer/robot interaction systems, detecting the emotional state of the user can help a conversational agent respond at an appropriate emotional level. Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation, emotion-based text-to-speech, personalisation, etc. Human emotional states can be expressed verbally and non-verbally BIBREF0, BIBREF1; however, while building an interactive dialogue system, the interface needs dialogue acts. A typical dialogue system consists of a language understanding module which is required to determine the meaning of and intention in the human input utterances BIBREF2, BIBREF3. Also, in discourse or conversational analysis, dialogue acts are the main linguistic features to consider BIBREF4. A dialogue act provides an intention and performative function in an utterance of the dialogue. For example, it can infer a user's intention by distinguishing Question, Answer, Request, Agree/Reject, etc. and performative functions such as Acknowledgement, Conversational-opening or -closing, Thanking, etc. The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5.", "id": 492, "question": "What other relations were found in the datasets?", "title": "Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators." }, { "answers": [ "" ], "context": "There are two emotion taxonomies: (1) discrete emotion categories (DEC) and (2) a fine-grained dimensional basis of emotion states (DBE). The DECs are Joy, Sadness, Fear, Surprise, Disgust, Anger and Neutral, identified by Ekman et al. ekman1987universalemos. The DBE of the emotion is usually elicited from two or three dimensions BIBREF1, BIBREF11, BIBREF12. A two-dimensional model is commonly used with Valence and Arousal (also called activation), and in the three-dimensional model, the third dimension is Dominance. IEMOCAP is annotated with all DECs and two additional emotion classes, Frustration and Excited. IEMOCAP is also annotated with three DBE, which include Valence, Arousal and Dominance BIBREF6. MELD BIBREF8, which is an evolved version of the Emotionlines dataset developed by BIBREF13, is annotated with exactly 7 DECs and sentiments (positive, negative and neutral).", "id": 493, "question": "How does the ensemble annotator extract the final label?", "title": "Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators." }, { "answers": [ "" ], "context": "There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance, not only to present information but to the action that is performed. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. During the increased development of dialogue systems and discourse analysis in recent decades, a standard taxonomy, called the Dialogue Act Markup in Several Layers (DAMSL) tag set, was introduced. 
According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17.", "id": 494, "question": "How were dialogue act labels defined?", "title": "Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators." }, { "answers": [ "" ], "context": "We adopted the neural architectures of Bothe et al. bothe2018discourse, where the two variants are: a non-context model (classifying at the utterance level) and a context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available to practice automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (whereas in this work we use pre-trained ELMo embeddings, which perform better but are too expensive, computationally and in storage, to host in the online tool). The annotators are:", "id": 495, "question": "How many models were used?", "title": "Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators." }, { "answers": [ "" ], "context": "Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.", "id": 496, "question": "Do they compare their neural network against any other model?", "title": "Synchronising audio and ultrasound by learning cross-modal embeddings" }, { "answers": [ "Use an existing one" ], "context": "Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . 
Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .", "id": 497, "question": "Do they annotate their own dataset or use an existing one?", "title": "Synchronising audio and ultrasound by learning cross-modal embeddings" }, { "answers": [ "" ], "context": "Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from the input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near-perfect accuracy ( INLINEFORM0 99 INLINEFORM1 ) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants.", "id": 498, "question": "Does their neural network predict a single offset in a recording?", "title": "Synchronising audio and ultrasound by learning cross-modal embeddings" }, { "answers": [ "CNN" ], "context": "Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsagittal view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:", "id": 499, "question": "What kind of neural network architecture do they use?", "title": "Synchronising audio and ultrasound by learning cross-modal embeddings" }, { "answers": [ "" ], "context": "School of Computer Science and Engineering, Nanyang Technological University, Singapore", "id": 500, "question": "How are aspects identified in aspect extraction?", "title": "Basic tasks of sentiment analysis" }, { "answers": [ "MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC" ], "context": "Named entity recognition and classification (NERC, NER for short), the task of recognising and assigning a class to mentions of proper names (named entities, NEs) in text, has attracted many years of research BIBREF0 , BIBREF1 , analyses BIBREF2 , starting from the first MUC challenge in 1995 BIBREF3 . 
Recognising entities is key to many applications, including text summarisation BIBREF4 , search BIBREF5 , the semantic web BIBREF6 , topic modelling BIBREF7 , and machine translation BIBREF8 , BIBREF9 .", "id": 501, "question": "What web and user-generated NER datasets are used for the analysis?", "title": "Generalisation in Named Entity Recognition: A Quantitative Analysis" }, { "answers": [ "1000 hours of WSJ audio data" ], "context": "Current state-of-the-art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 . Recently, pre-training of neural networks has emerged as an effective technique for settings where labeled data is scarce. The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data are available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited. This is particularly interesting for tasks where substantial effort is required to obtain labeled data, such as speech recognition.", "id": 502, "question": "Which unlabeled data do they pretrain with?", "title": "wav2vec: Unsupervised Pre-training for Speech Recognition" }, { "answers": [ "wav2vec has 12 convolutional layers" ], "context": "Given an audio signal as input, we optimize our model (§ SECREF3 ) to predict future samples from a given signal context. A common problem with these approaches is the requirement to accurately model the data distribution INLINEFORM0 , which is challenging. We avoid this problem by first encoding raw speech samples INLINEFORM1 into a feature representation INLINEFORM2 at a lower temporal frequency and then implicitly modeling a density function INLINEFORM3 , similarly to BIBREF15 .", "id": 503, "question": "How many convolutional layers does their model have?", "title": "wav2vec: Unsupervised Pre-training for Speech Recognition" }, { "answers": [ "" ], "context": "Our model takes a raw audio signal as input and then applies two networks. The encoder network embeds the audio signal in latent space and the context network combines multiple time-steps of the encoder to obtain contextualized representations (Figure FIGREF2 ). Both networks are then used to compute the objective function (§ SECREF4 ).", "id": 504, "question": "Do they explore how much training data is needed for which magnitude of improvement for WER?", "title": "wav2vec: Unsupervised Pre-training for Speech Recognition" }, { "answers": [ "" ], "context": "State-of-the-art morphological taggers require thousands of annotated sentences to train. For the majority of the world's languages, however, sufficient, large-scale annotation is not available and obtaining it would often be infeasible. Accordingly, an important road forward in low-resource NLP is the development of methods that allow for the training of high-quality tools from smaller amounts of data. In this work, we focus on transfer learning—we train a recurrent neural tagger for a low-resource language jointly with a tagger for a related high-resource language.
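A minimal sketch of this kind of parameter sharing; the module names and sizes are our own assumptions, and the tagger is simplified to word-level classification for brevity:

```python
import torch
import torch.nn as nn

class JointCharTagger(nn.Module):
    """Character encoder shared across languages, one tag classifier per language."""
    def __init__(self, n_chars, n_tags_per_lang, char_dim=64, hidden=128):
        super().__init__()
        # Shared parameters: every language reads words through the same
        # character embeddings and character-level BiLSTM.
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        # Language-specific classifiers over the shared representation.
        self.heads = nn.ModuleDict({
            lang: nn.Linear(2 * hidden, n) for lang, n in n_tags_per_lang.items()
        })

    def forward(self, char_ids, lang):
        emb = self.char_emb(char_ids)            # (batch, chars_per_word, char_dim)
        _, (h, _) = self.char_lstm(emb)          # h: (2, batch, hidden)
        word_repr = torch.cat([h[0], h[1]], -1)  # forward and backward final states
        return self.heads[lang](word_repr)       # tag scores for the given language
```

Batches from both languages update the shared encoder, which is what lets the low-resource language benefit from the high-resource one.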
Forcing the models to share character-level features among the languages allows large gains in accuracy when tagging the low-resource languages, while maintaining (or even improving) accuracy on the high-resource language.", "id": 505, "question": "How are character representations from various languages joined?", "title": "Cross-lingual, Character-Level Neural Morphological Tagging" }, { "answers": [ "" ], "context": "Many languages in the world exhibit rich inflectional morphology: the form of individual words mutates to reflect the syntactic function. For example, the Spanish verb soñar will appear as sueño in the first person present singular, but soñáis in the second person present plural, depending on the bundle of syntaco-semantic attributes associated with the given form (in a sentential context). For concreteness, we list a more complete table of Spanish verbal inflections in tab:paradigm. Note that some languages, e.g. the Northeastern Caucasian language Archi, display a veritable cornucopia of potential forms with the size of the verbal paradigm exceeding 10,000 BIBREF10 .", "id": 506, "question": "On which dataset is the experiment conducted?", "title": "Cross-lingual, Character-Level Neural Morphological Tagging" }, { "answers": [ "" ], "context": "Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?\"", "id": 507, "question": "Do they train their own RE model?", "title": "Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping" }, { "answers": [ "In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents" ], "context": "We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.", "id": 508, "question": "How big are the datasets?", "title": "Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping" }, { "answers": [ "" ], "context": "In recent years, vector representations of words, known as word embeddings, have become ubiquitous for many NLP applications BIBREF12, BIBREF13, BIBREF14.", "id": 509, "question": "What languages do they experiment on?", "title": "Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping" }, { "answers": [ "" ], "context": "To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.", "id": 510, "question": "What datasets are used?", "title": "Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping" }, { "answers": [ "" ], "context": "This work focuses on the problem of finding objects in an image based on natural language descriptions. Existing solutions take into account both the image and the query BIBREF0, BIBREF1, BIBREF2.
In our problem formulation, rather than having the entire text, we are given only a prefix of the text; the task is then to complete the text based on a language model and the image, and to find a relevant object in the image. We decompose the problem into three components: (i) completing the query from a text prefix and an image; (ii) estimating probabilities of objects based on the completed text; and (iii) segmenting and classifying all instances in the image. We combine, extend, and modify state-of-the-art components: (i) we extend a FactorCell LSTM BIBREF3, BIBREF4, which conditionally completes text, to complete a query from both a text prefix and an image; (ii) we fine-tune a BERT embedding to compute instance probabilities from a complete sentence; and (iii) we use Mask-RCNN BIBREF5 for instance segmentation.", "id": 511, "question": "How better does auto-completion perform when using both language and vision than only language?", "title": "Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities" }, { "answers": [ "" ], "context": "Figure FIGREF2 shows the architecture of our approach. First, we extract image features with a pre-trained CNN. We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query. The completed query is then fed into a fine-tuned BERT embedding to estimate instance probabilities, which in turn are used for instance selection.", "id": 512, "question": "How big is data provided by this research?", "title": "Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities" }, { "answers": [ "" ], "context": "We utilize the FactorCell (FC) adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries. The FactorCell is an LSTM with a context-dependent weight matrix $\mathbf {W^{\prime }} = \mathbf {W} + \mathbf {A}$ in place of $\mathbf {W}$. Given a character embedding $w_t \in \mathbb {R}^e$ and a previous hidden state $h_{t-1} \in \mathbb {R}^h$, the adaptation matrix $\mathbf {A}$ is formed by taking the product of the context, c, with two basis tensors $\mathbf {Z_L} \in \mathbb {R}^{m\times (e+h)\times r}$ and $\mathbf {Z_R} \in \mathbb {R}^{r\times h \times m}$.", "id": 513, "question": "How they complete a user query prefix conditioned upon an image?", "title": "Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities" }, { "answers": [ "" ], "context": "Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .", "id": 514, "question": "Did the collection process use a WoZ method?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "" ], "context": "This section reviews relevant prior work on following navigation instructions.
Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .", "id": 515, "question": "By how much did their model outperform the baseline?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "the baseline where path generation uses a standard sequence-to-sequence model augmented with an attention mechanism and path verification uses depth-first search" ], "context": "Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.", "id": 516, "question": "What baselines did they compare their model with?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81" ], "context": "We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.", "id": 517, "question": "What was the performance of their model?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "exact match, f1 score, edit distance and goal match" ], "context": "We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).", "id": 518, "question": "What evaluation metrics are used?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "" ], "context": "We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms.
To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.", "id": 519, "question": "Did the authors use a crowdsourcing platform?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "using Amazon Mechanical Turk using simulated environments with topological maps" ], "context": "This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.", "id": 520, "question": "How were the navigation instructions collected?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "english language" ], "context": "While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3\" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor\", “cf\", “lt\", “cf\", “iol\"). In this plan, “R-1\",“C-1\", “C-0\", and “O-3\" are symbols for locations (nodes) in the graph.", "id": 521, "question": "What language is the experiment done in?", "title": "Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation" }, { "answers": [ "distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort" ], "context": "Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.", "id": 522, "question": "What additional features are proposed for future work?", "title": "Analysis of Risk Factor Domains in Psychosis Patient Health Records" }, { "answers": [ "Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models." ], "context": "McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. 
Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.", "id": 523, "question": "What are their initial results on this task?", "title": "Analysis of Risk Factor Domains in Psychosis Patient Health Records" }, { "answers": [ "" ], "context": "[2]The vast majority of patients in our target cohort are", "id": 524, "question": "What datasets did the authors use?", "title": "Analysis of Risk Factor Domains in Psychosis Patient Health Records" }, { "answers": [ "" ], "context": "Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest-frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem, followed by inaccurate and poor translation results. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information with too many rare and unknown words in the training corpus.", "id": 525, "question": "How many linguistic and semantic features are learned?", "title": "Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation" }, { "answers": [ "A BPE model is applied to the stem after morpheme segmentation." ], "context": "We elaborate on two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to assist the NMT model in identifying the morpheme boundaries and capturing the semantic information effectively. The sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.", "id": 526, "question": "How is morphology knowledge implemented in the method?", "title": "Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation" }, { "answers": [ "" ], "context": "The words of Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffixes are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation segments a complex word into its morpheme units of stem and suffixes.
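As a toy illustration of the kind of output such a segmentation produces; the suffix inventory and the greedy stripping below are invented for the example, whereas real systems use a trained morphological segmenter:

```python
# Toy greedy suffix stripping for a Turkish-like word; purely illustrative.
SUFFIXES = ["de", "imiz", "ler"]  # hypothetical inventory

def segment(word):
    morphemes = []
    changed = True
    while changed:
        changed = False
        for suf in sorted(SUFFIXES, key=len, reverse=True):
            # Keep at least a two-character stem.
            if word.endswith(suf) and len(word) > len(suf) + 1:
                morphemes.insert(0, suf)
                word = word[: -len(suf)]
                changed = True
                break
    return [word] + morphemes  # stem first, then suffixes in order

print(segment("evlerimizde"))  # ['ev', 'ler', 'imiz', 'de'] ("in our houses")
```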
This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy phenomena in highly-inflected languages.", "id": 527, "question": "How does the word segmentation method work?", "title": "Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation" }, { "answers": [ "" ], "context": "In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can then be denoted as two parts: “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.", "id": 528, "question": "Is the word segmentation method independently evaluated?", "title": "Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation" }, { "answers": [ "" ], "context": "In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two-pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.", "id": 529, "question": "Do they normalize the calculated intermediate output hypotheses to compensate for the incompleteness?", "title": "Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks" }, { "answers": [ "" ], "context": "A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.", "id": 530, "question": "How many layers do they use in their best performing network?", "title": "Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks" }, { "answers": [ "" ], "context": "In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6.
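A minimal sketch of this re-presentation step; the projections, dimensions and the choice to concatenate the two streams along the time axis are our own reading of the description, so the paper's exact construction may differ:

```python
import torch
import torch.nn as nn

class FeatureRePresentation(nn.Module):
    """Re-inject the original features at an intermediate transformer layer."""
    def __init__(self, feat_dim, model_dim, n_heads=8):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, model_dim)     # project input features
        self.proj_hidden = nn.Linear(model_dim, model_dim)  # project layer activations
        self.attn = nn.MultiheadAttention(model_dim, n_heads, batch_first=True)

    def forward(self, features, hidden):
        # Self-attention over both streams lets the network reconsider the raw
        # features in the light of the partially processed hidden states.
        mixed = torch.cat([self.proj_feat(features), self.proj_hidden(hidden)], dim=1)
        out, _ = self.attn(mixed, mixed, mixed)
        # Keep only the positions corresponding to the hidden states.
        return out[:, features.size(1):, :]
```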
Here, we have used a 24-layer network, with feature re-presentation after the 12th layer.", "id": 531, "question": "Do they just sum up all the losses they calculate to end up with one single loss?", "title": "Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks" }, { "answers": [ "" ], "context": "We process the features in the intermediate layer by concatenating a projection of the original features with a projection of previous hidden layer activations, and then applying self-attention.", "id": 532, "question": "Does their model take more time to train than regular transformer models?", "title": "Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks" }, { "answers": [ "" ], "context": "A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.", "id": 533, "question": "Are agglutinative languages used in the prediction of both prefixing and suffixing languages?", "title": "Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge" }, { "answers": [ "" ], "context": "Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.", "id": 534, "question": "What is an example of a prefixing language?", "title": "Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge" }, { "answers": [ "Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors" ], "context": "Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as:", "id": 535, "question": "How is the performance on the task evaluated?", "title": "Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge" }, { "answers": [ "" ], "context": "The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages.
We briefly describe the model below and refer the reader to the original paper for more details.", "id": 536, "question": "What are the three target languages studied in the paper?", "title": "Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge" }, { "answers": [ "" ], "context": "The Cambridge Handbook of Endangered Languages BIBREF3 estimates that at least half of the 7,000 languages currently spoken worldwide will no longer exist by the end of this century. For these endangered languages, data collection campaigns have to accommodate the challenge that many of them are from oral tradition, and producing transcriptions is costly. This transcription bottleneck problem can be handled by translating into a widely spoken language to ensure subsequent interpretability of the collected recordings, and such parallel corpora have been recently created by aligning the collected audio with translations in a well-resourced language BIBREF1, BIBREF2, BIBREF4. Moreover, some linguists have suggested that more than one translation should be collected to capture deeper layers of meaning BIBREF5.", "id": 537, "question": "Is the model evaluated against any baseline?", "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages" }, { "answers": [ "" ], "context": "In this work we extend the bilingual Mboshi-French parallel corpus BIBREF2, fruit of the documentation process of Mboshi (Bantu C25), an endangered language spoken in Congo-Brazzaville. The corpus contains 5,130 utterances, for which it provides audio, transcriptions and translations in French. We translate the French into four other well-resourced languages through the use of the DeepL translator. The languages added to the dataset are: English, German, Portuguese and Spanish. Table shows some statistics for the produced Multilingual Mboshi parallel corpus.", "id": 538, "question": "Does the paper report the accuracy of the model?", "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages" }, { "answers": [ "" ], "context": "We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, after training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using them to cluster neighbor phonemes aligned to the same translation word, we are able to create segmentation on the target side. The product of this approach is a set of (discovered-units, translation words) pairs.", "id": 539, "question": "How is the performance of the model evaluated?", "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages" }, { "answers": [ "" ], "context": "In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6.
The first one, Multilingual Voting, consists of merging the information learned by models trained on different language pairs by voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use the Average Normalized Entropy (ANE) BIBREF8 computed over these matrices to select the most confident matrix for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing in the resulting alignment and segmentation of the same phoneme sequence.", "id": 540, "question": "What are the different bilingual models employed?", "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages" }, { "answers": [ "" ], "context": "The experiment settings from this paper and the evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same as in BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using French, the original aligned language for this dataset, as the aligned information. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistical features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can greatly impact the system's capacity for language modeling, and consequently the final quality of the produced alignments. Even in high-resource settings, it has already been attested that some languages are more difficult to model than others BIBREF9.", "id": 541, "question": "How does the well-resourced language impact the quality of the output?", "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages" }, { "answers": [ "" ], "context": "Neural machine translation (NMT) is a challenging task that has attracted much attention in recent years. Starting from the encoder-decoder framework BIBREF0 , NMT has started to show promising results for many language pairs. The evolving structures of NMT models in recent years have enabled them to achieve higher scores and become more favorable. The attention mechanism BIBREF1 added on top of the encoder-decoder framework has been shown to be very useful for automatically finding alignment structure, and the single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions BIBREF2 , BIBREF3 , BIBREF4 .", "id": 542, "question": "what are the baselines?", "title": "Dense Information Flow for Neural Machine Translation" }, { "answers": [ "" ], "context": "In this section, we introduce our DenseNMT architecture. In general, compared with residual-connected NMT models, DenseNMT allows each layer to provide its information to all subsequent layers directly.
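A minimal sketch of this dense connectivity in an encoder stack; the layer type and sizes are placeholders rather than the actual DenseNMT blocks:

```python
import torch
import torch.nn as nn

class DenseEncoder(nn.Module):
    """Each layer consumes the concatenation of the input and all earlier outputs."""
    def __init__(self, input_dim, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        dim = input_dim
        for _ in range(n_layers):
            self.layers.append(nn.Linear(dim, growth))  # placeholder transform
            dim += growth  # the next layer sees everything produced so far

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            out = torch.tanh(layer(torch.cat(feats, dim=-1)))
            feats.append(out)
        return torch.cat(feats, dim=-1)
```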
Figures FIGREF9-FIGREF15 show the design of our model structure part by part.", "id": 543, "question": "did they outperform previous methods?", "title": "Dense Information Flow for Neural Machine Translation" }, { "answers": [ "" ], "context": "Different from residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: DISPLAYFORM0", "id": 544, "question": "what language pairs are explored?", "title": "Dense Information Flow for Neural Machine Translation" }, { "answers": [ "IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German" ], "context": "Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2). However, most of them only use the last encoder layer. In order to pass more abundant information from the encoder side to the decoder side, the attention block needs to be more expressive. Following the recent development of designing attention architectures, we propose DenseAtt as the dense attention block, which provides the dense connection between the encoder and the decoder side. More specifically, two options are proposed accordingly. For each decoding step in the corresponding decoder layer, the two options both calculate attention using multiple encoder layers. The first option is more compressed, while the second option is more expressive and flexible. We name them DenseAtt-1 and DenseAtt-2, respectively. Figure FIGREF15 shows the architecture of (a) multi-step attention BIBREF2 , (b) DenseAtt-1, and (c) DenseAtt-2 in order. In general, a popular multiplicative attention module can be written as: DISPLAYFORM0", "id": 545, "question": "what datasets were used?", "title": "Dense Information Flow for Neural Machine Translation" }, { "answers": [ "" ], "context": "Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture.", "id": 546, "question": "How is order of binomials tracked across time?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "Interest in list orderings spans the last century BIBREF10 , BIBREF1 , with a focus almost exclusively on binomials. This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11 , although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11 . One study has directly addressed mostly frozen binomials BIBREF5 , and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1 , BIBREF10 , but most recent research has worked almost exclusively with English.
Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules.", "id": 547, "question": "What types of various community texts have been investigated for exploring global structure of binomials?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments.", "id": 548, "question": "Are there any new finding in analasys of trinomials that was not present binomials?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions.", "id": 549, "question": "What new model is proposed for binomial lists?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8.", "id": 550, "question": "How was performance of previously proposed rules at very large scale?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\\lbrace x,y\\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. 
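A minimal sketch of how the ordinality behind such a point is computed, using the hypothetical `cat'/`dog' counts from above; the asymmetry formula shown (distance of ordinality from 1/2, rescaled to [0, 1]) is one natural reading, not necessarily the paper's exact definition:

```python
def ordinality(n_in_order, n_reversed):
    """Fraction of occurrences that follow the reference (alphabetical) order."""
    return n_in_order / (n_in_order + n_reversed)

def asymmetry(n_in_order, n_reversed):
    """0 for a perfectly balanced binomial, 1 for a frozen one (assumed formula)."""
    return abs(2 * ordinality(n_in_order, n_reversed) - 1)

# ['cat', 'dog'] seen 40 times, ['dog', 'cat'] seen 10 times:
print(ordinality(40, 10))  # 0.8
print(asymmetry(40, 10))   # 0.6
```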
Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8.", "id": 551, "question": "What previously proposed rules for predicting binomial ordering are used?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success.", "id": 552, "question": "What online text resources are used to test binomial lists?", "title": "Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text" }, { "answers": [ "" ], "context": "Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino.", "id": 553, "question": "How do they model a city description using embeddings?", "title": "Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism" }, { "answers": [ "Using crowdsourcing " ], "context": "Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions.", "id": 554, "question": "How do they obtain human judgements?", "title": "Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism" }, { "answers": [ "" ], "context": "We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations.
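A minimal sketch of this two-step pipeline; the TF-IDF featurization and KMeans below are stand-ins, since the approach itself relies on learned text representations and its own clustering choice:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_city_descriptions(descriptions, n_clusters):
    # Step 1: one vector per city description (placeholder featurization).
    vectors = TfidfVectorizer().fit_transform(descriptions)
    # Step 2: unsupervised clustering of those vectors; n_clusters would be
    # chosen to match the number of thematic groups being tested.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
```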
The rest of this section describes both of these steps in more detail.", "id": 555, "question": "Which clustering method do they use to cluster city description embeddings?", "title": "Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism" }, { "answers": [ "single-domain setting" ], "context": "A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated.", "id": 556, "question": "Does this approach perform better in the multi-domain or single-domain setting?", "title": "Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation" }, { "answers": [ "" ], "context": "Figure 1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belong to (e.g. train, hotel). Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. (train; arrive by, 20:45) where the combined slot is “train; arrive by\"), which ignores the subordination relationship between the domain and the slots.", "id": 557, "question": "What are the performance metrics used?", "title": "Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation" }, { "answers": [ "" ], "context": "Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework.", "id": 558, "question": "Which datasets are used to evaluate performance?", "title": "Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation" }, { "answers": [ "" ], "context": "State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .
This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied.", "id": 559, "question": "How does the automatic theorem prover infer the relation?", "title": "Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization" }, { "answers": [ "" ], "context": "The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL.", "id": 560, "question": "If these model can learn the first-order logic on artificial language, why can't it lear for natural language?", "title": "Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization" }, { "answers": [ "70,000" ], "context": "Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence.", "id": 561, "question": "How many samples did they generate for the artificial language?", "title": "Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization" }, { "answers": [ "" ], "context": "Although deep neural networks have achieved remarkable successes (e.g., BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ), their dependence on supervised learning has been challenged as a significant weakness. This dependence prevents deep neural networks from being applied to problems where labeled data is scarce. An example of such problems is common sense reasoning, such as the Winograd Schema Challenge BIBREF0 , where the labeled set is typically very small, on the order of hundreds of examples. Below is an example question from this dataset:", "id": 562, "question": "Which of their training domains improves performance the most?", "title": "A Simple Method for Commonsense Reasoning" }, { "answers": [ "" ], "context": "Unsupervised learning has been used to discover simple commonsense relationships. For example, Mikolov et al. BIBREF15 , BIBREF16 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as: Man:King::Woman:?. Our work uses a similar intuition that language modeling can naturally capture common sense knowledge. 
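A minimal sketch of the word-vector analogy trick referenced here; the tiny three-dimensional embedding table is fabricated for illustration (real models use hundreds of dimensions and large vocabularies):

```python
import numpy as np

emb = {  # hypothetical embeddings
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([0.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.1, 0.9]),
    "queen": np.array([0.1, 1.0, 0.9]),
}

def analogy(a, b, c):
    """Solve a:b :: c:? via vector arithmetic (b - a + c), nearest by cosine."""
    target = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in (a, b, c)),
               key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman"))  # 'queen'
```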
The difference is that Winograd Schema questions require more contextual information, hence our use of LMs instead of just word vectors.", "id": 563, "question": "Do they fine-tune their model on the end task?", "title": "A Simple Method for Commonsense Reasoning" }, { "answers": [ "Because, unlike other languages, English does not mark grammatical genders" ], "context": "One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained on such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 .", "id": 564, "question": "Why does the approach from English not work on other languages?", "title": "Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology" }, { "answers": [ "by calculating log ratio of grammatical phrase over ungrammatical phrase" ], "context": "Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia.", "id": 565, "question": "How do they measure grammaticality?", "title": "Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology" }, { "answers": [ "" ], "context": "In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "id": 566, "question": "Which model do they use to convert between masculine-inflected and feminine-inflected sentences?", "title": "Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology" }, { "answers": [ "" ], "context": "Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5.
There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations.", "id": 567, "question": "What is the performance achieved by the model described in the paper?", "title": "Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study" }, { "answers": [ "" ], "context": "To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb.", "id": 568, "question": "What is the best performance achieved by supervised models?", "title": "Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study" }, { "answers": [ "" ], "context": "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of the input word embedding and LSTM hidden layer of both models to 256.", "id": 569, "question": "What is the size of the datasets employed?", "title": "Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study" }, { "answers": [ "" ], "context": "models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and its opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.", "id": 570, "question": "What are the baseline models?", "title": "Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study" }, { "answers": [ "" ], "context": "Spoken dialogue systems that can help users to solve complex tasks have become an emerging research topic in artificial intelligence and natural language processing areas BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. Today, there are several virtual intelligent assistants, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Amazon's Alexa, on the market. A typical dialogue system pipeline can be divided into several parts: a recognized result of a user's speech input is fed into a natural language understanding module (NLU) to classify the domain along with domain-specific intents and fill in a set of slots to form a semantic frame BIBREF4 , BIBREF5 , BIBREF6 . A dialogue state tracking (DST) module predicts the current state of the dialogue by means of the semantic frames extracted from multi-turn conversations. Then the dialogue policy determines the system action for the next step given the current dialogue state. Finally, the semantic frame of the system action is fed into a natural language generation (NLG) module to construct a response utterance to the user BIBREF7 , BIBREF8 .", "id": 571, "question": "What evaluation metrics are used?", "title": "Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation" }, { "answers": [ "" ], "context": "The framework of the proposed hierarchical NLG model is illustrated in Figure FIGREF2 , where the model architecture is based on an encoder-decoder (seq2seq) structure with attentional hierarchical decoders BIBREF14 , BIBREF15 . In the encoder-decoder architecture, a typical generation process includes encoding and decoding phases: First, a given semantic representation sequence INLINEFORM0 is fed into an RNN-based encoder to capture the temporal dependency and project the input to a latent feature space; the semantic representation sequence is also encoded into a one-hot representation as the initial state of the encoder in order to maintain the temporal-independent condition as shown in the left part of Figure FIGREF2 . The recurrent unit of the encoder is a bidirectional gated recurrent unit (GRU) BIBREF14 , DISPLAYFORM0", "id": 572, "question": "What datasets did they use?", "title": "Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation" }, { "answers": [ "" ], "context": " This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/", "id": 573, "question": "Why does their model do better than prior models?", "title": "Deep Enhanced Representation for Implicit Discourse Relation Recognition" }, { "answers": [ "Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference." ], "context": "Named Entity Recognition (NER) approaches can be categorised broadly into three types: detecting NEs with predefined dictionaries and rules BIBREF2, with statistical approaches BIBREF3, and with deep learning approaches BIBREF4.", "id": 574, "question": "What is the difference in recall score between the systems?", "title": "Detecting Potential Topics In News Using BERT, CRF and Wikipedia" }, { "answers": [ "F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference." ], "context": "We need a good amount of data to try deep learning state-of-the-art algorithms.
There are a lot of open datasets available for names, locations, and organisations, but not for topics as defined in the Abstract above. Also, defining and inferring topics is an individual preference, and there is no fixed set of rules for their definition. But according to our definition, we can use Wikipedia titles as our target topics. The English Wikipedia dataset has more than 18 million titles if we consider all versions of them to date. We had to clean up the titles to remove junk, as Wikipedia titles contain almost all the words we use daily. To remove such titles, we deployed simple rules as follows -", "id": 575, "question": "What is their f1 score and recall?", "title": "Detecting Potential Topics In News Using BERT, CRF and Wikipedia" }, { "answers": [ "4 layers" ], "context": "We tried multiple variations of LSTM and GRU layers, with/without a CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw a gain in using just one layer of GRU instead of more. Finally, we settled on the architecture shown in Figure 1 for the final training, based on validation-set scores with a sample training set.", "id": 576, "question": "How many layers does their system have?", "title": "Detecting Potential Topics In News Using BERT, CRF and Wikipedia" }, { "answers": [ "" ], "context": "We trained the topic model on a single 32 GB NVIDIA V100, and it took around 50 hours to train the model with sequence length 512. We had to take a 256 GB RAM machine to accommodate all data in memory for faster read/write. We also trained a model with sequence length 64 in around 17 hours.", "id": 577, "question": "Which news corpus is used?", "title": "Detecting Potential Topics In News Using BERT, CRF and Wikipedia" }, { "answers": [ "" ], "context": "Comparison with existing open-source NER libraries is not exactly fair, as they are NOT trained for detecting topics and important n-grams, and also NOT trained for case-less text. But they are useful for testing and benchmarking whether our model detects traditional NERs, which it should capture, as Wikipedia titles contain almost all names, places and organisation names. You can check the sample output here", "id": 578, "question": "How large is the dataset they used?", "title": "Detecting Potential Topics In News Using BERT, CRF and Wikipedia" }, { "answers": [ "" ], "context": "There is a classic riddle: A man and his son get into a terrible car crash. The father dies, and the boy is badly injured. In the hospital, the surgeon looks at the patient and exclaims, “I can't operate on this boy, he's my son!” How can this be?", "id": 579, "question": "Which coreference resolution systems are tested?", "title": "Gender Bias in Coreference Resolution" }, { "answers": [ "" ], "context": "Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning the techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where they are allowed to ask context-dependent, incomplete questions BIBREF0. This gives rise to the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1.
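The concrete title-cleanup rules mentioned earlier are not reproduced in this excerpt; the sketch below only illustrates the kind of heuristics such a filtering step might use. Every rule here is a hypothetical example, not the authors' actual rule set.

```python
import re

def looks_like_junk(title: str) -> bool:
    """Heuristic junk filter for Wikipedia titles; every rule below is a
    hypothetical example of the kind of cleanup described in the text."""
    if len(title) < 2:                       # single characters
        return True
    if re.fullmatch(r"[\d\W_]+", title):     # only digits / punctuation
        return True
    if title.lower().startswith(("list of ", "disambiguation")):
        return True
    return False

titles = ["Mumbai", "7", "List of rivers", "Deep learning"]
print([t for t in titles if not looks_like_junk(t)])  # ['Mumbai', 'Deep learning']
```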
Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only by completely understanding the context can a parser successfully parse the incomplete questions into their corresponding SQL queries.", "id": 580, "question": "How big is the improvement in performance of the proposed model over the state of the art?", "title": "How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context" }, { "answers": [ "" ], "context": "In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting by $\langle \mathbf {x}_1,...,\mathbf {x}_n\rangle $ a sequence of natural language questions in a dialogue, $\langle \mathbf {y}_1,...,\mathbf {y}_n\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used at test time do not appear in training. In this section, we first present a base model that does not consider context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10.", "id": 581, "question": "What two large datasets are used for evaluation?", "title": "How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context" }, { "answers": [ "Concat\nTurn\nGate\nAction Copy\nTree Copy\nSQL Attn\nConcat + Action Copy\nConcat + Tree Copy\nConcat + SQL Attn\nTurn + Action Copy\nTurn + Tree Copy\nTurn + SQL Attn\nTurn + SQL Attn + Action Copy" ], "context": "We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates the corresponding SQL query according to a predefined grammar.", "id": 582, "question": "What context modelling methods are evaluated?", "title": "How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context" }, { "answers": [ "" ], "context": "To capture contextual information within a question, we apply a Bidirectional Long Short-Term Memory network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, first every token $x_{i,k}$ in $\mathbf {x}_{i}$ is fed into a word embedding layer $\mathbf {\phi }^x$ to get its embedding representation $\mathbf {\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\mathbf {h}^{E}_{i,k}=[\overrightarrow{\mathbf {h}}^{E}_{i,k};\overleftarrow{\mathbf {h}}^{E}_{i,k}]$, where the forward hidden state is computed as follows: $\overrightarrow{\mathbf {h}}^{E}_{i,k}=\mathrm{LSTM}\big(\mathbf {\phi }^x(x_{i,k}),\overrightarrow{\mathbf {h}}^{E}_{i,k-1}\big)$, and the backward state is computed symmetrically.", "id": 583, "question": "What are two datasets models are tested on?", "title": "How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context" }, { "answers": [ "" ], "context": "Aspect based sentiment analysis (ABSA) is a fine-grained task in sentiment analysis, which can provide important sentiment information for other natural language processing (NLP) tasks.
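One of the simplest context modeling methods listed in the answer above, Concat, prepends the interaction history to the current question before encoding. A minimal sketch under that assumption (our illustration, not the paper's code; the `[SEP]` separator and turn limit are placeholders):

```python
def concat_context(history, current, sep="[SEP]", max_turns=3):
    """Prepend up to `max_turns` previous questions to the current one so the
    question encoder sees the recent interaction history."""
    turns = history[-max_turns:] + [current]
    return f" {sep} ".join(turns)

history = ["What is id of the car with the max horsepower?",
           "How about with the max mpg?"]
print(concat_context(history, "Show its Make!"))
# What is id of the car with the max horsepower? [SEP] How about with the max mpg? [SEP] Show its Make!
```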
There are two different subtasks in ABSA, namely aspect-category sentiment analysis and aspect-term sentiment analysis BIBREF0, BIBREF1. Aspect-category sentiment analysis aims at predicting the sentiment polarity towards a given aspect, which belongs to one of several predefined categories and may not appear in the sentence. For instance, in Table TABREF2, aspect-category sentiment analysis predicts the sentiment polarity towards the aspect “food”, which does not appear in the sentence. By contrast, the goal of aspect-term sentiment analysis is to predict the sentiment polarity over an aspect term which is a subsequence of the sentence. For instance, aspect-term sentiment analysis predicts the sentiment polarity towards the aspect term “The appetizers”, which is a subsequence of the sentence. Additionally, the number of aspect-term categories is more than one thousand in the training corpus.", "id": 584, "question": "How big is the improvement over the state-of-the-art results?", "title": "A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis" }, { "answers": [ "" ], "context": "As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: an aspect-guided encoder, aspect-reconstruction and an aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch, conducting aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction.", "id": 585, "question": "Is the model evaluated against other Aspect-Based models?", "title": "A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis" }, { "answers": [ "There were hierarchical and non-hierarchical baselines; BERT was one of those baselines" ], "context": "Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova:McKeown:2011 for an overview). The two most popular among them are extractive approaches and abstractive approaches. As the names imply, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document.", "id": 586, "question": "Is the baseline a non-hierarchical model like BERT?", "title": "HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization" }, { "answers": [ "" ], "context": "", "id": 587, "question": "Do they build a model to recognize discourse relations on their dataset?", "title": "Shallow Discourse Annotation for Chinese TED Talks" }, { "answers": [ "" ], "context": "Following the release of the Penn Discourse Treebank (PDTB-2) in 2008 BIBREF7, several remarkable Chinese discourse corpora have since adapted the PDTB framework BIBREF8, including the Chinese Discourse Treebank BIBREF9, the HIT Chinese Discourse Treebank (HIT-CDTB) zhou2014cuhk, and the Discourse Treebank for Chinese (DTBC) BIBREF6. Specifically, Xue proposed the Chinese Discourse Treebank (CDTB) Project BIBREF10.
In their annotation work, they discussed matters such as the features of Chinese discourse connectives, the definition and scope of arguments, and sense disambiguation, and they argued that determining the argument scope is the most challenging part of the annotation. To further promote this research, zhou2012pdtb presented a PDTB-style discourse corpus for Chinese. They also discussed the key characteristics of Chinese text which differ from English, e.g., parallel connectives, comma-delimited intra-sentential implicit relations, etc. Their data set contains 98 documents from the Chinese Treebank BIBREF10. In 2015, Zhou and Xue expanded their corpus to 164 documents, with more than 5000 relations annotated. huang-chen-2011-chinese constructed a Chinese discourse corpus with 81 articles. They adopted the top-level senses from the PDTB sense hierarchy and focused on the annotation of inter-sentential discourse relations. zhang2014chinese analyzed the differences between Chinese and English, and then presented a new Chinese discourse relation hierarchy based on the PDTB system, in which the discourse relations are divided into 6 types: temporal, causal, condition, comparison, expansion and conjunction. They constructed a Chinese Discourse Relation corpus called HIT-CDTB based on this hierarchy. Then, zhou2014cuhk presented the first open discourse treebank for Chinese, the CUHK Discourse Treebank for Chinese. They adapted the annotation scheme of the Penn Discourse Treebank 2 (PDTB-2) to the Chinese language and made adjustments to 3 aspects according to previous studies of Chinese linguistics. However, they only reannotated the documents of the Chinese Treebank and did not annotate inter-sentence level discourse relations.", "id": 588, "question": "Which inter-annotator metric do they use?", "title": "Shallow Discourse Annotation for Chinese TED Talks" }, { "answers": [ "" ], "context": "The annotation scheme we adopted in this work is based on the framework of the PDTB, incorporating the most recent PDTB (PDTB-3) relational taxonomy and sense hierarchy BIBREF5, shown in Table 1. The PDTB follows a lexically grounded approach to the representation of discourse relations BIBREF12. Discourse relations are taken to hold between two abstract object arguments, named Arg1 and Arg2 using syntactic conventions, and are triggered either by explicit connectives or, otherwise, by adjacency between clauses and sentences. As we can see from Table 1, the PDTB-3 sense hierarchy has 4 top-level senses (Expansion, Temporal, Contingency, Comparison) and second- and third-level senses for some cases. With obvious differences, ranging from the conventions used in annotation to differences in the sense hierarchy, PDTB-3 gives rigorous attention to achieving as much consistency as possible while annotating discourse relations.", "id": 589, "question": "How high is the inter-annotator agreement?", "title": "Shallow Discourse Annotation for Chinese TED Talks" }, { "answers": [ "" ], "context": "The argument-labelling conventions used in the PDTB-2 had to be modified to deal with the wider variety of discourse relations that needed to be annotated consistently within sentences in the PDTB-3. In particular, in labelling intra-sentential discourse relations, a distinction was made between relations whose arguments were in coordinating syntactic structures and ones whose arguments were in subordinating syntactic structures.
For coordinating structures, arguments were labelled by position (Arg1 first, then Arg2), while for subordinating structures, the argument in subordinate position was labelled Arg2, and the other Arg1, independent of position.", "id": 590, "question": "How are resources adapted to properties of Chinese text?", "title": "Shallow Discourse Annotation for Chinese TED Talks" }, { "answers": [ "The F1 score of the authors' best model is 55.98, compared to BiLSTM and FastText, which have F1 scores slightly higher than 46.61." ], "context": "Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depend not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8.", "id": 591, "question": "How much better are results compared to baseline models?", "title": "The Role of Pragmatic and Discourse Context in Determining Argument Impact" }, { "answers": [ "" ], "context": "Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of arguments, such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there has been increased research interest in developing computational methods that can automatically evaluate qualitative characteristics of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of arguments are not simply related to the linguistic characteristics of the language, but also to characteristics of the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that perception of arguments can be influenced by the credibility of the source and the background of the audience.", "id": 592, "question": "What models that rely only on claim-specific linguistic features are used as baselines?", "title": "The Role of Pragmatic and Discourse Context in Determining Argument Impact" }, { "answers": [ "" ], "context": "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented.", "id": 593, "question": "How is pragmatic and discourse context added to the dataset?", "title": "The Role of Pragmatic and Discourse Context in Determining Argument Impact" }, { "answers": [ "" ], "context": "Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation.
However, we hypothesize that the qualitative characteristics of arguments are not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models.", "id": 594, "question": "What annotations are available in the dataset?", "title": "The Role of Pragmatic and Discourse Context in Determining Argument Impact" }, { "answers": [ "4,261 days for France and 4,748 for the UK" ], "context": "Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point of view, up to now it has focused its efforts on the exploitation of regular numerical features extracted from sensors, databases or stock exchanges. Unstructured data such as text, on the other hand, remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text-driven approach could be used to make a profit on the market. One of the most conventional ways to represent text is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict whether NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was effective in improving the prediction of stock volatility. More recently, Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that performance was indeed better when using probabilities from their alternative than those of the original LDA. Deep learning, with its natural ability to work with text through word embeddings, has also been used for time series prediction with text. The authors of BIBREF8 combined traditional time series features with sentiment features derived from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al.
BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of a specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently, in Rodrigues et al. BIBREF11, short descriptions of events (such as concerts, sports matches, ...) are leveraged through word embeddings and neural networks in addition to more traditional features. Their experiments show that including the text can bring an improvement of up to 2% in root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black boxes.", "id": 595, "question": "How big is the dataset used for training/testing?", "title": "Textual Data for Time Series Forecasting" }, { "answers": [ "" ], "context": "In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section, details about the text and time series data are given, as well as the major preprocessing steps.", "id": 596, "question": "Is there any example where a geometric property is visible for context similarity between words?", "title": "Textual Data for Time Series Forecasting" }, { "answers": [ "Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters." ], "context": "Three types of time series are considered in our work: national net electricity consumption (also referred to as load or demand), national temperature and wind speed. The load data sets were retrieved from the websites of the respective grid operators, RTE (Réseau de Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January 1st 2007 to August 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, data is available from January 1st 2006 to December 31st 2018 with the same temporal resolution and thus the same averaging. Due to social factors such as energy policies or new usages of electricity (e.g. electric vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United Kingdom (fig. FIGREF2). Such strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is subtracted before the training of the methods, and then re-added a posteriori for prediction.", "id": 597, "question": "What geometric properties do embeddings display?", "title": "Textual Data for Time Series Forecasting" }, { "answers": [ "Relative error is less than 5%" ], "context": "Our work aims at predicting time series using exclusively text. Therefore, for both countries, the inputs of all our models consist only of written daily weather reports. In their raw form, those reports take the shape of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps.
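The detrending step described above (fit a linear trend to the UK demand series, subtract it before training, and re-add it at prediction time) can be sketched as follows, assuming the series is held in a NumPy array:

```python
import numpy as np

def fit_linear_trend(y):
    """Least-squares linear trend over time indices 0..n-1."""
    slope, intercept = np.polyfit(np.arange(len(y)), y, deg=1)
    return slope, intercept

def detrend(y, slope, intercept):
    return y - (slope * np.arange(len(y)) + intercept)

def retrend(y_hat, slope, intercept, start_index):
    # Re-add the trend for forecast indices starting at `start_index`.
    t = np.arange(start_index, start_index + len(y_hat))
    return y_hat + slope * t + intercept

demand = 35.0 - 0.002 * np.arange(4748) + np.random.randn(4748)  # toy UK-like series
slope, intercept = fit_linear_trend(demand)
residual = detrend(demand, slope, intercept)  # train forecasting models on this
```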
Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span the same period as the corresponding time series and, given their daily nature, yield a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2.", "id": 598, "question": "How accurate is the model trained on text exclusively?", "title": "Textual Data for Time Series Forecasting" }, { "answers": [ "F1 score of 66.66%" ], "context": "The emergence of social media sites with a limited character constraint has ushered in a new style of communication. Twitter users share meaningful and informative messages within 280 characters per tweet. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thoughts and opinions. This makes them an interesting and important area of study. Tweets are not only important for individuals but also for companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. Public opinion is particularly interesting for political parties as it gives them an idea of voters' inclinations and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinion BIBREF1.", "id": 599, "question": "What was their result on the Stance Sentiment Emotion Corpus?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "F1 score of 82.10%" ], "context": "A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface-form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed lexicon-based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three-phase joint multi-task training.
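The PDF extraction step mentioned earlier can be sketched with PyPDF2 roughly as below, assuming the modern `PdfReader` interface (older releases expose `PdfFileReader` instead); the file name is hypothetical:

```python
from PyPDF2 import PdfReader

def pdf_to_text(path: str) -> str:
    """Concatenate the extractable text of every page of a weather-report PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# report_text = pdf_to_text("weather_report_2014-07-17.pdf")  # hypothetical file
```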
BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed the DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these tasks independently, one at a time.", "id": 600, "question": "What performance did they obtain on the SemEval dataset?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "For sentiment analysis: UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment; for emotion analysis: MaxEnt, SVM, LSTM, BiLSTM and CNN" ], "context": "We propose a novel two-layered multi-task attention-based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for the emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections.", "id": 601, "question": "What are the state-of-the-art systems?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "Recurrent Neural Networks (RNN) are a class of networks which take sequential input and compute a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them well suited to handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value, which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN, solves this problem through gating mechanisms. The input, forget and output gates control the information flow.", "id": 602, "question": "How is multi-tasking performed?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "The word-level attention (primary attention) mechanism gives the model the flexibility to represent each word for each task differently.
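A compressed sketch of the sharing pattern described above, one recurrent encoder feeding two task-specific heads; the attention layers are omitted for brevity and all sizes are placeholders, so this illustrates the idea rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SharedSentimentEmotion(nn.Module):
    """Shared BiLSTM encoder with two task heads: binary sentiment (2-way
    softmax) and 8-way multi-label emotion (independent sigmoids)."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.sentiment_head = nn.Linear(2 * hidden_dim, 2)
        self.emotion_head = nn.Linear(2 * hidden_dim, 8)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        sent_repr = states.mean(dim=1)  # stand-in for the two-level attention
        return (self.sentiment_head(sent_repr),               # sentiment logits
                torch.sigmoid(self.emotion_head(sent_repr)))  # emotion labels
```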
This improves the word representation, as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in similar contexts. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as its candidate terms; we limit the list to four because we observed that longer expansion lists started to contain antonyms of the current word, which empirically reduced system performance. Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get $V(x_t)$, the set of candidate terms for input $x_t$, with $v_i$ being the embedding of each term (the Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having index $i \in V(x_t)$:", "id": 603, "question": "What are the datasets used for training?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of the words' importance and their contextual relevance. This helps the model build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation by its attention coefficient and summing over all words. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as:", "id": 604, "question": "How many parameters does the model have?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed-forward neural networks. For our task, the feed-forward network for sentiment analysis has two output units, whereas the feed-forward network for emotion analysis has eight output nodes performing multi-label classification.", "id": 605, "question": "What is the previous state-of-the-art model?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "The Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which, for each word, produces a list of words in decreasing order of their similarity. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice, awesome and superb. The DT offers the primary attention mechanism external knowledge in the form of candidate terms. It helps the system perform better when presented with unseen words during testing, as the unseen words could have been part of a DT expansion list. For example, the system may not come across the word superb during training, but it can appear in the test set.
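The attention formulas referenced above are elided in this excerpt; the sketch below shows one standard way such coefficients can be computed, softmax-normalized scores followed by a weighted sum. It illustrates the general mechanism, not the paper's exact parameterization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_pool(vectors, scorer):
    """vectors: (n, d) candidate-term or word representations.
    scorer: module mapping (n, d) -> (n, 1) unnormalized relevance scores.
    Returns the attention-weighted sum, a single (d,) representation."""
    alpha = F.softmax(scorer(vectors).squeeze(-1), dim=0)  # attention coefficients
    return (alpha.unsqueeze(-1) * vectors).sum(dim=0)

scorer = nn.Linear(300, 1)                     # toy scoring function
words = torch.randn(12, 300)                   # 12 word representations
sentence_repr = attention_pool(words, scorer)  # shape: (300,)
```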
Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. This fact is established by our evaluation results, as the model performs better when the DT expansion and primary attention are part of the final multi-task system.", "id": 606, "question": "What is the previous state-of-the-art performance?", "title": "Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis" }, { "answers": [ "" ], "context": "Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign is the effort of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysia Airlines flight MH17 on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was shot down either by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by the findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study.", "id": 607, "question": "How can the classifier facilitate the annotation task for human annotators?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "We focus our classification efforts on a Twitter dataset introduced in BIBREF4, which was collected to investigate the flow of MH17-related information on Twitter, focusing on the question of who distributes (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role.", "id": 608, "question": "What recommendations are made to improve the performance in the future?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result, as we find that data augmentation in this particular case does not improve classification results.
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high-quality predictions for the pro-Russian class is difficult: in order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available.", "id": 609, "question": "What type of errors do the classifiers make?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane went down was controlled by pro-Russian separatists.", "id": 610, "question": "What neural classifiers are used?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016.", "id": 611, "question": "What hashtags does the hashtag-based baseline use?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN).", "id": 612, "question": "What languages are included in the dataset?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm{pmi}(hs, c) = \log \frac{p(hs, c)}{p(hs)\,p(c)}$", "id": 613, "question": "What dataset is used for this study?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "As a non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia.", "id": 614, "question": "What proxies for data annotation were used in previous datasets?", "title": "Mapping (Dis-)Information Flow about the MH17 Plane Crash" }, { "answers": [ "" ], "context": "Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV).
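A minimal count-based computation of the pmi score defined earlier (our own sketch of the standard definition; the paper's exact estimation details may differ):

```python
import math

def pmi(hashtag, label, tweets):
    """tweets: list of (hashtag_set, class_label) pairs.
    pmi(hs, c) = log [ p(hs, c) / (p(hs) * p(c)) ]."""
    n = len(tweets)
    joint = sum(1 for tags, c in tweets if hashtag in tags and c == label)
    p_hs = sum(1 for tags, _ in tweets if hashtag in tags) / n
    p_c = sum(1 for _, c in tweets if c == label) / n
    return float("-inf") if joint == 0 else math.log((joint / n) / (p_hs * p_c))

data = [({"MH17", "Ukraine"}, "pro-Ukrainian"), ({"MH17"}, "pro-Russian")]
print(pmi("Ukraine", "pro-Ukrainian", data))  # log(0.5 / 0.25) ~= 0.69
```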
When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Network (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-scores of 0.91 for utterance-level intent recognition and 0.96 for slot extraction.", "id": 615, "question": "What are the supported natural commands?", "title": "Conversational Intent Understanding for Passengers in Autonomous Vehicles" }, { "answers": [ "3347 unique utterances" ], "context": "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots were obtained on the transcribed utterances by majority voting of 3 annotators.", "id": 616, "question": "What is the size of their collected dataset?", "title": "Conversational Intent Understanding for Passengers in Autonomous Vehicles" }, { "answers": [ "" ], "context": "The slot extraction and intent keyword extraction results are given in Table TABREF1 and Table TABREF2, respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.", "id": 617, "question": "Did they compare against other systems?", "title": "Conversational Intent Understanding for Passengers in Autonomous Vehicles" }, { "answers": [ "" ], "context": "After exploring various recent Recurrent Neural Network (RNN) based techniques, we built our own hierarchical joint models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results outperformed certain competitive baselines and achieved overall F1-scores of 0.91 for utterance-level intent recognition and 0.96 for slot extraction tasks.", "id": 618, "question": "What intents does the paper explore?", "title": "Conversational Intent Understanding for Passengers in Autonomous Vehicles" }, { "answers": [ "A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features.
\nThe interpretability of the model is shown in Figure 2. " ], "context": "Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black-box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power.", "id": 619, "question": "What kind of features are used by the HMM models, and how interpretable are those?", "title": "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models" }, { "answers": [ "The HMM can identify punctuation or pick up on vowels." ], "context": "We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).", "id": 620, "question": "What kind of information do the HMMs learn that the LSTMs don't?", "title": "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models" }, { "answers": [ "" ], "context": "We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\exp(-l_t) > \exp(-l_{t-1}) + 1$, where $l_t$ is the log likelihood score at epoch $t$. The $L_2$-norm of the parameter gradient vector is clipped at a threshold of 5.", "id": 621, "question": "Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information?", "title": "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models" }, { "answers": [ "With a similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower." ], "context": "The HMM training procedure is as follows:", "id": 622, "question": "How large is the gap in performance between the HMMs and the LSTMs?", "title": "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models" }, { "answers": [ "" ], "context": "The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has been accompanied by a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1, BIBREF2. Although the terms of service on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech at scale is still a largely unsolved problem in the NLP community. BIBREF3, for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech.
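The learning-rate rule quoted earlier for the character-level LSTM can be sketched as a tiny scheduler (our illustration of the stated rule):

```python
import math

def update_learning_rate(lr, loglik_curr, loglik_prev):
    """Halve the learning rate whenever exp(-l_t) > exp(-l_{t-1}) + 1, i.e.
    whenever the perplexity-like score worsens by more than 1 in an epoch."""
    if math.exp(-loglik_curr) > math.exp(-loglik_prev) + 1:
        return lr / 2
    return lr

lr = 1.0
lr = update_learning_rate(lr, loglik_curr=-2.3, loglik_prev=-1.6)
print(lr)  # 0.5 -- the score worsened enough to trigger halving
```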
This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems.", "id": 623, "question": "Do they report results only on English data?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented with the classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word-level n-grams and various part-of-speech, sentiment, and tweet-level metadata features.", "id": 624, "question": "Which publicly available datasets are used?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies for labeling the collected data. Table TABREF5 shows the characteristics of the datasets.", "id": 625, "question": "What embedding algorithm and dimension size are used?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "Our training set consists of $N$ examples $\{(x_i, y_i)\}_{i=1}^{N}$, where the input $x_i$ is a sequence of tokens and the output $y_i$ is the numerical hate speech class label. Each input instance represents a Twitter post and thus is not limited to a single sentence.", "id": 626, "question": "What data are the embeddings trained on?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "Each token in the input is mapped to an embedding. We used 300-dimensional embeddings for all our experiments, so each word is mapped to a vector in $\mathbb{R}^{300}$. We then transform each word embedding by applying a 300-dimensional one-layer Multi-Layer Perceptron (MLP) with a Rectified Linear Unit (ReLU) activation to form an updated embedding space. We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand.", "id": 627, "question": "how much was the parameter difference between their model and previous methods?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "We make use of two pooling methods on the updated embedding space. We employ a max pooling operation to capture salient word features from our input; this forces words that are highly indicative of hate speech toward higher positive values within the updated embedding space. We also average the embeddings to capture the overall meaning of the sentence, which provides a strong conditional factor in conjunction with the max pooling output.
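Putting together the projection, max pooling, and mean pooling steps described above, with the concatenation and MLP classifier that the text describes next, a sketch in PyTorch (layer sizes follow the text; class names and the number of classes are our assumptions):

```python
import torch
import torch.nn as nn

class PoolingClassifier(nn.Module):
    def __init__(self, emb_dim=300, n_classes=3):
        super().__init__()
        self.project = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU())
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, 50), nn.ReLU(),
                                 nn.Linear(50, 50), nn.ReLU())
        self.out = nn.Linear(50, n_classes)      # softmax applied in the loss

    def forward(self, embeddings):               # (batch, seq_len, emb_dim)
        t = self.project(embeddings)             # updated embedding space
        e_max = t.max(dim=1).values              # salient word features
        e_avg = t.mean(dim=1)                    # overall sentence meaning
        doc = torch.cat([e_max, e_avg], dim=-1)  # document representation
        return self.out(self.mlp(doc))           # label logits
```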
This also helps regularize gradient updates from the max pooling operation.", "id": 628, "question": "how many parameters did their model use?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "We concatenate the max-pooled and averaged representations to form a document representation, and feed it into a 50-node, two-layer MLP followed by ReLU activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels.", "id": 629, "question": "which datasets were used?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively." ], "context": "We tokenize the data using Spacy BIBREF10. We use 300-dimensional GloVe Common Crawl embeddings (840B tokens) BIBREF11 and fine-tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12; batch size and input length were chosen empirically through random hyperparameter search.", "id": 630, "question": "what was their system's f1 performance?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperforms previous results as measured by weighted F1.", "id": 631, "question": "what was the baseline?", "title": "Predictive Embeddings for Hate Speech Detection on Twitter" }, { "answers": [ "" ], "context": "Neural machine translation (NMT, § SECREF2; kalchbrenner13emnlp, sutskever14nips) is a variant of statistical machine translation (SMT; brown93cl) using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs BIBREF0, BIBREF1.", "id": 632, "question": "What datasets were used?", "title": "Incorporating Discrete Translation Lexicons into Neural Machine Translation" }, { "answers": [ "" ], "context": "The goal of machine translation is to translate a sequence of source words $F = f_1, \ldots, f_{|F|}$ into a sequence of target words $E = e_1, \ldots, e_{|E|}$. These words belong to the source vocabulary $V_f$ and the target vocabulary $V_e$, respectively. NMT performs this translation by calculating the conditional probability $p(e_t \mid F, e_1^{t-1})$ of the $t$-th target word $e_t$ based on the source $F$ and the preceding target words $e_1^{t-1}$.
This is done by encoding the context $\langle F, e_1^{t-1} \rangle$ into a fixed-width vector $\eta_t$, and calculating the probability as follows: $p(e_t \mid F, e_1^{t-1}) = \operatorname{softmax}(W_s \eta_t + b_s)$.", "id": 633, "question": "What language pairs did they experiment with?", "title": "Incorporating Discrete Translation Lexicons into Neural Machine Translation" }, { "answers": [ "278 more annotations" ], "context": "Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role.", "id": 634, "question": "How much more coverage is in the new dataset?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such a question captures the corresponding semantic role with a natural, easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4).", "id": 635, "question": "How was coverage measured?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations." ], "context": "The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others.", "id": 636, "question": "How was quality measured?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback.
1 out of 3 participants was selected after exhibiting good performance, tested against expert annotations.", "id": 637, "question": "How was the corpus obtained?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and have each predicate annotated independently by 2 trained workers, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix.", "id": 638, "question": "How are workers trained?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "a trained worker consolidates existing annotations " ], "context": "We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4).", "id": 639, "question": "What is different in the improved annotation protocol?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA-generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to a reported 2.3 valid roles at an approximate cost of 51¢ per predicate for Dense.", "id": 640, "question": "How was the previous dataset annotated?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics.", "id": 641, "question": "How big is the dataset?", "title": "Crowdsourcing a High-Quality Gold Standard for QA-SRL" }, { "answers": [ "" ], "context": "Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . 
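As an aside on the QA-SRL evaluation mentioned above, aligning predicted and gold argument spans can be sketched as follows. The excerpt does not state the matching criterion, so the intersection-over-union threshold and the greedy one-to-one policy below are assumptions for illustration only, not the authors' implementation:

```python
def span_iou(a, b):
    """Intersection-over-union of two token spans (start, end), inclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def align_spans(predicted, gold, threshold=0.5):
    """Greedily align each predicted argument span to at most one gold span."""
    matched, used = [], set()
    for p in predicted:
        candidates = [(span_iou(p, g), j) for j, g in enumerate(gold) if j not in used]
        if candidates:
            score, j = max(candidates)
            if score >= threshold:
                used.add(j)
                matched.append((p, gold[j]))
    return matched

print(align_spans([(0, 1), (4, 6)], [(0, 2), (5, 6)]))
# -> [((0, 1), (0, 2)), ((4, 6), (5, 6))]
```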
Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks.", "id": 642, "question": "Do the other multilingual baselines make use of the same amount of training data?", "title": "Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation" }, { "answers": [ "" ], "context": "We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C.", "id": 643, "question": "How big is the impact of training data size on the performance of the multilingual encoder?", "title": "Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation" }, { "answers": [ "WMT 2014 En-Fr parallel corpus" ], "context": "Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language-agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model.", "id": 644, "question": "What data were used to train the multilingual encoder?", "title": "Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation" }, { "answers": [ "late 2014" ], "context": "In open-ended visual question answering (VQA) an algorithm must produce answers to arbitrary text-based questions about images BIBREF0 , BIBREF1 . VQA is an exciting computer vision problem that requires a system to be capable of many tasks. Truly solving VQA would be a milestone in artificial intelligence, and would significantly advance human-computer interaction. However, VQA datasets must test a wide range of abilities for progress to be adequately measured.", "id": 645, "question": "From when are many VQA datasets collected?", "title": "An Analysis of Visual Question Answering Algorithms" }, { "answers": [ "96-97.6% using the objects color or shape and 79% using shape alone" ], "context": "A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor-intensive and time-consuming process which requires substantial technical expertise. Imitation learning BIBREF0 is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. 
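The problem statement formalized next builds on such demonstration sets; as a concrete stand-in, here is a minimal Python sketch of one demonstration record together with a behavior-cloning objective. The field names, array shapes, and the MSE loss are illustrative assumptions, not the paper's code:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Demonstration:
    """One demonstration d^i over a time horizon T."""
    states: np.ndarray       # (T, state_dim)  robot states x_t
    images: np.ndarray       # (T, H, W, 3)    raw camera images at each time step
    controls: np.ndarray     # (T, ctrl_dim)   demonstrated control inputs
    instruction: str         # verbal task description s

def behavior_cloning_loss(policy: Callable, demo: Demonstration) -> float:
    """Mean squared error between predicted and demonstrated controls."""
    predicted = policy(demo.instruction, demo.images, demo.states)  # (T, ctrl_dim)
    return float(np.mean((predicted - demo.controls) ** 2))
```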
These demonstrations are then distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) BIBREF1 or Gaussian Mixture Regression (GMR) BIBREF2 , largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis BIBREF3, locomotion BIBREF4, and human-robot interaction BIBREF5 , an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control.", "id": 646, "question": "What is task success rate achieved? ", "title": "Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration" }, { "answers": [ "" ], "context": "In order to outline our problem statement, we contrast our approach to imitation learning BIBREF0 , which considers the problem of learning a policy $\mathbf {\pi }$ from a given set of demonstrations ${\cal D}=\lbrace \mathbf {d}^0,.., \mathbf {d}^m\rbrace $. Each demonstration spans a time horizon $T$ and contains information about the robot's states and actions, e.g., demonstrated sensor values and control inputs at each time step. Robot states at each time step within a demonstration are denoted by $\mathbf {x}_t$. In contrast to other imitation learning approaches, we assume that we have access to the raw camera images of the robot $_t$ at each time step, as well as access to a verbal description of the task in natural language. This description may provide critical information about the context, goals or objects involved in the task and is denoted as $\mathbf {s}$. Given this information, our overall objective is to learn a policy $\mathbf {\pi }$ which imitates the demonstrated behavior, while also capturing semantics and important visual features. After training, we can provide the policy $\mathbf {\pi }(\mathbf {s},)$ with a different, new state of the robot and a new verbal description (instruction) as parameters. The policy will then generate the control signals needed to perform the task, taking the new visual input and semantic context into account.", "id": 647, "question": "What simulations are performed by the authors to validate their approach?", "title": "Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration" }, { "answers": [ "supervised learning" ], "context": "A fundamental challenge in imitation learning is the extraction of policies that not only cover the trained scenarios, but also generalize to a wide range of other situations. A large body of literature has addressed the problem of learning robot motor skills by imitation BIBREF6, learning functional BIBREF1 or probabilistic BIBREF7 representations. However, in most of these approaches, the state vector has to be carefully designed in order to ensure that all necessary information for adaptation is available. 
Neural approaches to imitation learning BIBREF8 circumvent this problem by learning suitable feature representations from rich data sources for each task or for a sequence of tasks BIBREF9, BIBREF10, BIBREF11. Many of these approaches assume that either a sufficiently large set of motion primitives is already available or that a taxonomy of the task is available, i.e., semantics and motions are not trained in conjunction. The importance of maintaining this connection has been shown in BIBREF12, allowing the robot to adapt to untrained variations of the same task. To learn entirely new tasks, meta-learning aims at learning policy parameters that can quickly be fine-tuned to new tasks BIBREF13. While very successful in dealing with visual and spatial information, these approaches do not incorporate any semantic or linguistic component into the learning process. Language has been shown to successfully generate task descriptions BIBREF14 , and several works have investigated the idea of combining natural language and imitation learning: BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. However, most approaches do not utilize the inherent connection between semantic task descriptions and low-level motions to train a model.", "id": 648, "question": "Does proposed end-to-end approach learn in reinforcement or supervised learning manner?", "title": "Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration" }, { "answers": [ "" ], "context": "NMT systems have achieved better performance compared to statistical machine translation (SMT) systems in recent years, not only on high-resource language pairs BIBREF1, BIBREF2, but also on low-resource language pairs BIBREF3, BIBREF4. Nevertheless, NMT still faces many challenges which have adverse effects on its effectiveness BIBREF0. One of these challenges is that NMT is biased towards high-frequency words, so lower-frequency words are often translated incorrectly. This challenge was confirmed again in BIBREF3, and they proposed two strategies to tackle this problem with modifications on the model's output distribution: one for normalizing some matrices by fixing them to constants after several training epochs and another for adding a direct connection from source embeddings through a simple feed forward neural network (FFNN). These approaches increase the size and the training time of their NMT systems. In this work, we follow their second approach but simplify the computations by replacing the FFNN with two single operations.", "id": 649, "question": "Are synonymous relations taken into account in the Japanese-Vietnamese task?", "title": "Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation" }, { "answers": [ "" ], "context": "Our NMT system uses a bidirectional recurrent neural network (biRNN) as an encoder and a single-directional RNN as a decoder with the input feeding of BIBREF11 and the attention mechanism of BIBREF5. The encoder's biRNN is constructed from two RNNs with LSTM hidden units, one running forward and the other backward over the source sentence $\mathbf {x}=(x_1, ...,x_n)$. Every word $x_i$ in the sentence is first encoded into a continuous representation $E_s(x_i)$, called the source embedding. 
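A minimal PyTorch-style sketch of such a bidirectional encoder is shown below; the vocabulary and layer sizes are hypothetical, and the concatenated forward/backward outputs correspond to the annotation vectors defined in the next sentence:

```python
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    """Bidirectional LSTM encoder producing one annotation vector per word."""
    def __init__(self, vocab_size=10000, emb_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # E_s(x_i)
        self.birnn = nn.LSTM(emb_dim, hidden_dim,
                             batch_first=True, bidirectional=True)

    def forward(self, x):                    # x: (batch, n) word ids
        h, _ = self.birnn(self.embed(x))     # (batch, n, 2 * hidden_dim)
        return h                             # h_i = [forward_i ; backward_i]

encoder = BiRNNEncoder()
annotations = encoder(torch.randint(0, 10000, (2, 9)))  # annotation vectors
```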
Then $\\mathbf {x}$ is transformed into a fixed-length hidden vector $\\mathbf {h}_i$ representing the sentence at the time step $i$, which called the annotation vector, combined by the states of forward $\\overrightarrow{\\mathbf {h}}_i$ and backward $\\overleftarrow{\\mathbf {h}}_i$:", "id": 650, "question": "Is the supervised morphological learner tested on Japanese?", "title": "Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation" }, { "answers": [ "" ], "context": "The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.", "id": 651, "question": "What is the dataset that is used in the paper?", "title": "A framework for anomaly detection using language modeling, and its applications to finance" }, { "answers": [ "" ], "context": "Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.", "id": 652, "question": "What is the performance of the models discussed in the paper?", "title": "A framework for anomaly detection using language modeling, and its applications to finance" }, { "answers": [ "" ], "context": "Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.", "id": 653, "question": "Does the paper consider the use of perplexity in order to identify text anomalies?", "title": "A framework for anomaly detection using language modeling, and its applications to finance" }, { "answers": [ "" ], "context": "Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. 
News stories with abnormal language, or with irregular origination or propagation patterns, might be unreliable or untrustworthy.", "id": 654, "question": "Does the paper report a baseline for the task?", "title": "A framework for anomaly detection using language modeling, and its applications to finance" }, { "answers": [ "" ], "context": "Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.", "id": 655, "question": "What non-contextual properties do they refer to?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.", "id": 656, "question": "What is the baseline?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.", "id": 657, "question": "What are their proposed features?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows:", "id": 658, "question": "What are overall baseline results on this new task?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models both in the word-level task and in generating explanations. However, the word-level task remains challenging, especially for content words.", "id": 659, "question": "What metrics are used in evaluation of this task?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. 
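To make the labeling scheme concrete, the sketch below derives the binary echo labels from stemmed word sets; the stemmer choice and toy inputs are assumptions, and real features (frequency, part of speech, and so on) would still need to be extracted before training either classifier:

```python
from nltk.stem import PorterStemmer

stem = PorterStemmer().stem

def echo_labels(op_words, pc_words, exp_words):
    """Label each unique stemmed OP/PC word by whether it is echoed
    in the explanation, mirroring the task definition quoted above."""
    v_exp = {stem(w) for w in exp_words}
    vocab = {stem(w) for w in op_words} | {stem(w) for w in pc_words}
    return {w: int(w in v_exp) for w in vocab}

labels = echo_labels(op_words=["taxes", "should", "rise"],
                     pc_words=["rising", "taxes", "hurt", "growth"],
                     exp_words=["you", "convinced", "me", "taxes", "hurt"])
# The resulting {stemmed word: 0/1} labels are the prediction targets for
# logistic regression and XGBoost.
```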
We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.", "id": 660, "question": "Do authors provide any explanation for intriguing patterns of word being echoed?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.", "id": 661, "question": "What features are proposed?", "title": "What Gets Echoed? Understanding the\"Pointers\"in Explanations of Persuasive Arguments" }, { "answers": [ "" ], "context": "Asking relevant and intelligent questions has always been an integral part of human learning, as it can help assess the user's understanding of a piece of text (an article, an essay, etc.). However, forming questions manually can sometimes be arduous. Automated question generation (QG) systems can help alleviate this problem by learning to generate questions on a large scale and in less time. Such a system has applications in a myriad of areas such as FAQ generation, intelligent tutoring systems, and virtual assistants.", "id": 662, "question": "Which datasets are used to train this model?", "title": "Automating Reading Comprehension by Generating Question and Answer Pairs" }, { "answers": [ "using the BLEU score as a quantitative metric and human evaluation for quality" ], "context": "Increases in life expectancy in the last century have resulted in a large number of people living to old ages and will result in a doubling of dementia cases by the middle of the century BIBREF0 , BIBREF1 . The most common form of dementia is Alzheimer's disease, which accounts for 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has so far only succeeded in developing therapies that ease the symptoms without addressing the cause BIBREF3 , BIBREF4 . Moreover, people with dementia might face barriers to accessing these therapies, such as cost, availability, and travel to the care home or hospital where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute innovative systems that improve accessibility and offer new solutions to patients' needs, as well as help relatives and caregivers understand the illness of their family member or patient and monitor the progress of the dementia.", "id": 663, "question": "How is performance of this system measured?", "title": "Automatic Reminiscence Therapy for Dementia." }, { "answers": [ "" ], "context": "The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted of pattern matching and substitution. Recently, data-driven approaches have drawn significant attention. 
Existing work along this line includes retrieval-based methods BIBREF9 , BIBREF10 and generation-based methods BIBREF11 , BIBREF12 . In this work we focus on generative models, where the sequence-to-sequence approach, which uses RNNs to encode inputs and decode them into responses, is the current best practice.", "id": 664, "question": "How many questions per image on average are available in dataset?", "title": "Automatic Reminiscence Therapy for Dementia." }, { "answers": [ "" ], "context": "In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot, and its goal is to maintain a dialog with the patient about the user’s life experiences.", "id": 665, "question": "Is machine learning system underneath similar to image caption ML systems?", "title": "Automatic Reminiscence Therapy for Dementia." }, { "answers": [ "For the question generation model 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues." ], "context": "The algorithm behind VQG consists of an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder, which generates the question $y$ word by word using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:", "id": 666, "question": "How big dataset is used for training this system?", "title": "Automatic Reminiscence Therapy for Dementia." }, { "answers": [ "By considering words as vertices and generating directed edges between neighboring words within a sentence" ], "context": "Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences in Chinese and similar languages often suffers from word segmentation errors, since no Chinese word segmentation tool suits every scenario. Text matching usually requires capturing the relatedness between two sequences at multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation. ", "id": 667, "question": "How do they obtain word lattices from words?", "title": "Lattice CNNs for Matching Based Chinese Question Answering" }, { "answers": [ "" ], "context": "Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score.", "id": 668, "question": "Which metrics do they use to evaluate matching?", "title": "Lattice CNNs for Matching Based Chinese Question Answering" }, { "answers": [ "" ], "context": "The siamese architecture and its variants have been widely adopted in sentence matching BIBREF6 , BIBREF3 and matching based question answering BIBREF7 , BIBREF0 , BIBREF8 , using a symmetrical component with shared parameters to extract high-level features from different input channels and map the inputs to the same vector space. 
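A minimal sketch of such a siamese matcher is given below in PyTorch style; the shared GRU encoder and the particular merge operation (concatenation with difference and product) are illustrative choices, not the Lattice-CNN design:

```python
import torch
import torch.nn as nn

class SiameseMatcher(nn.Module):
    """One shared encoder maps both sentences to the same vector space;
    the representations are then merged and compared for a match score."""
    def __init__(self, vocab=20000, emb=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)  # shared weights
        self.score = nn.Linear(4 * hidden, 1)

    def encode(self, x):
        _, h = self.encoder(self.embed(x))
        return h[-1]                                   # (batch, hidden)

    def forward(self, s1, s2):
        a, b = self.encode(s1), self.encode(s2)
        merged = torch.cat([a, b, a - b, a * b], dim=-1)
        return torch.sigmoid(self.score(merged))       # matching score

m = SiameseMatcher()
score = m(torch.randint(0, 20000, (2, 8)), torch.randint(0, 20000, (2, 6)))
```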
Then, the sentence representations are merged and compared to output the similarities.", "id": 669, "question": "Which dataset(s) do they evaluate on?", "title": "Lattice CNNs for Matching Based Chinese Question Answering" }, { "answers": [ "" ], "context": "The dynamics of language evolution is one of many interdisciplinary fields to which methods and insights from statistical physics have been successfully applied (see BIBREF0 for an overview, and BIBREF1 for a specific comprehensive review).", "id": 670, "question": "What languages do they look at?", "title": "On the coexistence of competing languages" }, { "answers": [ "" ], "context": "Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .", "id": 671, "question": "Do they report results only on English data?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "There are several challenges associated with the automatic processing of ultrasound tongue images.", "id": 672, "question": "Do they propose any further additions that could be made to improve generalisation to unseen speakers?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al. BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held-out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). 
Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held-out subset.", "id": 673, "question": "What are the characteristics of the dataset?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine with Articulate Assistant Advanced (AAA) software at INLINEFORM0 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.", "id": 674, "question": "What type of models are used for classification?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.", "id": 675, "question": "Do they compare to previous work?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame, and the upper-left 40x40 submatrix (1600 coefficients) is flattened and used as input.", "id": 676, "question": "How many instances does their dataset have?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. 
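Returning to the preprocessing just described, a minimal sketch of the DCT truncation and PCA fit is shown below; the 63x412 frame size and 40x40 truncation follow the text, while the toy data and the reduced component count are assumptions:

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def dct_features(frame: np.ndarray, k: int = 40) -> np.ndarray:
    """2-D DCT of one normalized 63x412 frame; the upper-left k x k block
    is flattened (1600 coefficients for k = 40)."""
    coeffs = dct(dct(frame, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

frames = np.random.randn(100, 63, 412)   # toy stand-in for normalized frames
x_dct = dct_features(frames[0])          # 1600-dimensional DCT input

# The paper keeps the top 1000 principal components ("eigentongues");
# 50 are used here only because this toy sample contains just 100 frames.
pca = PCA(n_components=50).fit(frames.reshape(len(frames), -1))
x_pca = pca.transform(frames[:1].reshape(1, -1))
```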
Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to fine-tune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top-performing systems.", "id": 677, "question": "What model do they use to classify phonetic segments? ", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "" ], "context": "Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower dimensional inputs allow the model to generalize better, and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized that the observed improvements are likely due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.", "id": 678, "question": "How many speakers do they have in the dataset?", "title": "Speaker-independent classification of phonetic segments from raw ultrasound in child speech" }, { "answers": [ "Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set." ], "context": "Recent developments in neural language modeling have generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with an attention mechanism BIBREF4 to Twitter-style microblogging data. 
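As a brief illustration of the leave-one-out (speaker-independent) protocol described earlier for the ultrasound systems, the sketch below uses scikit-learn's LeaveOneGroupOut; the classifier and data are toy stand-ins, not the paper's DNN/CNN systems:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Toy stand-ins: 300 frames from 10 speakers, 4 place-of-articulation classes.
X = np.random.randn(300, 1600)
y = np.random.randint(0, 4, size=300)
speakers = np.repeat(np.arange(10), 30)

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
    clf = LogisticRegression(max_iter=200).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(np.mean(scores))  # average accuracy over held-out speakers
```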
Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model, for example: increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic-aware dialog systems BIBREF8.", "id": 679, "question": "How better is proposed method than baselines perplexity-wise?", "title": "A Multi-Turn Emotionally Engaging Dialog Model" }, { "answers": [ "" ], "context": "Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various works improving the quality of the responses, especially the emotional aspects of the conversations.", "id": 680, "question": "How does the multi-turn dialog system learn?", "title": "A Multi-Turn Emotionally Engaging Dialog Model" }, { "answers": [ "" ], "context": "In this paper, we consider the problem of generating a response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here", "id": 681, "question": "How is human evaluation performed?", "title": "A Multi-Turn Emotionally Engaging Dialog Model" }, { "answers": [ "" ], "context": "The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,", "id": 682, "question": "Are other metrics besides perplexity measured?", "title": "A Multi-Turn Emotionally Engaging Dialog Model" }, { "answers": [ "" ], "context": "In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. 
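A toy sketch of this LIWC-style category lookup follows; the word lists are invented stand-ins (the real LIWC2015 dictionary is proprietary and far larger), and the function mirrors the indicator-vector construction described next:

```python
import numpy as np

# Toy category word lists; the real system looks words up in LIWC2015.
CATEGORIES = {
    "positive": {"love", "happy"},
    "negative": {"hate", "awful"},
    "anxious":  {"worried", "nervous"},
    "angry":    {"furious", "mad"},
    "sad":      {"sad", "cry"},
}

def emotion_indicator(utterance: str) -> np.ndarray:
    """Six-dimensional indicator: five emotion categories plus neutral."""
    words = set(utterance.lower().split())
    vec = np.zeros(6)
    for i, lexicon in enumerate(CATEGORIES.values()):
        if words & lexicon:
            vec[i] = 1.0
    if vec[:5].sum() == 0:
        vec[5] = 1.0          # neutral when no emotion word matches
    return vec

print(emotion_indicator("he is worried about me"))  # [0. 0. 1. 0. 0. 0.]
```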
Using the newest version of the program, LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector $\mathbf {1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in $\mathbf {1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of $\mathbf {1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then", "id": 683, "question": "What two baseline models are used?", "title": "A Multi-Turn Emotionally Engaging Dialog Model" }, { "answers": [ "" ], "context": "Building knowledge graphs (KG) over Web corpora is an important problem that has galvanized effort from multiple communities over two decades BIBREF0 , BIBREF1 . Automated knowledge graph construction from Web resources involves several different phases. The first phase involves domain discovery, which constitutes identification of sources, followed by crawling and scraping of those sources BIBREF2 . A contemporaneous ontology engineering phase is the identification and design of key classes and properties in the domain of interest (the domain ontology) BIBREF3 .", "id": 684, "question": "Do they evaluate on relation extraction?", "title": "Information Extraction in Illicit Domains" }, { "answers": [ "" ], "context": "A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how well semantic processing performs, even though parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analyses of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.", "id": 685, "question": "What is the baseline model for the agreement-based mode?", "title": "Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data" }, { "answers": [ "" ], "context": "An L2-L1 parallel corpus can greatly facilitate the analysis of a learner language BIBREF9 . Following mizumoto:2011, we collected a large dataset of L2-L1 parallel texts of Mandarin Chinese by exploring “language exchange\" social networking services (SNS), i.e., Lang-8, a language-learning website where native speakers can freely correct the sentences written by foreign learners. The proficiency levels of the learners are diverse, but most of the learners, according to our judgment, are of intermediate or lower level.", "id": 686, "question": "Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages?", "title": "Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data" }, { "answers": [ "Authors" ], "context": "Semantic role labeling (SRL) is the process of assigning semantic roles to constituents or their head words in a sentence according to their relationship to the predicates expressed in the sentence. Typical semantic roles can be divided into core arguments and adjuncts. 
The core arguments include Agent, Patient, Source, Goal, etc., while the adjuncts include Location, Time, Manner, Cause, etc.", "id": 687, "question": "Who manually annotated the semantic roles for the set of learner texts?", "title": "Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data" }, { "answers": [ "" ], "context": "We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.", "id": 688, "question": "By how much do they outperform existing state-of-the-art VQA models?", "title": "Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining" }, { "answers": [ "" ], "context": "Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.", "id": 689, "question": "How do they measure the correlation between manual groundings and model generated ones?", "title": "Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining" }, { "answers": [ "they are available in the Visual Genome dataset" ], "context": "Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.", "id": 690, "question": "How do they obtain region descriptions and object annotations?", "title": "Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining" }, { "answers": [ "MultiNLI" ], "context": "Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. 
Both of them have been very successful, with the state of the art being 90.4% on SNLI (our baseline with BERT BIBREF0 ) and 86.7% on MultiNLI BIBREF0 , respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break rather easily when, instead of being tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was most difficult to break and had the least loss in accuracy was the system by BIBREF2 , which utilizes external knowledge taken from WordNet BIBREF3 .", "id": 691, "question": "Which training dataset allowed for the best generalization to benchmark sets?", "title": "Testing the Generalization Power of Neural Network Models Across NLI Benchmarks" }, { "answers": [ "" ], "context": "Skepticism about the ability of NLI systems to generalize has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drop dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous, with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.", "id": 692, "question": "Which model generalized the best?", "title": "Testing the Generalization Power of Neural Network Models Across NLI Benchmarks" }, { "answers": [ "BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT" ], "context": "In this section we describe the datasets and model architectures included in the experiments.", "id": 693, "question": "Which models were compared?", "title": "Testing the Generalization Power of Neural Network Models Across NLI Benchmarks" }, { "answers": [ "" ], "context": "We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.", "id": 694, "question": "Which datasets were used?", "title": "Testing the Generalization Power of Neural Network Models Across NLI Benchmarks" }, { "answers": [ "" ], "context": "Social networks are currently extremely popular. Some of the biggest, such as Facebook, Twitter, and YouTube, have enormous numbers of users. Thus, controlling the content on those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions of euros in this task BIBREF0, BIBREF1. 
However, these efforts are not enough, since they are primarily based on manual moderation to identify and delete offensive material. The process is labour-intensive, time-consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.", "id": 695, "question": "What was the baseline?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "" ], "context": "In this section, we present the system architecture. It covers how we pre-process text, what types of text representation we use, and the models used in our system. In the end, we combine the model results by using an ensemble technique.", "id": 696, "question": "Is the data all in Vietnamese?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "" ], "context": "The fundamental idea of this system is to build a model that views an input from diverse perspectives. This is necessary because of the variety of meanings in Vietnamese, especially in acronyms and teen code. To achieve this diversity, after cleaning the raw input text, we apply multiple types of word tokenizers. We combine each of these tokenizers with several representation methods, including word-to-vector methods such as continuous bag of words (CBOW) BIBREF5 and pre-trained embeddings such as fasttext (trained on Vietnamese Wikipedia) BIBREF6 and sonvx (trained on Vietnamese newspapers) BIBREF7. Each sentence then has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of the sentence. We also produce a sentence embedding using the RoBERTa architecture BIBREF8. The CBOW and RoBERTa models are trained on text from several resources, including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD, and text crawled from Facebook. Given the sentence representations, we use several classification models to classify the input sentences; these models are described in detail in Section SECREF13. We then combine the multiple model outputs with an ensemble method to produce the final result. The ensemble method we use is stacking, introduced in Section SECREF16.", "id": 697, "question": "What classifier do they use?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set)." ], "context": "The content provided in this HSD task is very diverse. Words with the same meaning are written in various forms (teen code, text without tone marks, emojis, ...) depending on the user's style. The dataset was crawled from various sources with multiple text encodings. To simplify training, all encodings need to be unified. This cleaning module is used in two processes: cleaning data before training and cleaning input during inference. The data processing steps we use are as follows:", "id": 698, "question": "What is private dashboard?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "Public dashboard where competitors can see their results during competition, on part of the test set (public test set)." ], "context": "The social comment dataset has high variety; the core idea is to use multiple model architectures to handle the data from many viewpoints. 
In our system, we use five different model architectures combining various types of CNNs and RNNs. Each model uses several types of word embeddings or directly handles sentence embeddings to achieve the best overall result. The source code of the five models is extended from the GitHub repository", "id": 699, "question": "What is public dashboard?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper)." ], "context": "Ensembling is a machine learning technique that combines several base models in order to produce one optimal predictive model. There are three main types of ensemble methods: Bagging, Boosting, and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only the class id but also the probability of each class in the set of three classes. These probabilities become features for the ensemble model. The stacking ensemble model here is a simple fully-connected model whose input is all of the probabilities output by the sub-models. The output is the probability of each class.", "id": 700, "question": "What dataset do they use?", "title": "VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination" }, { "answers": [ "" ], "context": "The main motivation of this work started with the question \"What do people do to maintain their health?\" – some people follow a balanced diet, some exercise. Among diet plans, some people maintain a vegetarian or vegan diet; among exercises, some people swim, cycle or do yoga. There are people who do both. If we want to know the answers to questions such as \"How many people follow a diet?\", \"How many people do yoga?\", or \"Do yogis follow a vegetarian/vegan diet?\", we could ask our acquaintances, but this would provide very little insight into the data. Nowadays people usually share their interests and thoughts via discussions, tweets, and statuses on social media (e.g. Facebook, Twitter, Instagram). This is a huge amount of data, and it is not possible to go through it all manually. We need to mine the data to get overall statistics, and then we will also be able to find some interesting correlations in the data.", "id": 701, "question": "Do the authors report results only on English data?", "title": "Yoga-Veganism: Correlation Mining of Twitter Health Data" }, { "answers": [ "" ], "context": "Tweet messages are retrieved from the Twitter source by utilizing the Twitter API and stored in Kafka topics. The Producer API is used to connect the source (i.e. Twitter) to any Kafka topic as a stream of records for a specific category. We fetch data from a source (Twitter), push it to a message queue, and consume it for further analysis. Fig. FIGREF2 shows the overview of Twitter data collection using Kafka.", "id": 702, "question": "What other interesting correlations are observed?", "title": "Yoga-Veganism: Correlation Mining of Twitter Health Data" }, { "answers": [ "" ], "context": "Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. 
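Returning to the stacking ensemble described earlier, the sketch below stacks the sub-models' class probabilities as meta-features. The paper's meta-model is a small fully-connected network, for which logistic regression serves here as a simpler stand-in, and all data are toy placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_features(prob_outputs):
    """Concatenate per-class probabilities from every sub-model into one
    feature vector per example."""
    return np.hstack(prob_outputs)

# Toy stand-ins: 5 sub-models, 200 examples, 3 classes (clean/offensive/hate).
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=200) for _ in range(5)]
y = rng.integers(0, 3, size=200)

meta = LogisticRegression(max_iter=500).fit(stack_features(probs), y)
final_probs = meta.predict_proba(stack_features(probs))  # per-class output
```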
Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.", "id": 703, "question": "what were the baselines?", "title": "Joint Learning of Sentence Embeddings for Relevance and Entailment" }, { "answers": [ "" ], "context": "Formally, the Hypothesis Evaluation task is to build a function INLINEFORM0 , where INLINEFORM1 is a binary label (no towards yes) and INLINEFORM2 is a hypothesis instance in the form of question text INLINEFORM3 and a set of INLINEFORM4 evidence texts INLINEFORM5 as extracted from an evidence-carrying corpus.", "id": 704, "question": "what is the state of the art for ranking mc test answers?", "title": "Joint Learning of Sentence Embeddings for Relevance and Entailment" }, { "answers": [ "2427" ], "context": "Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for the running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.", "id": 705, "question": "what is the size of the introduced dataset?", "title": "Joint Learning of Sentence Embeddings for Relevance and Entailment" }, { "answers": [ "" ], "context": "The AI2 Elementary School Science Questions dataset (no-diagrams variant) released by the Allen Institute covers 855 basic four-choice questions regarding high school science and follows up on the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.", "id": 706, "question": "what datasets did they use?", "title": "Joint Learning of Sentence Embeddings for Relevance and Entailment" }, { "answers": [ "" ], "context": "In recent years, the Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.", "id": 707, "question": "What evaluation metric is used?", "title": "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning" }, { "answers": [ "" ], "context": "Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input, where $n$ is the length of the input. It transforms the word embeddings into a sequence of hidden representations ${z} = (z_1, \cdots , z_n)$. 
Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.", "id": 708, "question": "What datasets are used?", "title": "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning" }, { "answers": [ "" ], "context": "Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:", "id": 709, "question": "What are three main machine translation tasks?", "title": "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning" }, { "answers": [ "" ], "context": "We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation, because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. This is because the original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.", "id": 710, "question": "How big is improvement in performance over Transformers?", "title": "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning" }, { "answers": [ "" ], "context": "Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts BIBREF0 . The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “I love the operating system and preloaded software”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase to be regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it BIBREF1 .", "id": 711, "question": "How do they determine the opinion summary?", "title": "Aspect Term Extraction with History Attention and Selective Transformation" }, { "answers": [ "" ], "context": "Given a sequence INLINEFORM0 of INLINEFORM1 words, the ATE task can be formulated as a token/word level sequence labeling problem to predict an aspect label sequence INLINEFORM2 , where each INLINEFORM3 comes from a finite label set INLINEFORM4 which describes the possible aspect labels. As shown in the example below:", "id": 712, "question": "Do they explore how useful is the detection history and opinion summary?", "title": "Aspect Term Extraction with History Attention and Selective Transformation" }, { "answers": [ "" ], "context": "As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. 
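An aside on the MUSE self-attention above: its output equation is elided in this excerpt. Assuming the standard scaled dot-product formulation (an assumption, since the paper's exact equation is not shown here), it reads:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,
\qquad Q = XW^{Q},\quad K = XW^{K},\quad V = XW^{V}
```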
THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.", "id": 713, "question": "Which dataset(s) do they use to train the model?", "title": "Aspect Term Extraction with History Attention and Selective Transformation" }, { "answers": [ "" ], "context": "All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution INLINEFORM0 ( INLINEFORM1 ) and the gold distribution INLINEFORM2 as the loss function: DISPLAYFORM0", "id": 714, "question": "By how much do they outperform state-of-the-art methods?", "title": "Aspect Term Extraction with History Attention and Selective Transformation" }, { "answers": [ "" ], "context": "Voice-based “personal assistants\" such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 is good enough for them to be deployed in real-world products, and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems starting from the early research work BIBREF5 to the present commercially available personal assistants largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand-crafted, with inherent constraints on what users can say, how the system responds and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation.", "id": 715, "question": "What is the average number of turns per dialog?", "title": "Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset" }, { "answers": [ "" ], "context": "BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. 
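An aside on the token-level loss above: the display itself is elided in this excerpt. A generic sketch of a token-level cross-entropy for sequence labeling follows (tensor names and shapes are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def sequence_labeling_loss(logits: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Token-level cross-entropy between predicted and gold label distributions.

    logits: (batch, seq_len, n_labels) unnormalized scores per token.
    gold:   (batch, seq_len) gold label ids per token.
    """
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), gold.reshape(-1))
```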
Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14.", "id": 716, "question": "What baseline models are offered?", "title": "Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset" }, { "answers": [ "" ], "context": "The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes who controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and a slower turn-taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding.", "id": 717, "question": "Which six domains are covered in the dataset?", "title": "Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset" }, { "answers": [ "" ], "context": "The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistics, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. 
Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.", "id": 718, "question": "What other natural processing tasks authors think could be studied by using word embeddings?", "title": "Using word embeddings to improve the discriminability of co-occurrence text networks" }, { "answers": [ "" ], "context": "Complex networks have been used in a wide range of fields, including in Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever-present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompass semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus on the stylometric analysis of texts using complex networks.", "id": 719, "question": "What is the reason that traditional co-occurrence networks fail in establishing links between similar words whenever they appear distant in the text?", "title": "Using word embeddings to improve the discriminability of co-occurrence text networks" }, { "answers": [ "They use it as an addition to the previous model – they add a new edge between words if their word embeddings are similar." ], "context": "To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly articles and prepositions, which may be artlessly represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.", "id": 720, "question": "Do the use word embeddings alone or they replace some previous features of the model with word embeddings?", "title": "Using word embeddings to improve the discriminability of co-occurrence text networks" }, { "answers": [ "" ], "context": "In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges.", "id": 721, "question": "On what model architectures are previous co-occurence networks based?", "title": "Using word embeddings to improve the discriminability of co-occurrence text networks" }, { "answers": [ "" ], "context": "Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. 
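An aside on the word adjacency networks above: a minimal sketch of a co-occurrence network enriched with embedding-based virtual edges (the similarity threshold is an illustrative assumption, not the paper's actual criterion):

```python
import itertools
import networkx as nx
import numpy as np

def build_network(tokens, embeddings, sim_threshold=0.8):
    """Word adjacency network plus virtual edges between similar words (sketch).

    `embeddings` maps each word to a vector; `sim_threshold` is illustrative.
    """
    g = nx.Graph()
    # Standard co-occurrence edges between adjacent words.
    for w1, w2 in zip(tokens, tokens[1:]):
        g.add_edge(w1, w2)
    # Virtual edges between semantically similar but possibly distant words.
    for w1, w2 in itertools.combinations(set(tokens), 2):
        v1, v2 = embeddings[w1], embeddings[w2]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if cos >= sim_threshold:
            g.add_edge(w1, w2, virtual=True)
    return g
```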
Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image, which shows static people.", "id": 722, "question": "Is model explanation output evaluated, what metric was used?", "title": "e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations" }, { "answers": [ "" ], "context": "The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:", "id": 723, "question": "How many annotators are used to write natural language explanations to SNLI-VE-2.0?", "title": "e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations" }, { "answers": [ "In total, 6,980 validation and test image-sentence pairs were corrected." ], "context": "In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave re-annotation of the training set for future work, as it would likely lead to training better VTE models. We also chose not to re-annotate the entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).", "id": 724, "question": "How many natural language explanations are human-written?", "title": "e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations" }, { "answers": [ "" ], "context": "Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.", "id": 725, "question": "How much is performance difference of existing model between original and corrected corpus?", "title": "e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations" }, { "answers": [ "" ], "context": "To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation was not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.", "id": 726, "question": "What is the class with highest error rate in SNLI-VE?", "title": "e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations" }, { "answers": [ "Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words" ], "context": "In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. 
In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from the others). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a clear need to encode words in a space that is linked to their meaning, in order to facilitate a machine in the task of “understanding\" them. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities.", "id": 727, "question": "What is the dataset used as input to the Word2Vec algorithm?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. It can be trained in two different modes, algorithmically similar, but different in concept: the Continuous Bag-of-Words (CBOW) model and the Skip-Gram model. While CBOW tries to predict the target word from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot vectors at the input and output).", "id": 728, "question": "Are the word embeddings tested on a NLP task?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "The common words (such as “the\", “of\", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems, the W2V algorithm implements a particular “subsampling\" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.", "id": 729, "question": "Are the word embeddings evaluated?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. 
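An aside on the subsampling above: a sketch using the standard word2vec keep-probability $\sqrt{t/f(w)}$ (the threshold value below is the common default, assumed here rather than taken from the paper):

```python
import math
import random

def keep_word(word: str, freq: dict, t: float = 1e-5) -> bool:
    """Subsampling of frequent words (standard word2vec heuristic, sketch).

    freq[word] is the word's relative corpus frequency; words are kept with
    probability sqrt(t / f(w)), so very frequent words are often dropped.
    """
    f = freq[word]
    p_keep = math.sqrt(t / f) if f > t else 1.0
    return random.random() < p_keep
```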
So, although very simple, the network has a considerable number of parameters to train, which leads to an excessive computational cost if we have to backpropagate through all the elements of the one-hot output vector.", "id": 730, "question": "How big is dataset used to train Word2Vec for the Italian Language?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.", "id": 731, "question": "How does different parameter settings impact the performance and semantic capacity of resulting model?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed of two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\").", "id": 732, "question": "Are the semantic analysis findings for Italian language similar to English language version?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "We first analysed 6 different implementations of the Skip-gram model, each one trained for 20 epochs. Table TABREF10 shows the accuracy values (only on possible analogies) at the 20th epoch for the six models, using both 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, with respect to 3COSMUL, seems to have slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the database (mainly composed of articles and news that principally use an impersonal language). In any case, the improvements that are obtained under the syntactic profile lead to the 3COSMUL metric obtaining better overall results.", "id": 733, "question": "What dataset is used for training Word2Vec in Italian language?", "title": "An Analysis of Word2Vec for the Italian Language" }, { "answers": [ "" ], "context": "Morphologically complex words (MCWs) are multi-layer structures which consist of different subunits, each of which carries semantic information and has a specific syntactic role. Table 1 gives a Turkish example to show this type of complexity. This example is a clear indication that word-based models are not suitable to process such complex languages. Accordingly, when translating MRLs, it might not be a good idea to treat words as atomic units as it demands a large vocabulary that imposes extra overhead. 
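An aside on the two analogy metrics above: they follow the standard definitions of Levy and Goldberg. A sketch (not code from the paper), assuming unit-normalized vectors:

```python
import numpy as np

def analogy(a, a_star, b, vocab_vecs, mode="3COSMUL", eps=1e-3):
    """Solve a : a* = b : b* and return the predicted b* (standard definitions).

    `vocab_vecs` maps words to unit-normalized numpy vectors.
    """
    shift = lambda x: (x + 1) / 2  # map cosines into [0, 1] for 3COSMUL
    va, vas, vb = vocab_vecs[a], vocab_vecs[a_star], vocab_vecs[b]
    best, best_score = None, -np.inf
    for w, vw in vocab_vecs.items():
        if w in (a, a_star, b):
            continue
        cos_b, cos_as, cos_a = vw @ vb, vw @ vas, vw @ va
        if mode == "3COSADD":
            score = cos_b + cos_as - cos_a
        else:  # 3COSMUL
            score = shift(cos_b) * shift(cos_as) / (shift(cos_a) + eps)
        if score > best_score:
            best, best_score = w, score
    return best
```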
Since MCWs can appear in various forms, we require a very large vocabulary to $i$) cover as many morphological forms and words as we can, and $ii$) reduce the number of OOVs. Neural models by their nature are complex, and we do not want to make them more complicated by working with large vocabularies. Furthermore, even if we have quite a large vocabulary set, clearly some words would remain uncovered by it. This means that a large vocabulary not only complicates the entire process, but also does not necessarily mitigate the OOV problem. For these reasons we propose an NMT engine which works at the character level.", "id": 734, "question": "How are the auxiliary signals from the morphology table incorporated in the decoder?", "title": "Improving Character-based Decoding Using Target-Side Morphological Information for Neural Machine Translation" }, { "answers": [ "" ], "context": "There are several models for NMT of MRLs which are designed to deal with morphological complexities. garcia2016factored and sennrich-haddow:2016:WMT adapted the factored machine translation approach to neural models. Morphological annotations can be treated as extra factors in such models. jean-EtAl:2015:ACL-IJCNLP proposed a model to handle very large vocabularies. luong-EtAl:2015:ACL-IJCNLP addressed the problem of rare words and OOVs with the help of a post-translation phase to exchange unknown tokens with their potential translations. sennrich2015neural used subword units for NMT. The model relies on frequent subword units instead of words. costajussa-fonollosa:2016:P16-2 designed a model for translating from MRLs. The model encodes source words with a convolutional module proposed by kim2015character. Each word is represented by a convolutional combination of its characters.", "id": 735, "question": "What type of morphological information is contained in the \"morphology table\"?", "title": "Improving Character-based Decoding Using Target-Side Morphological Information for Neural Machine Translation" }, { "answers": [ "" ], "context": "Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.", "id": 736, "question": "Do they report results only on English data?", "title": "Learning Twitter User Sentiments on Climate Change with Limited Labeled Data" }, { "answers": [ "" ], "context": "We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). 
All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets.", "id": 737, "question": "Do the authors mention any confounds to their study?", "title": "Learning Twitter User Sentiments on Climate Change with Limited Labeled Data" }, { "answers": [ "" ], "context": "Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .", "id": 738, "question": "Which machine learning models are used?", "title": "Learning Twitter User Sentiments on Climate Change with Limited Labeled Data" }, { "answers": [ "Influential tweeters (whom they define as individuals certain to have a classifiable sentiment regarding the topic at hand) are used to label tweets in bulk in the absence of manually-labeled tweets." ], "context": "Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.", "id": 739, "question": "What methodology is used to compensate for limited labelled data?", "title": "Learning Twitter User Sentiments on Climate Change with Limited Labeled Data" }, { "answers": [ "" ], "context": "In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. 
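A sketch of this within-cohort pre/post comparison (the exact statistical test is not stated in this excerpt; a paired t-test over per-user mean sentiments is one natural choice, assumed here with illustrative values):

```python
import numpy as np
from scipy import stats

# Per-user mean binary sentiment (+1 affirming / -1 denying), aligned by user.
pre = np.array([-1.0, 0.2, 0.5, -0.4])   # illustrative values only
post = np.array([-0.6, 0.4, 0.7, -0.1])
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"within-cohort shift = {post.mean() - pre.mean():+.2f}, p = {p_value:.3f}")
```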
Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.", "id": 740, "question": "Which five natural disasters were examined?", "title": "Learning Twitter User Sentiments on Climate Change with Limited Labeled Data" }, { "answers": [ "" ], "context": "A common social media delivery system such as Twitter supports various media types like video, image and text. This media allows users to share their short posts called Tweets. Users are able to share their tweets with other users, who usually follow the source user. However, there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in the Twitter micro-blogging social medium is oriented towards their daily life, first-hand news reporting and engaging in various events (sports, political stands etc.). According to studies, news on Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first-hand news and entities occurring in this fast and versatile online medium gives valuable information. However, the abridged and noisy content of tweets makes tasks such as named entity recognition and information retrieval even more difficult and challenging BIBREF2.", "id": 741, "question": "Which social media platform is explored?", "title": "A multimodal deep learning approach for named entity recognition from social media" }, { "answers": [ "" ], "context": "Many algorithms and methods have been proposed to detect, classify or extract information from a single type of data such as audio, text or image. However, in the case of social media, data comes in a variety of types, such as text, image, video or audio, in a bounded style. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, location etc. From a multimodal learning perspective, jointly computing such data is considered to be more valuable in terms of representation and evaluation. The named entity recognition task, on the other hand, is the task of recognizing named entities from a sentence or group of sentences in a document format.", "id": 742, "question": "What datasets did they use?", "title": "A multimodal deep learning approach for named entity recognition from social media" }, { "answers": [ "Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention" ], "context": "The recognition of named entities from only textual data (the unimodal learning approach) is a well-studied and explored research area. As a prominent example of this category, the Stanford NER is a widely used baseline for many applications BIBREF18. The incorporation of non-local information in information extraction is proposed by the authors using Gibbs sampling. The conditional random field (CRF) approach used in this article creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. Also, the Viterbi algorithm has been used to infer the most likely state in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.", "id": 743, "question": "What are the baseline state of the art models?", "title": "A multimodal deep learning approach for named entity recognition from social media" }, { "answers": [ "" ], "context": "Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. 
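Returning briefly to the CRF formulation above: Equation DISPLAY_FORM5 itself is elided in this excerpt. A linear-chain CRF is conventionally written as follows (the standard form, assumed here rather than copied from the paper):

```latex
P(\mathbf{y} \mid \mathbf{x}) =
  \frac{1}{Z(\mathbf{x})}
  \exp\!\Big(\sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)\Big)
```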
This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than a 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries face distinct challenges with sexual violence BIBREF3, sexual violence is ubiquitous. In the United States, for example, there are on average >300,000 people who are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimates, due to reasons like guilt, blame, doubt and fear, which stop many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which then allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become up-standers, who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions:", "id": 744, "question": "What is the size of the dataset?", "title": "Uncover Sexual Harassment Patterns from Personal Stories by Joint Key Element Extraction and Categorization" }, { "answers": [ "" ], "context": "Conventional surveys and reports are often used to study sexual harassment, but harassment is usually under-reported in them BIBREF2, BIBREF5. The high volume of social media data available online can provide us with a much larger collection of firsthand stories of sexual harassment. Social media data has already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14.", "id": 745, "question": "What model did they use?", "title": "Uncover Sexual Harassment Patterns from Personal Stories by Joint Key Element Extraction and Categorization" }, { "answers": [ "" ], "context": "We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. 
“harasser\", “time\", “location\", “trigger\"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below.", "id": 746, "question": "What patterns were discovered from the stories?", "title": "Uncover Sexual Harassment Patterns from Personal Stories by Joint Key Element Extraction and Categorization" }, { "answers": [ "" ], "context": "The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with identified key elements, one can easily categorize the incident in dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park) , “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively.", "id": 747, "question": "Did they use a crowdsourcing platform?", "title": "Uncover Sexual Harassment Patterns from Personal Stories by Joint Key Element Extraction and Categorization" }, { "answers": [ "" ], "context": "Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task.", "id": 748, "question": "Does the performance increase using their method?", "title": "Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding" }, { "answers": [ "" ], "context": "Our model has a word embedding layer, followed by a bi-directional LSTM (bi-LSTM), and a softmax output layer. The bi-LSTM allows the model to use information from both the right and left contexts of each word when making predictions. We choose this architecture because similar models have been used in prior work on slot filling and have achieved good results BIBREF16 , BIBREF11 . The LSTM gates are used as defined by Sak et al. including the use of the linear projection layer on the output of the LSTM BIBREF22 . The purpose of the projection layer is to produce a model with fewer parameters without reducing the number of LSTM memory cells. For the multi-task model, the word embeddings and the bi-LSTM parameters are shared across tasks but each task has its own softmax layer. 
This means that if the multi-task model has half a million parameters, only a couple thousand of them are unique to each task and the other 99.5% are shared between all of the tasks.", "id": 749, "question": "What tasks are they experimenting with in this paper?", "title": "Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding" }, { "answers": [ "" ], "context": "Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were to not include any other potential slots in the sentence but this instruction was not always followed by the workers.", "id": 750, "question": "What is the size of the open vocabulary?", "title": "Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding" }, { "answers": [ "" ], "context": "Pre-trained language representation models, including feature-based methods BIBREF0 , BIBREF1 and fine-tuning methods BIBREF2 , BIBREF3 , BIBREF4 , can capture rich language information from text and then benefit many NLP tasks. Bidirectional Encoder Representations from Transformers (BERT) BIBREF4 , as one of the most recently developed models, has produced the state-of-the-art results by simple fine-tuning on various NLP tasks, including named entity recognition (NER) BIBREF5 , text classification BIBREF6 , natural language inference (NLI) BIBREF7 , question answering (QA) BIBREF8 , BIBREF9 , and has achieved human-level performances on several datasets BIBREF8 , BIBREF9 .", "id": 751, "question": "How do they select answer candidates for their QA task?", "title": "Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models" }, { "answers": [ "They identify documents that contain the unigrams 'caused', 'causing', or 'causes'" ], "context": "Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect.", "id": 752, "question": "How do they extract causality from text?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "Randomly selected from a Twitter dump, temporally matched to causal documents" ], "context": "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) 
We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "id": 753, "question": "What is the source of the \"control\" corpus?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "Presence of only the exact unigrams 'caused', 'causing', or 'causes'" ], "context": "Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research.", "id": 754, "question": "What are the selection criteria for \"causal statements\"?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "Only automatic methods" ], "context": "For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search.", "id": 755, "question": "Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "Randomly from a Twitter dump" ], "context": "Sentiment analysis was applied to estimate the emotional content of documents. 
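Returning to the cause-tree construction above: a greedy sketch in which raw frequency counts stand in for the probability estimates (the paper's exact estimation details are not shown in this excerpt):

```python
from collections import Counter

def grow_cause_tree(docs, root="causes", depth=3):
    """Binary cause-tree sketch: repeatedly extend the current phrase with its
    two most frequent following words (the `preceding' variant is symmetric)."""
    def top_two_followers(phrase):
        counts = Counter()
        k = len(phrase)
        for doc in docs:
            toks = doc.split()
            for i in range(len(toks) - k):
                if toks[i:i + k] == phrase:
                    counts[toks[i + k]] += 1
        return [w for w, _ in counts.most_common(2)]

    def grow(phrase, d):
        if d == 0:
            return {" ".join(phrase): {}}
        children = {}
        for w in top_two_followers(phrase):
            children.update(grow(phrase + [w], d - 1))
        return {" ".join(phrase): children}

    return grow([root], depth)
```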
Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information.", "id": 756, "question": "how do they collect the comparable corpus?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "Randomly from Twitter" ], "context": "Lastly, we applied topic modeling to the causal corpus to determine the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ , and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found that 10 topics provided meaningful and distinct topics.", "id": 757, "question": "How do they collect the control corpus?", "title": "What we write about when we write about causality: Features of causal statements across large-scale social discourse" }, { "answers": [ "" ], "context": "Question answering (QA) with neural networks, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain-specific knowledge significantly and makes domain adaptation easier. Therefore, it has attracted intensive attention in recent years.", "id": 758, "question": "What languages do they experiment with?", "title": "Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering" }, { "answers": [ "" ], "context": "In this work, we focus on open-domain factoid QA. Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from the web or unstructured knowledge bases, which can improve system coverage significantly.", "id": 759, "question": "What are the baselines?", "title": "Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering" }, { "answers": [ "" ], "context": "Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) a question LSTM for computing the question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM, a single-layer LSTM equipped with single-time attention, takes the question as input and generates the question representation INLINEFORM0 . 
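An aside on the topic-modeling step above: a small sketch using gensim as a stand-in for the MALLET toolkit named there (the toy corpus and parameters are illustrative only):

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [["storm", "caused", "damage"], ["stress", "causes", "illness"]]  # toy corpus
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bow, num_topics=10, id2word=dictionary)
for topic_id in range(10):
    print(lda.show_topic(topic_id, topn=5))  # the most probable unigrams, P(w|T)
```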
The three-layer evidence LSTMs take the evidence, the question representation INLINEFORM1 and optional features as input and produce “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections.", "id": 760, "question": "What was the inter-annotator agreement?", "title": "Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering" }, { "answers": [ "" ], "context": "Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to the current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0", "id": 761, "question": "Did they use a crowdsourcing platform?", "title": "Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering" }, { "answers": [ "" ], "context": "Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information.", "id": 762, "question": "Are resolution mode variables hand crafted?", "title": "Unsupervised Ranking Model for Entity Coreference Resolution" }, { "answers": [ "Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved." ], "context": "In the following, $D = \lbrace m_0, m_1, \ldots , m_n\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section \"Mention Detection\" . Let $C = \lbrace c_1, c_2, \ldots , c_n\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\lbrace 0, 1, \ldots , i-1\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \in \lbrace 1, 2, \ldots , i-1\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ ).", "id": 763, "question": "What are resolution model variables?", "title": "Unsupervised Ranking Model for Entity Coreference Resolution" }, { "answers": [ "No, supervised models perform better for this task." ], "context": "The following is a straightforward way to build a generative model for coreference:", "id": 764, "question": "Is the model presented in the paper state of the art?", "title": "Unsupervised Ranking Model for Entity Coreference Resolution" }, { "answers": [ "" ], "context": "Recently, human-computer dialogue has emerged as a hot topic, which has attracted the attention of both academia and industry. In research, natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been advanced by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , NLU technology has made great progress. DM technology has developed from rule-based and supervised learning based approaches to reinforcement learning based approaches BIBREF15 . 
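One note on the recurrence defined for the sequence-labeling model above: the display itself is elided in this excerpt, but from the surrounding text it has the generic form below (symbol names are assumptions):

```latex
(s_t,\; y_t) = D(x_t,\; s_{t-1},\; y_{t-1})
```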
NLG technology has progressed through pattern-based approaches, sentence planning approaches and end-to-end deep learning approaches BIBREF16 , BIBREF17 , BIBREF18 . In application, there are numerous products based on human-computer dialogue technology, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo.", "id": 765, "question": "What problems are found with the evaluation scheme?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "" ], "context": "The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue.", "id": 766, "question": "How is the data annotated?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "" ], "context": "When using human-computer dialogue based applications, humans may have various intents, for example, chit-chatting, asking questions, booking air tickets, inquiring about the weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information.", "id": 767, "question": "What collection steps do they mention?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "" ], "context": "For task-oriented dialogue systems, the best way to evaluate is through online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluates the system using metrics such as user satisfaction degree and dialogue fluency. Therefore, in task 2, we use online testing of task-oriented dialogue for dialogue systems. For a human tester, we give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as follows:", "id": 768, "question": "How many intents were classified?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2" ], "context": "In the evaluation, all the data for training, development and testing is provided by the iFLYTEK Corporation.", "id": 769, "question": "What was the result of the highest performing system?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "" ], "context": "74 participants signed up for the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Tables TABREF14 and TABREF15 show the evaluation results of the closed test and open test of task 1, respectively. Due to space limitations, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the full version of the paper.", "id": 770, "question": "What metrics are used in the evaluation?", "title": "The First Evaluation of Chinese Human-Computer Dialogue Technology" }, { "answers": [ "" ], "context": "Question answering has been a long-standing research problem.
Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 .", "id": 771, "question": "How do they measure the quality of summaries?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "" ], "context": "The task considered in this paper is defined as:", "id": 772, "question": "Does their model also take the expected answer style as input?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "well-formed sentences vs concise answers" ], "context": "Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style.", "id": 773, "question": "What do they mean by answer styles?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "" ], "context": "Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured.", "id": 774, "question": "Is there exactly one "answer style" per dataset?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D" ], "context": "The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: ", "id": 775, "question": "What are the baselines that Masque is compared against?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87" ], "context": "The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the answer possibility, i.e., the probability that the question is answerable. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: ", "id": 776, "question": "What is the performance achieved on NarrativeQA?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "well-formed sentences vs concise answers" ], "context": "Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step.", "id": 777, "question": "What is an "answer style"?", "title": "Multi-style Generative Reading Comprehension" }, { "answers": [ "" ], "context": "Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face.
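As a concrete companion to the Masque reader components described in the records above, here is a hedged PyTorch sketch of the first-token (beginning-of-sentence) pooling behind the passage ranker and answer possibility classifier; shapes and the sigmoid output layers are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

# Hedged sketch of first-token pooling for the passage ranker and the
# answer-possibility classifier described above; sizes are illustrative.
batch, n_passages, p_len, d = 2, 4, 30, 64
M = torch.randn(batch, n_passages, p_len, d)  # modeling-layer outputs M^{p_k}

w_rank = nn.Linear(d, 1)                   # relevance score per passage
pooled = M[:, :, 0, :]                     # output for the first (BOS) token
relevance = torch.sigmoid(w_rank(pooled)).squeeze(-1)  # (batch, n_passages)

w_cls = nn.Linear(n_passages * d, 1)       # answer possibility classifier
concat = pooled.reshape(batch, -1)         # concatenated first-token vectors
answerable = torch.sigmoid(w_cls(concat)).squeeze(-1)  # (batch,)
```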
In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. In addition, lip reading has practical potential in improved hearing aids, security, and silent dictation in public spaces. Lip reading is essentially a difficult problem, as most articulatory cues, besides the lips and sometimes the tongue and teeth, are latent and ambiguous. Several seemingly identical lip movements can produce different words.", "id": 778, "question": "What was the previous state of the art model for this task?", "title": "A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading" }, { "answers": [ "" ], "context": "In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mentioned in Section SECREF1 , pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how to pronounce a Chinese character and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follows: DISPLAYFORM0 ", "id": 779, "question": "What syntactic structure is used to model tones?", "title": "A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading" }, { "answers": [ "" ], "context": "The pinyin prediction sub-network transforms the video sequence into a pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with an attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, since the encoder processes the video sequence and the decoder predicts the pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual features. The output of conv5 of VGG is followed by global average pooling BIBREF10 to get a 512-dim feature vector. Then the 512-dim feature vector is fed into the video encoder. The video encoder can be denoted as: DISPLAYFORM0 ", "id": 780, "question": "What visual information characterizes tones?", "title": "A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading" }, { "answers": [ "" ], "context": "In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 .", "id": 781, "question": "Do they report results only on English data?", "title": "Dissecting Content and Context in Argumentative Relation Analysis" }, { "answers": [ "" ], "context": "It is well-known that the rhetorical and argumentative structure of texts bears great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers.
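The per-frame visual front end of the pinyin prediction sub-network above (VGG convolutional features followed by global average pooling into a 512-dim vector) can be sketched as follows, assuming torchvision >= 0.13; this is an illustrative approximation, not the paper's exact pipeline:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hedged sketch of the visual front end described above: VGG conv features
# followed by global average pooling, one 512-dim vector per video frame.
vgg = models.vgg16(weights=None).features.eval()  # convolutional layers only

frames = torch.randn(8, 3, 224, 224)              # 8 frames of a toy clip
with torch.no_grad():
    fmap = vgg(frames)                            # (8, 512, 7, 7)
    feats = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (8, 512)

# `feats` would then be fed, frame by frame, into the video encoder RNN.
```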
While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models.", "id": 782, "question": "How do they demonstrate the robustness of their results?", "title": "Dissecting Content and Context in Argumentative Relation Analysis" }, { "answers": [ "" ], "context": "In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types.", "id": 783, "question": "What baseline and classification systems are used in experiments?", "title": "Dissecting Content and Context in Argumentative Relation Analysis" }, { "answers": [ "Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level." ], "context": "Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context.", "id": 784, "question": "How are the EAU text spans annotated?", "title": "Dissecting Content and Context in Argumentative Relation Analysis" }, { "answers": [ "" ], "context": "Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ).", "id": 785, "question": "How are elementary argumentative units defined?", "title": "Dissecting Content and Context in Argumentative Relation Analysis" }, { "answers": [ "They collected tweets in Russian language using a heuristic query specific to Russian" ], "context": "The word semantic similarity task is an important part of contemporary NLP. It can be applied in many areas, like word sense disambiguation, information retrieval, information extraction and others. It has a long history of improvements, starting with simple models, like bag-of-words (often weighted by TF-IDF score), continuing with more complex ones, like LSA BIBREF0 , which attempts to find “latent” meanings of words and phrases, and even more abstract models, like NNLM BIBREF1 . The latest results build on neural network experience but are far simpler: various versions of Word2Vec, Skip-gram and CBOW models BIBREF2 , which currently show state-of-the-art results and have proven success with morphologically complex languages like Russian BIBREF3 , BIBREF4 .", "id": 786, "question": "Which Twitter corpus was used to train the word vectors?", "title": "Gibberish Semantics: How Good is Russian Twitter in Word Semantic Similarity Task?" }, { "answers": [ "Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391" ], "context": "Sindhi is a morphologically rich, multiscript, and multidialectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background.
Presently, it is recognized as an official language BIBREF1 in the Sindh province of Pakistan, and it is taught as a compulsory subject in schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of native Sindhi speakers. It is also spoken in countries besides Pakistan and India to which native Sindhi speakers have migrated, such as America, Canada, Hong Kong, Britain, Singapore, Tanzania, the Philippines, Kenya, Uganda, and South and East Africa. Sindhi has a rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, Sindhi-Devanagari is also a popular writing system in India, written in left-to-right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though Sindhi has a great historical and literary background, and is presently spoken by nearly 75 million people BIBREF1, research on SNLP only began in 2002, and it grabbed research attention only after the development of its Unicode system BIBREF3. Sindhi still stands among the low-resourced languages due to the scarcity of core language processing resources such as raw and annotated corpora, which could be utilized for training robust word embeddings or machine learning algorithms, since the development of annotated datasets requires time and human resources.", "id": 787, "question": "How does proposed word embeddings compare to Sindhi fastText word representations?", "title": "A New Corpus for Low-Resourced Sindhi Language with Word Embeddings" }, { "answers": [ "" ], "context": "Natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or software. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources, integrated in software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and the multilingual toolkit BIBREF9. But the Sindhi language is at an early stage in the development of such resources and software tools.", "id": 788, "question": "Are trained word embeddings used for any other NLP task?", "title": "A New Corpus for Low-Resourced Sindhi Language with Word Embeddings" }, { "answers": [ "908456 unique words are available in collected corpus." ], "context": "This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings.", "id": 789, "question": "How many unique words are in the dataset?", "title": "A New Corpus for Low-Resourced Sindhi Language with Word Embeddings" }, { "answers": [ "" ], "context": "We initiate this work from scratch by collecting a large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with the state-of-the-art CBoW, SG, and GloVe algorithms.
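A minimal gensim (>= 4.0) sketch of training CBoW and SG embeddings of this kind and querying cosine-similarity nearest neighbors follows; the romanized toy sentences stand in for the preprocessed Sindhi corpus and are purely illustrative:

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; in the paper's setting this would be the
# preprocessed Sindhi corpus (these romanized tokens are stand-ins).
sentences = [["sindhi", "boli", "ahe"], ["sindhi", "adab", "ahe"]]

cbow = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=1)

# Cosine-similarity nearest neighbors, as in the intrinsic evaluation below.
print(skipgram.wv.most_similar("sindhi", topn=2))
print(skipgram.wv.similarity("sindhi", "boli"))
```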
The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization.", "id": 790, "question": "How is the data collected, which web resources were used?", "title": "A New Corpus for Low-Resourced Sindhi Language with Word Embeddings" }, { "answers": [ "" ], "context": "Until recent times, the research in popular music was mostly bound to a non-computational approach BIBREF0, but the availability of new data, models and algorithms helped the rise of new research trends. Computational analysis of music structure BIBREF1 is focused on parsing and annotating patterns in music files; computational music generation BIBREF2 trains systems able to generate songs with specific music styles; computational sociology of music analyzes databases annotated with metadata such as tempo, key, BPMs and similar (generally referred to as sonic features); even psychology of music uses data to find new models.", "id": 791, "question": "What trends are found in musical preferences?", "title": "The Wiki Music dataset: A tool for computational analysis of popular music" }, { "answers": [ "" ], "context": "We define “popular music” as the music which finds appeal outside of culturally closed music groups, also thanks to its commercial nature. Non-popular music can be divided into three broad groups: classical music (produced and performed by experts with a specific education), folk/world music (produced and performed by traditional cultures), and utility music (such as hymns and military marches, not primarily intended for commercial purposes). Popular music is a great means for spreading culture, and a perfect ground where cultural practices and industry processes combine. In particular, cultural processes select novelties, broadly represented by underground music genres, and the industry tries to monetize them, making them commercially successful.
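The t-SNE-with-PCA visualization mentioned in the embedding evaluation at the top of this record block can be sketched with scikit-learn as follows; the random embedding matrix is a stand-in for real word vectors:

```python
import numpy as np
from sklearn.manifold import TSNE

# Minimal sketch of the t-SNE-with-PCA visualization described above:
# project word vectors to 2-D with PCA-initialized t-SNE. The embedding
# matrix here is random and purely illustrative.
vectors = np.random.rand(200, 300).astype(np.float32)  # 200 words, 300-dim

coords = TSNE(n_components=2, init="pca", perplexity=30,
              random_state=0).fit_transform(vectors)
print(coords.shape)  # (200, 2) -> ready to scatter-plot with word labels
```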
In the following description we include almost all the genres that reach commercial success and a few of the underground genres that are related to them.", "id": 792, "question": "Which decades did they look at?", "title": "The Wiki Music dataset: A tool for computational analysis of popular music" }, { "answers": [ "" ], "context": "From the description of music genres provided above, it emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in Figure FIGREF1.", "id": 793, "question": "How many genres did they collect from?", "title": "The Wiki Music dataset: A tool for computational analysis of popular music" }, { "answers": [ "" ], "context": "The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.", "id": 794, "question": "Does the paper mention other works proposing methods to detect anglicisms in Spanish?", "title": "An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines" }, { "answers": [ "" ], "context": "Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20.", "id": 795, "question": "What is the performance of the CRF model on the task described?", "title": "An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines" }, { "answers": [ "" ], "context": "Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations” BIBREF36.", "id": 796, "question": "Does the paper motivate the use of CRF as the baseline model?", "title": "An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines" }, { "answers": [ "" ], "context": "In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data.", "id": 797, "question": "What are the handcrafted features used?", "title": "An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines" }, { "answers": [ "" ], "context": "Deep generative models have attracted a lot of attention in recent years BIBREF0. Methods such as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 have been successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7.
Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings.", "id": 798, "question": "What is state of the art method?", "title": "Style Transfer for Texts: to Err is Human, but Error Margins Matter" }, { "answers": [ "" ], "context": "Style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author-style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with a binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. BIBREF24 and especially BIBREF9 generalize this ad-hoc approach, defining a style as a set of arbitrary quantitatively measurable categorical or continuous parameters. Such parameters could include the 'style of the time' BIBREF16, author-specific attributes (see BIBREF25 or BIBREF26 on 'shakespearization'), politeness BIBREF27, formality of speech BIBREF28, and gender or even political slant BIBREF29.", "id": 799, "question": "By how much do proposed architectures outperform state-of-the-art?", "title": "Style Transfer for Texts: to Err is Human, but Error Margins Matter" }, { "answers": [ "" ], "context": "In this work we experiment with extensions of a model described in BIBREF6, using the Texar BIBREF40 framework. To generate plausible sentences with specific semantic and stylistic features, every sentence is conditioned on a representation vector $z$ which is concatenated with a particular code $c$ that specifies the desired attribute, see Figure FIGREF8. Under the notation introduced in BIBREF6, the base autoencoder (AE) includes a conditional probabilistic encoder $E$ defined with parameters $\theta _E$ to infer the latent representation $z$ given input $x$", "id": 800, "question": "What are three new proposed architectures?", "title": "Style Transfer for Texts: to Err is Human, but Error Margins Matter" }, { "answers": [ "" ], "context": "We have found that the baseline, as well as the proposed extensions, have noisy outcomes when retrained from scratch, see Figure FIGREF1.
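A minimal PyTorch sketch of decoding conditioned on the latent representation concatenated with a style code, in the spirit of the setup just described, is shown below; dimensions and module choices are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of decoding conditioned on [z; c]: a latent representation
# z concatenated with a style code c. Sizes and modules are illustrative.
z_dim, c_dim, hid, vocab = 32, 2, 64, 1000

decoder_rnn = nn.GRU(z_dim + c_dim, hid, batch_first=True)
out_proj = nn.Linear(hid, vocab)

z = torch.randn(4, z_dim)                # latent representation E(x)
c = torch.tensor([[1.0, 0.0]] * 4)       # one-hot style code
zc = torch.cat([z, c], dim=-1)           # condition vector [z; c]

steps = zc.unsqueeze(1).repeat(1, 10, 1)  # feed [z; c] at every step
h, _ = decoder_rnn(steps)
logits = out_proj(h)                      # (4, 10, vocab) token scores
```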
Most of the papers mentioned in Section SECREF2 measure the performance of the methods proposed for the sentiment transfer with two metrics: accuracy of the external sentiment classifier measured on test data, and BLEU between the input and output, which is regarded as a coarse metric for semantic similarity.", "id": 801, "question": "How much do the standard metrics for style accuracy vary on different re-runs?", "title": "Style Transfer for Texts: to Err is Human, but Error Margins Matter" }, { "answers": [ "standard parametrized attention and a non-attention baseline" ], "context": "Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state-of-the-art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , and conversational modeling BIBREF9 , BIBREF10 .", "id": 802, "question": "Which baseline methods are used?", "title": "Efficient Attention using a Fixed-Size Memory Representation" }, { "answers": [ "Ranges from 44.22 to 100.00 depending on K and the sequence length." ], "context": "Our models are based on an encoder-decoder architecture with an attention mechanism BIBREF2 , BIBREF11 . An encoder function takes as input a sequence of source tokens $\mathbf {x} = (x_1, ..., x_m)$ and produces a sequence of states $\mathbf {s} = (s_1, ..., s_m)$ . The decoder is an RNN that predicts the probability of a target sequence $\mathbf {y} = (y_1, ..., y_T \mid \mathbf {s})$ . The probability of each target token $y_i \in \lbrace 1, ... ,|V|\rbrace $ is predicted based on the recurrent state in the decoder RNN, $h_i$ , the previous words, $y_{<i}$ , and a context vector $c_i$ . The context vector $c_i$ , also referred to as the attention vector, is calculated as a weighted average of the source states. ", "id": 803, "question": "How much is the BLEU score?", "title": "Efficient Attention using a Fixed-Size Memory Representation" }, { "answers": [ "Sequence Copy Task and WMT'17" ], "context": "Our proposed model is shown in Figure 1 . During encoding, we compute an attention matrix $C \in \mathbb {R}^{K \times D}$ , where $K$ is the number of attention vectors and a hyperparameter of our method, and $D$ is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector $\alpha _t \in \mathbb {R}^K$ at each encoding time step $t$ . $C$ is then a linear combination of the encoder states, weighted by $\alpha _t$ : ", "id": 804, "question": "Which datasets are used in experiments?", "title": "Efficient Attention using a Fixed-Size Memory Representation" }, { "answers": [ "" ], "context": "Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9.", "id": 805, "question": "What regularizers were used to encourage consistency in back translation cycles?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43" ], "context": "UBLI.
A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets.", "id": 806, "question": "What are new best results on standard benchmark?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06" ], "context": "We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency. Let $X=\lbrace x_1,...,x_n\rbrace $ and $Y=\lbrace y_1,...,y_m\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\mathcal {F}:X\rightarrow Y$ such that for each $x_i$, $\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\mathcal {G}:Y\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings.", "id": 807, "question": "How much better is performance compared to competitive baselines?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "" ], "context": "BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, taking the primal UBLI task as an example, the linear mapping $\mathcal {F}$ tries to generate “fake” word embeddings $\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\mathcal {F}}$max$_{D_y}\ell _{adv}(\mathcal {F},D_y,X,Y)$, where", "id": 808, "question": "How big is the data used in experiments?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI" ], "context": "We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\ell _{cycle}$) on each side to prevent $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other.
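The cycle-consistency regularizer just described can be sketched in a few lines of PyTorch; the choice of mean squared error and all sizes here are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hedged sketch of the cycle-consistency regularizer for the two linear
# mappings F: X -> Y and G: Y -> X described above.
d = 300
F_map = nn.Linear(d, d, bias=False)   # primal mapping F
G_map = nn.Linear(d, d, bias=False)   # dual mapping G

X = torch.randn(128, d)               # batch of source embeddings
Y = torch.randn(128, d)               # batch of target embeddings

mse = nn.MSELoss()
l_cycle = mse(G_map(F_map(X)), X) + mse(F_map(G_map(Y)), Y)
l_cycle.backward()                    # gradients flow into both mappings
```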
The overall architecture of our model is illustrated in Figure FIGREF4.", "id": 809, "question": "What 6 language pairs are experimented on?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "" ], "context": "We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find in adversarial training that the single-direction criterion $S(\mathcal {F}, X, Y)$ by BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of forward and backward scores:", "id": 810, "question": "What are current state-of-the-art methods that consider the two tasks independently?", "title": "Duality Regularization for Unsupervised Bilingual Lexicon Induction" }, { "answers": [ "" ], "context": "The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.", "id": 811, "question": "How big is their training set?", "title": "Team Papelo: Transformer Networks at FEVER" }, { "answers": [ "" ], "context": "The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input.", "id": 812, "question": "What baseline do they compare to?", "title": "Team Papelo: Transformer Networks at FEVER" }, { "answers": [ "" ], "context": "The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated.", "id": 813, "question": "Which pre-trained transformer do they use?", "title": "Team Papelo: Transformer Networks at FEVER" }, { "answers": [ "" ], "context": "Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as successfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim.
It achieves 66.1% evidence retrieval.", "id": 814, "question": "What is the FEVER task?", "title": "Team Papelo: Transformer Networks at FEVER" }, { "answers": [ "" ], "context": "Accurate and efficient computation of derivatives is vital for a wide variety of computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. Virtually every process could be described with a mathematical function, which can be thought of as an association between elements from different sets. Derivatives track how a varying quantity depends on another quantity, for example how the position of a planet varies as time varies.", "id": 815, "question": "How is correctness of automatic derivation proved?", "title": "Automatic Differentiation in ROOT" }, { "answers": [ "" ], "context": "Here, we briefly discuss the main algorithmic and implementation principles behind AD. An in-depth overview and more formal description can be found in BIBREF1 and BIBREF2, respectively.", "id": 816, "question": "Is this AD implementation used in any deep learning framework?", "title": "Automatic Differentiation in ROOT" }, { "answers": [ "" ], "context": "The sequence-to-sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has been shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence.", "id": 817, "question": "Do they conduct any human evaluation?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "" ], "context": "Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed at controlling the length of generated sentences in text summarization.", "id": 818, "question": "What dataset do they use for experiments?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative)." ], "context": "Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed-forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input.
Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): $\text{PE}(pos, 2i) = \sin (pos / 10000^{2i/d})$ and $\text{PE}(pos, 2i+1) = \cos (pos / 10000^{2i/d})$ (equations 1-2), where $pos$ is the position of the token in the sequence and $i$ indexes the encoding dimension.", "id": 819, "question": "How do they enrich the positional embedding with length information?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group." ], "context": "Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length.", "id": 820, "question": "How do they condition the output to a given target-source class?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "" ], "context": "We propose two methods to control the output length in NMT. In the first method we partition the training set into three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length.", "id": 821, "question": "Which languages do they focus on?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "" ], "context": "Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network discriminate between the groups (see Figure FIGREF2).
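A minimal sketch of the length-token preprocessing just described (bucketing by target/source character-length ratio and prepending a group token) follows; the threshold values are illustrative:

```python
# Minimal sketch of the length-token preprocessing described above:
# bucket each pair by target/source character-length ratio and prepend
# a group token to the source. Threshold values are illustrative.
T_MIN, T_MAX = 0.95, 1.05

def add_length_token(src: str, tgt: str) -> str:
    ratio = len(tgt) / max(len(src), 1)
    if ratio < T_MIN:
        token = "<short>"
    elif ratio > T_MAX:
        token = "<long>"
    else:
        token = "<normal>"
    return f"{token} {src}"

print(add_length_token("the cat sat on the mat", "il gatto era sul tappeto"))
# At inference, the prepended token is chosen by the user to bias length.
```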
At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group.", "id": 822, "question": "What dataset do they use?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "" ], "context": "Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number of characters. Then, the absolute approach encodes the remaining length by replacing pos with the difference $len - pos$ in the trigonometric encoding.", "id": 823, "question": "Do they experiment with combining both methods?", "title": "Controlling the Output Length of Neural Machine Translation" }, { "answers": [ "" ], "context": "The field of autonomous dialog systems is rapidly growing with the spread of smart mobile devices but it still faces many challenges to become the primary user interface for natural interaction through conversations. Indeed, when dialogs are conducted in noisy environments or when utterances themselves are noisy, correctly recognizing and understanding user utterances presents a real challenge. In the context of call-centers, efficient automation has the potential to boost productivity through increasing the probability of a call's success while reducing the overall cost of handling the call. One of the core components of a state-of-the-art dialog system is a dialog state tracker. Its purpose is to monitor the progress of a dialog and provide a compact representation of past user inputs and system outputs represented as a dialog state. The dialog state encapsulates the information needed to successfully finish the dialog, such as users' goals or requests. Indeed, the term “dialog state” loosely denotes an encapsulation of user needs at any point in a dialog. Obviously, the precise definition of the state depends on the associated dialog task. An effective dialog system must include a tracking mechanism which is able to accurately accumulate evidence over the sequence of turns of a dialog, and it must adjust the dialog state according to its observations. In that sense, it is an essential component of a dialog system. However, actual user utterances and corresponding intentions are not directly observable due to errors from Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), making it difficult to infer the true dialog state at any time of a dialog. A common method of modeling a dialog state is through the use of a slot-filling schema, as reviewed in BIBREF0 . In slot-filling, the state is composed of a predefined set of variables with a predefined domain of expression for each of them. The goal of the dialog system is to efficiently instantiate each of these variables thereby performing an associated task and satisfying the corresponding intent of the user.", "id": 824, "question": "What state-of-the-art models are compared against?", "title": "Spectral decomposition method of dialog state tracking via collective matrix factorization" }, { "answers": [ "" ], "context": "Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures.
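The absolute length encoding described above can be sketched directly, assuming the standard trigonometric form with the remaining length len - pos in place of the position pos; the function name and loop structure are illustrative:

```python
import math
import torch

# Hedged sketch of the absolute length encoding described above: the
# standard trigonometric encoding evaluated on the remaining length
# (len - pos) instead of the position pos. Names are illustrative.
def length_encoding(length: int, d: int) -> torch.Tensor:
    enc = torch.zeros(length, d)
    for pos in range(length):
        remaining = length - pos
        for i in range(0, d, 2):
            angle = remaining / (10000 ** (i / d))
            enc[pos, i] = math.sin(angle)
            if i + 1 < d:
                enc[pos, i + 1] = math.cos(angle)
    return enc

print(length_encoding(5, 8).shape)  # one d-dim vector per decoding position
```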
Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such as linear programming relaxations and greedy search.", "id": 825, "question": "Does the API provide the ability to connect to models written in some other deep learning framework?", "title": "Torch-Struct: Deep Structured Prediction Library" }, { "answers": [ "It uses deep learning framework (pytorch)" ], "context": "Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study.", "id": 826, "question": "Is this library implemented into Torch or is framework agnostic?", "title": "Torch-Struct: Deep Structured Prediction Library" }, { "answers": [ "" ], "context": "While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case.", "id": 827, "question": "What baselines are used in experiments?", "title": "Torch-Struct: Deep Structured Prediction Library" }, { "answers": [ "" ], "context": "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning.", "id": 828, "question": "What general-purpose optimizations are included?", "title": "Torch-Struct: Deep Structured Prediction Library" }, { "answers": [ "" ], "context": "Opinions are everywhere in our lives. Every time we open a book, read the newspaper, or look at social media, we scan for opinions or form them ourselves. We are cued to the opinions of others, and often use this information to update our own opinions Asch1955,Das2014. This is true on the Internet as much as it is in our face-to-face relationships.
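A short usage sketch of the Torch-Struct distribution API described above follows, using the library's linear-chain CRF; the random log-potentials stand in for the output of a deep network, and shapes follow the library's documented linear-chain convention as best understood here:

```python
import torch
from torch_struct import LinearChainCRF

# Usage sketch of the distribution-style API described above, assuming
# the torch_struct package is installed.
batch, N, C = 2, 6, 4                                  # batch, length, classes
log_potentials = torch.randn(batch, N - 1, C, C, requires_grad=True)

dist = LinearChainCRF(log_potentials)
mode = dist.argmax          # most likely label sequence (as edge parts)
marginals = dist.marginals  # differentiable edge marginals
entropy = dist.entropy      # H(CRF(l)), one value per batch element

marginals.sum().backward()  # backpropagation into the input potentials
```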
In fact, with its wealth of opinionated material available online, it has become feasible and interesting to harness this data in order to automatically identify opinions, which had previously been far more expensive and tedious when the only access to data was offline.", "id": 829, "question": "what baseline do they compare to?", "title": "Embedding Projection for Targeted Cross-Lingual Sentiment: Model Comparisons and a Real-World Study" }, { "answers": [ "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods." ], "context": "Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1 , BIBREF2 , and recently has used neural networks for supervised learning BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .", "id": 830, "question": "How does this compare to traditional calibration methods like Platt Scaling?", "title": "Improving Open Information Extraction via Iterative Rank-Aware Learning" }, { "answers": [ "word embeddings" ], "context": "We briefly revisit the formulation of open IE and the neural network model used in our paper.", "id": 831, "question": "What's the input representation of OpenIE tuples into the model?", "title": "Improving Open Information Extraction via Iterative Rank-Aware Learning" }, { "answers": [ "" ], "context": "Visual storytelling and album summarization tasks have recently been a focus in the domains of computer vision and natural language processing. With the advent of new architectures, solutions for problems like image captioning and language modeling are getting better. Therefore it is only natural to work towards storytelling; deeper visual context yields a more expressive language style, which could potentially improve various applications involving visual descriptions and visual question answering BIBREF0.", "id": 832, "question": "What statistics on the VIST dataset are reported?", "title": "Character-Centric Storytelling" }, { "answers": [ "" ], "context": "Nowadays speech processing is dominated by deep learning techniques. Deep neural network (DNN) acoustic models (AMs) for the tasks of automatic speech recognition (ASR) and speech synthesis have shown impressive performance for major languages such as English and Mandarin. Typically, training a DNN AM requires large amounts of transcribed data. For a large number of low-resource languages, for which very limited or no transcribed data are available, conventional methods of acoustic modeling are ineffective or even inapplicable.", "id": 833, "question": "What is the performance difference in unsupervised feature learning between adversarial training and FHVAE-based disentangled speech representation learning?", "title": "Combining Adversarial Training and Disentangled Speech Representation for Robust Zero-Resource Subword Modeling" }, { "answers": [ "" ], "context": "Humour is one of the most complex and intriguing phenomena of human language. It exists in various forms, across space and time, in literature and culture, and is a valued part of human interactions. Puns are one of the simplest and most common forms of humour in the English language.
They are also one of the most widespread forms of spontaneous humour BIBREF0 and have found their place in casual conversations, literature, online comments, tweets and advertisements BIBREF1 , BIBREF2 . Puns are a hugely versatile and commonly used literary device and it is essential to include them in any comprehensive approach to computational humour.", "id": 834, "question": "What are puns?", "title": "Automatic Target Recovery for Hindi-English Code Mixed Puns" }, { "answers": [ "" ], "context": "Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases.", "id": 835, "question": "What are the categories of code-mixed puns?", "title": "Automatic Target Recovery for Hindi-English Code Mixed Puns" }, { "answers": [ "" ], "context": "Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues.", "id": 836, "question": "How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?", "title": "CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues" }, { "answers": [ "" ], "context": "Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a new and novel approach.", "id": 837, "question": "What is meant by semiguided dialogue, what part of dialogue is guided?", "title": "CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues" }, { "answers": [ "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant." ], "context": "The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. 
Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk's, with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) the Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.", "id": 838, "question": "Is CRWIZ already used for data collection, what are the results?", "title": "CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues" }, { "answers": [ "" ], "context": "We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task:", "id": 839, "question": "How does the framework make sure that the dialogue will not breach procedures?", "title": "CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues" }, { "answers": [ "" ], "context": "Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts.", "id": 840, "question": "How do they combine the models?", "title": "Detecting Online Hate Speech Using Context Aware Models" }, { "answers": [ "" ], "context": "Recently, a few datasets with human labeled hate speech have been created; however, most existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates from bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. Therefore, the collected data instances are likely to be from distinct contexts.", "id": 841, "question": "What is their baseline?", "title": "Detecting Online Hate Speech Using Context Aware Models" }, { "answers": [ "" ], "context": "The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads on the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments and their nested structure and the original news article. The data corpus along with annotation guidelines is posted on GitHub.", "id": 842, "question": "What context do they use?", "title": "Detecting Online Hate Speech Using Context Aware Models" }, { "answers": [ "" ], "context": "Our annotation guidelines are similar to the guidelines used by BIBREF9 .
We define hateful speech to be language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful.", "id": 843, "question": "What is their definition of hate speech?", "title": "Detecting Online Hate Speech Using Context Aware Models" }, { "answers": [ "" ], "context": "We identified two native English speakers for annotating online user comments. The two annotators first discussed and practiced before they started annotation. They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussions in the training stage are the key to achieving this high inter-annotator agreement. For those comments which annotators disagreed on, we label them as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads.", "id": 844, "question": "What architecture does the neural network have?", "title": "Detecting Online Hate Speech Using Context Aware Models" }, { "answers": [ "" ], "context": "Collaborative human-machine story-writing has had a recent resurgence of attention from the research community BIBREF0 , BIBREF1 . It represents a frontier for AI research; as a research community we have developed convincing NLP systems for some generative tasks like machine translation, but lag behind in creative areas like open-domain storytelling. Collaborative open-domain storytelling incorporates human interactivity for one of two aims: to improve human creativity via the aid of a machine, or to improve machine quality via the aid of a human. Previously existing approaches treat the former aim, and have shown that storytelling systems are not yet developed enough to help human writers. We attempt the latter, with the goal of investigating at what stage human collaboration is most helpful.", "id": 845, "question": "How is human interaction consumed by the model?", "title": "Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation" }, { "answers": [ "" ], "context": "Figure FIGREF3 shows a diagram of the interaction system. The dotted arrows represent optional user interactions.", "id": 846, "question": "How do they evaluate generated stories?", "title": "Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation" }, { "answers": [ "" ], "context": "Figure FIGREF10 shows screenshots for both the cross-model and intra-model modes of interaction. Figure FIGREF10 shows that the cross-model mode makes clear the differences between different model generations for the same topic. Figure FIGREF10 shows the variety of interactions a user can take in intra-model interaction, and is annotated with an example-in-action. User-inserted text is underlined in blue; generated text that has been removed by the user is in grey strike-through. The refresh symbol marks areas that the user re-generated to get a different sentence (presumably after being unhappy with the first result).
As can be seen in this example, minor user involvement can result in a significantly better story.", "id": 847, "question": "Do they evaluate in other languages apart from English?", "title": "Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation" }, { "answers": [ "" ], "context": "All models for both the Storyline Planner and Story Writer modules are conditional language models implemented with LSTMs based on merity2018regularizing. These are 3-stacked LSTMs that include weight-dropping, weight-tying, variable-length backpropagation with learning rate adjustment, and Averaged Stochastic Gradient Descent (ASGD). They are trained on the ROC dataset BIBREF5 , which after lowercasing and tokenization has a vocabulary of 38k. Storyline Phrases are extracted as in yao2018plan via the RAKE algorithm BIBREF6 which results in a slightly smaller Storyline vocabulary of 31k. The Storyline Planner does decoding via sampling to encourage creative exploration. The Story Writer has an option to use one or all three systems, all of which decode via beamsearch and are detailed below.", "id": 848, "question": "What are the baselines?", "title": "Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation" }, { "answers": [ "" ], "context": "Indicators of Compromise (IOCs) are forensic artifacts that are used as signs when a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of some combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text, describing attack tactics, techniques and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . From the text, the token “INST.exe” is the name of an executable file of malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, these kinds of IOCs can then be utilized for early detection of future attack attempts by using intrusion detection systems and antivirus software, and thus, they play an important role in the field of cybersecurity. However, with the rapid evolution of cyber threats, the IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for humans to gather and manage them.", "id": 849, "question": "What is used as a baseline?", "title": "Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling" }, { "answers": [ "Words that can indicate the characteristics of neighboring words are regarded as contextual keywords, and features are generated from the automatically extracted contextual keywords." ], "context": "Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture.", "id": 850, "question": "What contextual features are used?", "title": "Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling" }, { "answers": [ "" ], "context": "The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder.", "id": 851, "question": "Where are the cybersecurity articles used in the model sourced from?", "title": "Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling" }, { "answers": [ "" ], "context": "The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .", "id": 852, "question": "What type of hand-crafted features are used in state of the art IOC detection systems?", "title": "Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling" }, { "answers": [ "" ], "context": "A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. So broadly defined, the task seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in the Natural Language Processing (NLP) community as it provides a way to employ numerous NLP tools in an exploitable end-user system. It has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.", "id": 853, "question": "Do they compare DeepER against other approaches?", "title": "Boosting Question Answering by Deep Entity Recognition" }, { "answers": [ "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner" ], "context": "As stated in the previous chapter, RAFAEL is a computer system solving the task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g.
a person name), supplied with information about supporting sentences and documents.", "id": 854, "question": "How is the data in RAFAEL labelled?", "title": "Boosting Question Answering by Deep Entity Recognition" }, { "answers": [ "" ], "context": "The problem of Question Answering is not new to the Polish NLP community (nor to those working on other morphologically rich languages), but none of the studies presented so far coincides with the notion of plain text-based QA presented above.", "id": 855, "question": "How do they handle polysemous words in their entity library?", "title": "Boosting Question Answering by Deep Entity Recognition" }, { "answers": [ "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word." ], "context": "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.", "id": 856, "question": "How is the fluctuation in the sense of the word and its neighbors measured?", "title": "Polysemy Detection in Distributed Representation of Word Sense" }, { "answers": [ "" ], "context": "Question answering (QA) is the task of retrieving answers to a question given one or more contexts. It has been explored both in the open-domain setting BIBREF0 as well as domain-specific settings, such as BioASQ for the biomedical domain BIBREF1 . The BioASQ challenge provides $\approx 900$ factoid and list questions, i.e., questions with one and several answers, respectively. This work focuses on answering these questions, for example: Which drugs are included in the FEC-75 regimen? $\rightarrow $ fluorouracil, epirubicin, and cyclophosphamide.", "id": 857, "question": "Among various transfer learning techniques, which technique yields the best performance?", "title": "Neural Domain Adaptation for Biomedical Question Answering" }, { "answers": [ "" ], "context": "Quickly making sense of large amounts of linguistic data is an important application of language technology. For example, after the 2011 Japanese tsunami, natural language processing was used to quickly filter social media streams for messages about the safety of individuals, and to populate a person finder database BIBREF0.
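The surrounding-uniformity test listed step by step in the answer above is concrete enough to sketch. The following is a minimal illustration, not the authors' implementation: the excerpt does not define "surrounding uniformity", so the mean-cosine stand-in below is an assumption, as are all function and variable names.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighbors(w, vecs, n):
    # the N words whose vectors form the smallest angle with (highest cosine to) w
    sims = {a: cosine(vecs[w], v) for a, v in vecs.items() if a != w}
    return sorted(sims, key=sims.get, reverse=True)[:n]

def uniformity(w, vecs, n):
    # stand-in for "surrounding uniformity" (not defined in this excerpt):
    # mean cosine similarity between w and its n nearest neighbors
    return float(np.mean([cosine(vecs[w], vecs[a]) for a in neighbors(w, vecs, n)]))

def is_polysemic(w, vecs, n=10):
    # flag w if its uniformity falls more than 3 sigma below its neighbors' mean
    us = np.array([uniformity(a, vecs, n) for a in neighbors(w, vecs, n)])
    m, sigma = us.mean(), us.std(ddof=1)  # the m and sigma of the test above
    return uniformity(w, vecs, n) < m - 3 * sigma
```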
Japanese text is high-resource, but there are many cases where it would be useful to make sense of speech in low-resource languages. For example, in Uganda, as in many parts of the world, the primary source of news is local radio stations, which broadcast in many languages. A pilot study from the United Nations Global Pulse Lab identified these radio stations as a potentially useful source of information about a variety of urgent topics related to refugees, small-scale disasters, disease outbreaks, and healthcare BIBREF1. With many radio broadcasts coming in simultaneously, even simple classification of speech for known topics would be helpful to decision-makers working on humanitarian projects.", "id": 858, "question": "What is the architecture of the model?", "title": "Classifying topics in speech when all you have is crummy translations." }, { "answers": [ "" ], "context": "We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.", "id": 859, "question": "What language do they look at?", "title": "Classifying topics in speech when all you have is crummy translations." }, { "answers": [ "" ], "context": "Neural machine translation (NMT) proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 has achieved significant progress in recent years. Unlike traditional statistical machine translation (SMT) BIBREF2 , BIBREF3 , BIBREF4 which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .", "id": 860, "question": "Where does the vocabulary come from?", "title": "Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT" }, { "answers": [ "" ], "context": "Our models are based on an encoder-decoder architecture with an attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both encoder and decoder as illustrated in Figure FIGREF1 . In this section, we review the NMT framework.", "id": 861, "question": "What is the worst performing translation granularity?", "title": "Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT" }, { "answers": [ "" ], "context": "We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.", "id": 862, "question": "What dataset did they use?", "title": "Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT" }, { "answers": [ "" ], "context": "The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL.
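The vocabulary construction just described (scan the corpus once, build a vocabulary of unique tokens, map each sentence to a sequence of integer indices) reduces to a few lines. This is a minimal sketch; the `<unk>` token and the frequency cut-off are assumptions, since the excerpt does not say how rare tokens are handled.

```python
from collections import Counter

def build_vocab(corpus, size=30000):
    # scan the whole corpus once and keep the most frequent unique tokens
    counts = Counter(tok for sent in corpus for tok in sent.split())
    itos = ["<unk>"] + [w for w, _ in counts.most_common(size - 1)]
    return {w: i for i, w in enumerate(itos)}

def encode(sentence, vocab):
    # a sentence becomes the sequence of integer indices fed to the model
    return [vocab.get(tok, vocab["<unk>"]) for tok in sentence.split()]

src_vocab = build_vocab(["ich gehe nach hause", "ich gehe"])
print(encode("ich gehe heute", src_vocab))  # unseen "heute" maps to <unk>
```

The same procedure is applied independently on the target side.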
In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over the Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only; it requires more user interaction than natural language QA, but also provides additional benefits like intuitive insights into dataset characteristics, or a broader understanding of the answer and the potential to further explore the answer context, and finally allows for knowledge sharing by storing and sharing resulting diagrams.", "id": 863, "question": "How do they measure performance?", "title": "A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data" }, { "answers": [ "" ], "context": "As introduced in BIBREF2 , we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as a knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.", "id": 864, "question": "Do they measure the performance of a combined approach?", "title": "A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data" }, { "answers": [ "" ], "context": "The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .", "id": 865, "question": "Which four QA systems do they use?", "title": "A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data" }, { "answers": [ "" ], "context": "Here we present the evaluation of DQA in comparison to four QALD systems.", "id": 866, "question": "How many iterations of visual search are done on average until an answer is found?", "title": "A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data" }, { "answers": [ "" ], "context": "As the evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer.
We follow the model of BIBREF14 who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for five basic question categories.", "id": 867, "question": "Do they test performance of their approaches using human judgements?", "title": "A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data" }, { "answers": [ "" ], "context": "Following previous research on automatic detection and correction of dt-mistakes in Dutch BIBREF0, this paper investigates another stumbling block for both native and non-native speakers of Dutch: the correct use of die and dat. The multiplicity of syntactic functions and the dependency on the antecedent's gender and number make this a challenging task for both humans and computers. The grammar concerning die and dat is threefold. Firstly, they can be used as dependent or independent demonstrative pronouns (aanwijzend voornaamwoord), with the former replacing the article before the noun it modifies and the latter being a noun phrase that refers to a preceding/following noun phrase or sentence. The choice between the two pronouns depends on the gender and number of the antecedent: dat refers to neuter, singular nouns and sentences, while die refers to masculine, singular nouns and plural nouns independent of their gender. Secondly, die and dat can be used as relative pronouns introducing relative clauses (betrekkelijk voornaamwoord), which provide additional information about the directly preceding antecedent it modifies. Similar rules as for demonstrative pronouns apply: masculine, singular nouns and plural nouns are followed by the relative pronoun die, neuter singular nouns by dat. Lastly, dat can be used as a subordinating conjunction (onderschikkend voegwoord) introducing a subordinating clause. A brief overview of the grammar is given in Table TABREF1.", "id": 868, "question": "What are the sizes of both datasets?", "title": "Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction" }, { "answers": [ "" ], "context": "As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).", "id": 869, "question": "Why are the scores for predicting perceived musical hardness and darkness extracted only for subsample of 503 songs?", "title": "'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs" }, { "answers": [ "" ], "context": "In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions.
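As a rough illustration of the LDA topic-assignment step just described, here is a sketch with scikit-learn; the library choice, the toy documents and the number of topics are assumptions for illustration, not the study's actual setup.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

lyrics = [
    "warriors ride into battle with fire and steel",
    "darkness falls over a silent frozen sea",
]  # stand-ins for the crawled song documents

# bag-of-words counts, then a topic model over the documents
bow = CountVectorizer(stop_words="english").fit_transform(lyrics)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(bow)  # row i holds P(topic | song i)
```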
The use of automatic models for the extraction of both textual and musical features allows for scalability, as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections.", "id": 870, "question": "How long is the model trained?", "title": "'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs" }, { "answers": [ "" ], "context": "For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.", "id": 871, "question": "What are lyrical topics present in the metal genre?", "title": "'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs" }, { "answers": [ "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25" ], "context": "Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, automatic doctor-patient interaction summaries can save doctors a massive amount of time spent filling in medical records. There is also a general demand for summarizing meetings in order to track project progress in industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization is a promising direction within the summarization field.", "id": 872, "question": "By how much does SPNet outperform state-of-the-art abstractive summarization methods on evaluation metrics?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on the Seq2Seq framework BIBREF8 and the attention mechanism BIBREF9, achieving state-of-the-art results on the Gigaword and DUC-2004 datasets. BIBREF10 proposed a copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of the extractive and abstractive approaches. BIBREF5 applied pointing BIBREF11 as the copy mechanism and used a coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.", "id": 873, "question": "What automatic and human evaluation metrics are used to compare SPNet to its counterparts?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose the Scaffold Pointer Network (SPNet), based on Pointer-Generator BIBREF5.
SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.", "id": 874, "question": "Is the proposed abstractive dialog summarization dataset open source?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "Not at the moment, but summaries can additionally be extended with these annotations." ], "context": "We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and the pointer network BIBREF11. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in BIBREF9:", "id": 875, "question": "Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating the semantic slot scaffold and the dialog domain scaffold.", "id": 876, "question": "How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:", "id": 877, "question": "What are previous state-of-the-art document summarization methods used?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities" ], "context": "We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with their semantic slot names (e.g. replacing 18:00 with [time]). It is easier for the language model to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling.
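The attention distribution described above for Pointer-Generator (additive attention in the style of the cited BIBREF9) can be sketched as follows; the weight names and shapes are assumptions for illustration, not the authors' code.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attention(h, s_t, W_h, W_s, b, v):
    # e_i = v . tanh(W_h h_i + W_s s_t + b);  a^t = softmax(e)
    # h: (T, d_h) encoder states; s_t: (d_s,) decoder state
    e = np.tanh(h @ W_h.T + s_t @ W_s.T + b) @ v
    a = softmax(e)      # attention distribution a^t over encoder steps
    context = a @ h     # context vector consumed by the decoder
    return a, context
```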
We fill the slots in the generated templates via the copy and pointing mechanism.", "id": 878, "question": "How does the new evaluation metric consider critical informative entities?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "We integrate the dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking a hotel, restaurant or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:", "id": 879, "question": "Is the new evaluation metric an extension of ROUGE?", "title": "Abstractive Dialog Summarization with Semantic Scaffolds" }, { "answers": [ "" ], "context": "Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence, especially in natural language processing. It is the ability to combine commonsense facts and logical rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human beings while challenging for machine reasoners.", "id": 880, "question": "What measures were used for human evaluation?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.", "id": 881, "question": "What automatic metrics are used for this task?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora. 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene.
3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show the dataset's inherent challenges at the end of this section.", "id": 882, "question": "Are the models required to also generate rationales?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability to generate natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely to co-occur in natural, daily-life scenes. The concepts in image/video captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.", "id": 883, "question": "Are the rationales generated after the sentences were written?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative of common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.", "id": 884, "question": "Are the sentences in the dataset written by humans who were shown the concept-sets?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models perform with unseen concepts.", "id": 885, "question": "Where do the concept sets come from?", "title": "CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning" }, { "answers": [ "" ], "context": "Building a system that comprehends text and answers questions is challenging but fascinating; such a system can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, and the format of the answer, to name a few.
We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2 and HotPotQA BIBREF3, and 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4 and MCTest BIBREF5.", "id": 886, "question": "How big are improvements of MMM over state of the art?", "title": "MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension" }, { "answers": [ "" ], "context": "In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. An MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$.", "id": 887, "question": "What out-of-domain datasets did the authors use for the coarse-tuning stage?", "title": "MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension" }, { "answers": [ "FTLM++, BERT-large, XLNet" ], "context": "Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with the highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoders, i.e., BERT and RoBERTa, as the sentence encoder. The top-level classifier will be detailed in the next subsection.", "id": 888, "question": "What are state of the art methods MMM is compared to?", "title": "MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension" }, { "answers": [ "" ], "context": "For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.", "id": 889, "question": "What four representative datasets are used for benchmark?", "title": "MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension" }, { "answers": [ "" ], "context": "Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public.
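The per-option scoring scheme described above (encode each passage-question-option concatenation, project to a single logit, then softmax over the $n$ options) reduces to the sketch below; `encode` and `classify` stand in for the transformer encoder and the top-level classifier, and are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def answer(encode, classify, passage, question, options):
    # one logit p_i per (P, Q, O_i) concatenation, then softmax over options
    p = np.array([classify(encode(passage + " " + question + " " + o))
                  for o in options])          # logit vector [p_1, ..., p_n]
    return int(np.argmax(softmax(p)))         # option with the highest logit

# training would apply cross-entropy between softmax(p) and the gold option
```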
The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.", "id": 890, "question": "What baselines did they consider?", "title": "Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks" }, { "answers": [ "Some sentences are associated with ambiguous dimensions in the hidden state output" ], "context": "The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.", "id": 891, "question": "What are the problems related to ambiguity in PICO sentence prediction tasks?", "title": "Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks" }, { "answers": [ "" ], "context": "Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representations of knowledge. Knowledge graphs, for example, consist of entities and relations between them BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Representation learning BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and reasoning BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 with such structured representations is an important and active area of research.", "id": 892, "question": "How is knowledge retrieved in the memory?", "title": "RelNet: End-to-End Modeling of Entities & Relations" }, { "answers": [ "entity memory and relational memory." ], "context": "We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read the text, process it into a dynamic relational memory, and then generate the answer via attention conditioned on the question. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory.", "id": 893, "question": "How is knowledge stored in the memory?", "title": "RelNet: End-to-End Modeling of Entities & Relations" }, { "answers": [ "" ], "context": "There is a long line of work in textual question-answering systems BIBREF21 , BIBREF22 . Recent successful approaches use memory-based neural networks for question answering, for example BIBREF23 , BIBREF18 , BIBREF24 , BIBREF19 , BIBREF17 . Our model is also a memory-network-based model and is also related to the Neural Turing Machine BIBREF25 . As described previously, the model is closely related to the Recurrent Entity Networks model BIBREF17 which describes an end-to-end approach to model entities in text but does not directly model relations.
Other approaches to question answering use external knowledge, for instance external knowledge bases BIBREF26 , BIBREF11 , BIBREF27 , BIBREF28 , BIBREF9 or external text like Wikipedia BIBREF29 , BIBREF30 .", "id": 894, "question": "What are the relative improvements observed over existing methods?", "title": "RelNet: End-to-End Modeling of Entities & Relations" }, { "answers": [ "" ], "context": "We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.", "id": 895, "question": "What is the architecture of the neural network?", "title": "RelNet: End-to-End Modeling of Entities & Relations" }, { "answers": [ "" ], "context": "We demonstrated an end-to-end trained neural network augmented with a structured memory representation which can reason about entities and relations for question answering. Future work will investigate the performance of these models on more real-world datasets, interpreting what the models learn, and scaling these models to answer questions about entities and relations from reading massive text corpora.", "id": 896, "question": "What methods is RelNet compared to?", "title": "RelNet: End-to-End Modeling of Entities & Relations" }, { "answers": [ "by number of distinct n-grams" ], "context": "Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly because most of them are trained on task-specific datasets or objectives, which results in models that are adept at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.", "id": 897, "question": "How do they measure the diversity of inferences?", "title": "Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder" }, { "answers": [ "On Event2Mind, the accuracy of the proposed method is improved by an absolute BLEU of 2.9, 10.87 and 1.79 for xIntent, xReact and oReact respectively.\nOn the Atomic dataset, the accuracy of the proposed method is improved by an absolute BLEU of 3.95, 4.11 and 4.49 for xIntent, xReact and oReact respectively." ], "context": "Before specifically describing the two datasets used in this paper, Event2Mind and Atomic, as well as the If-Then reasoning task, for clarity we define the following terminologies:", "id": 898, "question": "By how much do they improve the accuracy of inferences over state-of-the-art methods?", "title": "Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder" }, { "answers": [ "" ], "context": "Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. In this paper, however, we model If-Then reasoning as a [(background), event]-target process.
It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate reasonable targets.", "id": 899, "question": "Which models do they use as baselines on the Atomic dataset?", "title": "Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder" }, { "answers": [ "CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target." ], "context": "As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets.", "id": 900, "question": "How does the context-aware variational autoencoder learn event background information?", "title": "Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder" }, { "answers": [ "" ], "context": "With the incorporation of $z_{c^{\prime }}$, the original log-likelihood could be decomposed as:", "id": 901, "question": "What is the size of the Atomic dataset?", "title": "Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder" }, { "answers": [ "" ], "context": "Automatic speech recognition (ASR) systems have seen remarkable advances over the last half-decade from the use of deep, convolutional and recurrent neural network architectures, enabled by a combination of modeling advances, available training data, and increased computational resources. Given these advances, our research group recently embarked on an effort to reach human-level transcription accuracy using state-of-the-art ASR techniques on one of the genres of speech that has historically served as a difficult benchmark task: conversational telephone speech (CTS). About a decade ago, CTS recognition had served as an evaluation task for government-sponsored work in speech recognition, predating the take-over of deep learning approaches and still largely in the GMM-HMM modeling framework BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . It had proven to be a hard problem, due to the variable nature of conversational pronunciations, speaking styles, and regional accents. Seide et al. BIBREF6 demonstrated that deep networks as acoustic models could achieve significant improvements over GMM-HMM models on CTS data, and more recently researchers at IBM had achieved results on this task that represented a further significant advance BIBREF7 , BIBREF8 over those from a decade ago.", "id": 902, "question": "what standard speech transcription pipeline was used?", "title": "Comparing Human and Machine Errors in Conversational Speech Transcription" }, { "answers": [ "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE."
], "context": "One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .", "id": 903, "question": "How much improvement does their method get over the fine tuning baseline?", "title": "An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation" }, { "answers": [ "" ], "context": "Besides fine tuning and multi domian NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural language (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 have been studied.", "id": 904, "question": "What kinds of neural networks did they use in this paper?", "title": "An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation" }, { "answers": [ "" ], "context": "All the methods that we compare are simple and do not need any modifications to the NMT system.", "id": 905, "question": "How did they use the domain tags?", "title": "An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation" }, { "answers": [ "" ], "context": "The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.", "id": 906, "question": "Why mixed initiative multi-turn dialogs are the greatest challenge in building open-domain conversational agents?", "title": "Combining Search with Structured Data to Create a More Engaging User Experience in Open Domain Dialogue" }, { "answers": [ "" ], "context": "As the reliance on social media as a source of news increases and the reliability of sources is increasingly debated, it is important to understand how users react to various sources of news. Most studies that investigate misinformation spread in social media focus on individual events and the role of the network structure in the spread BIBREF0 , BIBREF1 , BIBREF2 or detection of false information BIBREF3 . These studies have found that the size and shape of misinformation cascades within a social network depends heavily on the initial reactions of the users. 
Other work has focused on the language of misinformation in social media BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 to detect types of deceptive news.", "id": 907, "question": "How is speed measured?", "title": "Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources" }, { "answers": [ "" ], "context": "In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models.", "id": 908, "question": "What is the architecture of their model?", "title": "Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources" }, { "answers": [ "" ], "context": "We use a manually annotated Reddit dataset from Zhang et al. zhang2017characterizing to train our reaction classification model. Annotations from 25 crowd-workers labelled the primary discourse act for 101,525 comments within 9,131 comment threads on Reddit. The Reddit IDs, but not the text content of the comments themselves, were released with the annotations. So we collected the content from a public archive of Reddit posts and comments. Some content was deleted prior to archival, so the dataset shown in Table TABREF3 is a subset of the original content. Despite the inability to capture all of the original dataset, Table TABREF3 shows a similar distribution between our dataset and the original.", "id": 909, "question": "What are the nine types?", "title": "Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources" }, { "answers": [ "" ], "context": "Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to treat each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 .
A second motivation is that considering whole words at once allows us to use a more flexible set of features and reason over longer time spans.", "id": 910, "question": "How do they represent input features of their model to train embeddings?", "title": "Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches" }, { "answers": [ "" ], "context": "We next briefly describe the most closely related prior work.", "id": 911, "question": "Which dimensionality do they use for their embeddings?", "title": "Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches" }, { "answers": [ "" ], "context": "", "id": 912, "question": "Which dataset do they use?", "title": "Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches" }, { "answers": [ "Their best average precision tops the previous best result by 0.202" ], "context": "We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross-entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.", "id": 913, "question": "By how much do they outperform previous results on the word discrimination task?", "title": "Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches" }, { "answers": [ "" ], "context": "“Meaning is, therefore, something that words have in sentences; and it's something that sentences have in a language.” BIBREF0 On the other hand, meaning could also be something that words have on their own, with sentences being compositions and language a collection of words. This is the question of semantic holism versus atomism, which was important in the philosophy of language in the second half of the 20th century and has not been satisfactorily answered yet.", "id": 914, "question": "How does Frege's holistic and functional approach to meaning relate to the general distributional hypothesis?", "title": "Semantic Holism and Word Representations in Artificial Neural Networks" }, { "answers": [ "" ], "context": "We have found only one work concerning the philosophical aspects of neural language models BIBREF2.
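The classifier-based training approach described above (an RNN over acoustic frames with a log-softmax output over word labels, trained with a cross-entropy objective) might look like the following PyTorch sketch; the single-layer LSTM and the dimension names are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class WordClassifierEmbedder(nn.Module):
    # LSTM over acoustic frames; final log-softmax over the word vocabulary.
    # The hidden state feeding the output layer serves as the word embedding.
    def __init__(self, feat_dim, hidden_dim, num_words):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_words)

    def forward(self, frames):               # frames: (batch, time, feat_dim)
        _, (h, _) = self.rnn(frames)
        emb = h[-1]                           # the acoustic word embedding
        return emb, torch.log_softmax(self.out(emb), dim=-1)

# training pairs the log-probabilities with word labels via nn.NLLLoss()
```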
That work, however, concentrates on Self-Organizing Maps and Quine's version of semantic holism.", "id": 915, "question": "What does Frege's holistic and functional approach to meaning state?", "title": "Semantic Holism and Word Representations in Artificial Neural Networks" }, { "answers": [ "" ], "context": "Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, the mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer that people in Czech Republic refers to Czech Republic, that What refers to the language, and that speak refers to the predicate official language.", "id": 916, "question": "Do they evaluate the quality of the paraphrasing model?", "title": "Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing" }, { "answers": [ "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans" ], "context": "Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .", "id": 917, "question": "How many paraphrases are generated per question?", "title": "Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing" }, { "answers": [ "" ], "context": "We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.", "id": 918, "question": "What latent variables are modeled in the PCFG?", "title": "Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing" }, { "answers": [ "" ], "context": "As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.", "id": 919, "question": "What are the baselines?", "title": "Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing" }, { "answers": [ "" ], "context": "The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered overweight and over 600 million adults considered obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent, affecting 25 percent of U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool-aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 .
Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .", "id": 920, "question": "Do they evaluate only on English data?", "title": "Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter" }, { "answers": [ "weak correlation with p-value of 0.08" ], "context": "Our approach uses semantic and linguistic analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis.", "id": 921, "question": "How strong was the correlation between exercise and diabetes?", "title": "Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter" }, { "answers": [ "using topic modeling model Latent Dirichlet Allocation (LDA)" ], "context": "This phase collected tweets using Twitter's Application Programming Interface (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's API provides both historic and real-time data collection. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research.", "id": 922, "question": "How were topics of interest about DDEO identified?", "title": "Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter" }, { "answers": [ "" ], "context": "Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear effects (e.g. using log). Numeric variables that are not “quantities\" per se, such as age or even geographic coordinates, tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied\". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.", "id": 923, "question": "What datasets are used for evaluation?", "title": "Rethinking travel behavior modeling representations through embeddings" }, { "answers": [ "The embeddings are learned several times using the training set, then the average is taken." ], "context": "We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge.
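Before the treatments enumerated next, the dummy-variable blowup described in records 923–926 is easy to see in a few lines. The trip-purpose column, category set, and 3-dimensional embedding below are invented for illustration; this is a sketch of the general idea, not the paper's code:

```python
import numpy as np

# Hypothetical nominal variable from a travel survey: trip purpose.
purposes = ["work", "school", "shopping", "leisure", "work", "leisure"]
categories = sorted(set(purposes))
index = {c: i for i, c in enumerate(categories)}

# Conventional treatment: one dummy (one-hot) column per category,
# so the width grows with the number of categories.
one_hot = np.zeros((len(purposes), len(categories)))
one_hot[np.arange(len(purposes)), [index[p] for p in purposes]] = 1.0

# Embedding alternative: each category maps to a small dense vector
# (random here; in practice learned jointly with the choice model).
emb_dim = 3
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(categories), emb_dim))
dense = embedding_table[[index[p] for p in purposes]]

print(one_hot.shape)  # (6, 4)
print(dense.shape)    # (6, 3), fixed width regardless of category count
```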
We tend to enforce a limited set of treatments such as:", "id": 924, "question": "How do they train their embeddings?", "title": "Rethinking travel behavior modeling representations through embeddings" }, { "answers": [ "The data from collected travel surveys is used to model travel behavior." ], "context": "The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, and therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector quickly becomes overwhelming. Think, for example, of a next-word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300,000 words, and if we have about 5 words before for context, the number of independent variables of the model would become 1.5 million!", "id": 925, "question": "How do they model travel behavior?", "title": "Rethinking travel behavior modeling representations through embeddings" }, { "answers": [ "The coefficients are projected back to the dummy variable space." ], "context": "Differently from textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encodings, due to segmentation, such as age, income, or even origin/destination pair.", "id": 926, "question": "How do they interpret the coefficients?", "title": "Rethinking travel behavior modeling representations through embeddings" }, { "answers": [ "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)" ], "context": "Globally, human trafficking is one of the fastest-growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S.
alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.", "id": 927, "question": "By how much do they outperform previous state-of-the-art models?", "title": "Sex Trafficking Detection with Ordinal Regression Neural Networks" }, { "answers": [ "" ], "context": "Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex, which focuses on search functionalities in the dark web; Spotlight, which flags suspicious ads and links images appearing in multiple ads; Traffic Jam, which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam, which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces the LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads; however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon.", "id": 928, "question": "Do they use pretrained word embeddings?", "title": "Sex Trafficking Detection with Ordinal Regression Neural Networks" }, { "answers": [ "" ], "context": "Our proposed ordinal regression model consists of the following three components: word embeddings pre-trained by a skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.", "id": 929, "question": "How is the lexicon of trafficking flags expanded?", "title": "Sex Trafficking Detection with Ordinal Regression Neural Networks" }, { "answers": [ "" ], "context": "In contrast to traditional content distribution channels like television, radio, and newspapers, the Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although their Internet use is harmless most of the time, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit).
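Stepping back to record 929: one standard way to realize a "multi-labeled logistic regression layer tailored for ordinal regression" is to predict K−1 cumulative binary targets P(y > k). The encoding below is a common construction and an assumption on my part, not necessarily the paper's exact formulation:

```python
import numpy as np

def ordinal_targets(y, num_classes):
    # Encode an ordinal label y in {0..K-1} as K-1 cumulative binary
    # targets t_k = 1 iff y > k; e.g., K=4, y=2 -> [1, 1, 0].
    return (y[:, None] > np.arange(num_classes - 1)[None, :]).astype(float)

def predict_ordinal(probs):
    # probs[:, k] estimates P(y > k); a consistent label is the number
    # of thresholds the example is predicted to exceed.
    return (probs > 0.5).sum(axis=1)

y = np.array([0, 1, 2, 3])
targets = ordinal_targets(y, num_classes=4)
print(targets)
print(predict_ordinal(targets))  # recovers [0 1 2 3]
```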
The anonymity and freedom provided by social networks make them vulnerable to threatening situations on the Web, such as trolling.", "id": 930, "question": "Do they experiment with the dataset?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "In this section, we discuss related work in the areas of trolling, bullying, abusive language detection, and politeness, as they intersect in their scope and at least partially address the problem presented in this work.", "id": 931, "question": "Do they use a crowdsourcing platform for annotation?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet-mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.", "id": 932, "question": "What is an example of a difficult-to-classify case?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment in each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.", "id": 933, "question": "What potential solutions are suggested?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates for real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt.
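The fuzzy matching just described (the word “troll” within edit distance 1) can be approximated in plain Python; the authors used a Lucene index, so this token-level scan is only an illustrative stand-in:

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic program, one row at a time.
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return prev[-1]

def mentions_troll(comment, max_dist=1):
    # Flag a comment if any token is within edit distance 1 of "troll".
    return any(edit_distance(tok, "troll") <= max_dist
               for tok in comment.lower().split())

print(mentions_troll("stop feeding the trol"))  # True: "trol" is distance 1
print(mentions_troll("nice patrol route"))      # False
```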
Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.", "id": 934, "question": "What is the size of the dataset?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.", "id": 935, "question": "What Reddit communities do they look at?", "title": "Modeling Trolling in Social Media Conversations" }, { "answers": [ "" ], "context": "Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.", "id": 936, "question": "How strong is negative correlation between compound divergence and accuracy in performed experiment?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "", "id": 937, "question": "What are results of comparison between novel method to other approaches for creating compositional generalization benchmarks?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:", "id": 938, "question": "How authors justify that question answering dataset presented is realistic?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. 
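Records 936–938 describe splits that maximize compound divergence while keeping atom divergence low. As a rough, hedged illustration of the measurement itself: the published DBCA method compares normalized compound-frequency distributions with a Chernoff-coefficient-style similarity; the α value and the toy compounds below are assumptions of this sketch, not values taken from the records above:

```python
from collections import Counter

def chernoff_coefficient(p, q, alpha):
    # Similarity between two discrete distributions given as dicts.
    return sum((p.get(k, 0.0) ** alpha) * (q.get(k, 0.0) ** (1 - alpha))
               for k in set(p) | set(q))

def divergence(train_items, test_items, alpha):
    p_counts, q_counts = Counter(train_items), Counter(test_items)
    p_total, q_total = sum(p_counts.values()), sum(q_counts.values())
    p = {k: v / p_total for k, v in p_counts.items()}
    q = {k: v / q_total for k, v in q_counts.items()}
    return 1.0 - chernoff_coefficient(p, q, alpha)

train_compounds = ["A>B", "A>B", "B>C", "C>D"]
test_compounds = ["A>B", "D>E", "D>E", "E>F"]
print(round(divergence(train_compounds, test_compounds, alpha=0.1), 3))
```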
This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.", "id": 939, "question": "What three machine architectures are analyzed?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "BIBREF13 describe a number of benefits of automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.", "id": 940, "question": "How big is new question answering dataset?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).", "id": 941, "question": "What are other approaches into creating compositional generalization benchmarks?", "title": "Measuring Compositional Generalization: A Comprehensive Method on Realistic Data" }, { "answers": [ "" ], "context": "Continuous Speech Keyword Spotting (CSKS) aims to detect embedded keywords in audio recordings. These spotted keyword frequencies can then be used to analyze the theme of communication, creating temporal visualizations and word clouds BIBREF0 . Another use case is to detect domain-specific keywords which ASR (Automatic Speech Recognition) systems trained on public data cannot detect. For example, to detect a TV model number “W884” being mentioned in a recording, we might not have a large number of training sentences containing the model number of a newly launched TV to finetune a speech recognition (ASR) algorithm. A trained CSKS algorithm can be used to quickly extract all instances of such keywords.", "id": 942, "question": "What problem do they apply transfer learning to?", "title": "Prototypical Metric Transfer Learning for Continuous Speech Keyword Spotting With Limited Training Data" }, { "answers": [ "" ], "context": "In the past, Hidden Markov Models (HMMs) BIBREF6 , BIBREF7 , BIBREF8 have been used to solve the CSKS problem. But since HMM techniques rely on the computationally expensive Viterbi algorithm, a faster approach is required.", "id": 943, "question": "What are the baselines?", "title": "Prototypical Metric Transfer Learning for Continuous Speech Keyword Spotting With Limited Training Data" }, { "answers": [ "" ], "context": "Our learning data, which was created in-house, has 20 keywords to be spotted about television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of the learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. The train/validation split was done at the speaker level, so as to make sure that all recordings of a particular speaker are present in only one of the two sets.
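A speaker-level 80:20 split like the one just described can be reproduced with scikit-learn's GroupShuffleSplit; the utterance and speaker arrays below are toy stand-ins for the in-house data:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy stand-in: 12 utterances from 4 speakers (3 recordings each).
utterances = np.arange(12)
speakers = np.repeat(["spk1", "spk2", "spk3", "spk4"], 3)

# One split whose groups (speakers) never straddle the boundary.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, dev_idx = next(splitter.split(utterances, groups=speakers))

assert set(speakers[train_idx]).isdisjoint(speakers[dev_idx])
print("train speakers:", sorted(set(speakers[train_idx])))
print("dev speakers:  ", sorted(set(speakers[dev_idx])))
```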
For testing, we used 10 different five-minute-long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and feature different languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in the learning data is supposed to detect keywords embedded in the conversations of the test set.", "id": 944, "question": "What languages are considered?", "title": "Prototypical Metric Transfer Learning for Continuous Speech Keyword Spotting With Limited Training Data" }, { "answers": [ "" ], "context": "Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).", "id": 945, "question": "Does this model train faster than state of the art models?", "title": "FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow" }, { "answers": [ "Difference is around 1 BLEU score lower on average than state of the art methods." ], "context": "As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.", "id": 946, "question": "What is the performance difference between proposed method and state-of-the-arts on these datasets?", "title": "FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow" }, { "answers": [ "" ], "context": "Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.", "id": 947, "question": "What non autoregressive NMT models are used for comparison?", "title": "FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow" }, { "answers": [ "" ], "context": "In the context of maximum likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:", "id": 948, "question": "What are three neural machine translation (NMT) benchmark datasets used for evaluation?", "title": "FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow" }, { "answers": [ "" ], "context": "A number of works have explored integrating the visual modality into Neural Machine Translation (NMT) models, though there have been relatively modest gains or no gains at all from incorporating the visual modality in the translation pipeline BIBREF0.
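To ground the flow description in records 946–948: a single invertible affine map already exercises the change-of-variables bookkeeping that a chain of such transforms composes. This is a generic one-dimensional sketch, not FlowSeq itself:

```python
import numpy as np
from scipy.stats import norm

# One invertible transform z = f(eps) = a * eps + b over a standard
# Gaussian base; log p_Z(z) = log p_E(f^{-1}(z)) - log|df/deps|.
a, b = 2.0, -1.0

def log_prob_base(eps):
    return -0.5 * (eps ** 2 + np.log(2 * np.pi))

def log_prob_flow(z):
    eps = (z - b) / a                            # invert the transform
    return log_prob_base(eps) - np.log(abs(a))   # Jacobian correction

# Sanity check against the exact N(b, a^2) density.
z = np.array([-2.0, 0.0, 1.5])
print(np.allclose(log_prob_flow(z), norm(loc=b, scale=abs(a)).logpdf(z)))
```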
In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality.", "id": 949, "question": "What is result of their attention distribution analysis?", "title": "On Leveraging the Visual Modality for Neural Machine Translation" }, { "answers": [ "" ], "context": "In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model.", "id": 950, "question": "What is result of their Principal Component Analysis?", "title": "On Leveraging the Visual Modality for Neural Machine Translation" }, { "answers": [ "" ], "context": "Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.", "id": 951, "question": "What are 3 novel fusion techniques that are proposed?", "title": "On Leveraging the Visual Modality for Neural Machine Translation" }, { "answers": [ "" ], "context": "NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators.", "id": 952, "question": "What are two models' architectures in proposed solution?", "title": "Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games" }, { "answers": [ "" ], "context": "Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \\rightarrow e_{1,2} \\rightarrow p_2 \\rightarrow e_{2,3} \\rightarrow \\cdots \\rightarrow e_{n-1,n} \\rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities.", "id": 953, "question": "How do two models cooperate to select the most confident chains?", "title": "Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games" }, { "answers": [ "" ], "context": "The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). 
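A minimal PyTorch sketch of the step-wise decoder fusion in record 951: the previous-token embedding is concatenated with a fixed visual encoding at every decoding step. All dimensions and the module layout are invented for illustration:

```python
import torch
import torch.nn as nn

class StepwiseFusionDecoder(nn.Module):
    # LSTM decoder whose input at every step is the previous token
    # embedding concatenated with the (time-constant) visual encoding.
    def __init__(self, vocab, emb_dim, vis_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)
        self.rnn = nn.LSTM(emb_dim + vis_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab)

    def forward(self, prev_tokens, visual):
        # prev_tokens: (batch, T); visual: (batch, vis_dim)
        emb = self.embed(prev_tokens)
        vis = visual.unsqueeze(1).expand(-1, emb.size(1), -1)
        hidden, _ = self.rnn(torch.cat([emb, vis], dim=-1))
        return self.out(hidden)  # (batch, T, vocab)

dec = StepwiseFusionDecoder(vocab=100, emb_dim=16, vis_dim=8, hid_dim=32)
logits = dec(torch.randint(0, 100, (2, 5)), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 5, 100])
```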
We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker.", "id": 954, "question": "How many hand-labeled reasoning chains have been created?", "title": "Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games" }, { "answers": [ "Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples" ], "context": "The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.", "id": 955, "question": "What benchmarks are created?", "title": "Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games" }, { "answers": [ "" ], "context": "Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation.", "id": 956, "question": "What empricial investigations do they reference?", "title": "A Set of Recommendations for Assessing Human-Machine Parity in Language Translation" }, { "answers": [ "" ], "context": "We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators.", "id": 957, "question": "What languages do they investigate for machine translation?", "title": "A Set of Recommendations for Assessing Human-Machine Parity in Language Translation" }, { "answers": [ "" ], "context": "The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans.", "id": 958, "question": "What recommendations do they offer?", "title": "A Set of Recommendations for Assessing Human-Machine Parity in Language Translation" }, { "answers": [ "36%" ], "context": "BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. 
However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation.", "id": 959, "question": "What percentage fewer errors did professional translations make?", "title": "A Set of Recommendations for Assessing Human-Machine Parity in Language Translation" }, { "answers": [ "MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, used a test set not originally written in Chinese\n" ], "context": "The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.", "id": 960, "question": "What was the weakness in Hassan et al's evaluation design?", "title": "A Set of Recommendations for Assessing Human-Machine Parity in Language Translation" }, { "answers": [ "" ], "context": "Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph-based sentence centrality BIBREF0, AMR parses BIBREF1, discourse-based compression and anaphora constraints BIBREF2. On the other hand, state-of-the-art neural approaches to single-document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures.", "id": 961, "question": "By how much they improve over the previous state-of-the-art?", "title": "StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization" }, { "answers": [ "" ], "context": "Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states.
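The soft attention distribution just mentioned can be sketched with an additive (Bahdanau-style) score; the weight shapes here are generic assumptions rather than StructSum's exact parameterization:

```python
import numpy as np

def soft_attention(dec_state, enc_states, W_d, W_e, v):
    # Score each encoder state against the decoder state, softmax the
    # scores into p(a_t | x, y_{1:t-1}), and form the context vector.
    scores = v @ np.tanh((W_d @ dec_state)[:, None] + W_e @ enc_states.T)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ enc_states

rng = np.random.default_rng(0)
T_src, d_enc, d_dec, d_att = 6, 8, 8, 5
enc = rng.normal(size=(T_src, d_enc))
dec = rng.normal(size=d_dec)
w, context = soft_attention(dec, enc,
                            rng.normal(size=(d_att, d_dec)),
                            rng.normal(size=(d_att, d_enc)),
                            rng.normal(size=d_att))
print(w.round(3), w.sum())  # a proper distribution over source states
```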
A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using the negative log-likelihood loss: $\text{loss}_t = -\log p(y_t)$. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer-based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail.", "id": 962, "question": "Is there any evidence that encoders with latent structures work well on other tasks?", "title": "StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization" }, { "answers": [ "" ], "context": "Transformer-based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15.", "id": 963, "question": "Do they report results only on English?", "title": "Effective Use of Transformer Networks for Entity Tracking" }, { "answers": [ "Using model gradients with respect to input features, they showed that the most important model inputs are verbs associated with entities, which indicates that the model attends to shallow context clues" ], "context": "Procedural text is a domain of text concerned with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task.
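The linear interpolation performed by the pointer-generator network mentioned above mixes a vocabulary distribution with copy probabilities taken from the attention weights: P(w) = p_gen · P_vocab(w) + (1 − p_gen) · Σ_{j: x_j = w} a_j. A numpy sketch of just that mixing step, with toy numbers:

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, src_ids, p_gen):
    # Soft switch between generating from the vocabulary and copying
    # a source token via its attention weight.
    p_final = p_gen * p_vocab
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attention)
    return p_final

p_vocab = np.full(10, 0.1)              # uniform vocab dist for the sketch
attention = np.array([0.7, 0.2, 0.1])   # over 3 source positions
src_ids = np.array([4, 4, 7])           # vocab ids of the source tokens
dist = pointer_generator_dist(p_vocab, attention, src_ids, p_gen=0.6)
print(dist.round(3), dist.sum())        # still sums to 1.0
```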
Entity tracking is a core component of understanding such texts.", "id": 964, "question": "What evidence do they present that the model attends to shallow context clues?", "title": "Effective Use of Transformer Networks for Entity Tracking" }, { "answers": [ "In four entity-centric ways - entity-first, entity-last, document-level and sentence-level" ], "context": "The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage.", "id": 965, "question": "In what way is the input restructured?", "title": "Effective Use of Transformer Networks for Entity Tracking" }, { "answers": [ "" ], "context": "The increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goals for many applications, due to the uniqueness and the variety of this data BIBREF0 . However, the highly informal and noisy status of these sources makes it difficult to apply techniques proposed by the NLP community for dealing with formal and structured content BIBREF1 .", "id": 966, "question": "What are their results on the entity recognition task?", "title": "Recognizing Musical Entities in User-generated Content" }, { "answers": [ "" ], "context": "Named Entity Recognition (NER), or alternatively Named Entity Recognition and Classification (NERC), is the task of detecting entities in an input text and assigning them to a specific class. It was first defined in the early '80s, and over the years several approaches have been proposed BIBREF2 . Early systems were based on handcrafted rule-based algorithms, while recently several contributions by Machine Learning scientists have helped in integrating probabilistic models into NER systems.", "id": 967, "question": "What task-specific features are used?", "title": "Recognizing Musical Entities in User-generated Content" }, { "answers": [ "" ], "context": "We propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor: a person who is related to a musical work (composer, performer, conductor, etc). Musical Work: a musical composition or recording (symphony, concerto, overture, etc).", "id": 968, "question": "What kind of corpus-based features are taken into account?", "title": "Recognizing Musical Entities in User-generated Content" }, { "answers": [ "" ], "context": "In May 2018, we crawled Twitter using the Python library Tweepy, creating two datasets on which Contributor and Musical Work entities have been manually annotated, using IOB tags.", "id": 969, "question": "Which machine learning algorithms did they explore?", "title": "Recognizing Musical Entities in User-generated Content" }, { "answers": [ "English" ], "context": "According to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detect musical entities in user-generated content.
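Record 969 mentions IOB-tagged Contributor and Musical Work entities; turning span annotations into IOB tags looks like the sketch below (the label names and example tweet are invented):

```python
def to_iob(tokens, spans):
    # spans: (start, end, label) token offsets, end exclusive.
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label
        for i in range(start + 1, end):
            tags[i] = "I-" + label
    return tags

tokens = "listening to symphony no 5 by beethoven".split()
spans = [(2, 5, "WORK"), (6, 7, "CONTRIBUTOR")]
print(list(zip(tokens, to_iob(tokens, spans))))
```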
Consequently, our first objective has been to understand how to adapt existing systems for achieving significant results in this task.", "id": 970, "question": "What language is the Twitter content in?", "title": "Recognizing Musical Entities in User-generated Content" }, { "answers": [ "" ], "context": "One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability for determining which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories.", "id": 971, "question": "What is the architecture of the siamese neural network?", "title": "MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge" }, { "answers": [ "" ], "context": "For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt to or compensate for the channel (recording) domain mismatch between the train and test sets.", "id": 972, "question": "How do they explore domain mismatch?", "title": "MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge" }, { "answers": [ "" ], "context": "The MGB-3 ADI task asks participants to classify speech as one of five dialects, by specifying one dialect for each audio file in their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall for the five dialects.", "id": 973, "question": "How do they explore dialect variability?", "title": "MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge" }, { "answers": [ "" ], "context": "The challenge organizers provided features and code for a baseline ADI system. The features consisted of 400-dimensional i-vector features for each audio file (based on bottleneck feature inputs for their frame-level acoustic representation), as well as lexical features using bigrams generated from transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used.
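The baseline just described, a multi-class SVM over 400-dimensional i-vectors, reduces to a few lines of scikit-learn. The random features and labels below are stand-ins for the actual challenge i-vectors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dialects = ["EGY", "LEV", "GLF", "NOR", "MSA"]
X_train = rng.normal(size=(500, 400))          # stand-in i-vectors
y_train = rng.choice(dialects, size=500)       # stand-in dialect labels

clf = SVC(kernel="linear", decision_function_shape="ovr")
clf.fit(X_train, y_train)
print(clf.predict(rng.normal(size=(3, 400))))  # one dialect per file
```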
The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall respectively. Lexical features achieved 48.4%, 51.0%, and 49.3%, respectively. While the audio-based features achieved better performance than the lexical features, both systems only obtained approximately 50% accuracy, indicating that this ADI task is difficult, considering that there are only five classes to choose from.", "id": 974, "question": "Which are the four Arabic dialects?", "title": "MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge" }, { "answers": [ "" ], "context": "Bias is generally considered to be a negative term: a biased story is seen as one that perverts or subverts the truth by offering a partial or incomplete perspective on the facts. But bias is in fact essential to understanding: one cannot interpret a set of facts—something humans are disposed to try to do even in the presence of data that is nothing but noise [38]—without relying on a bias or hypothesis to guide that interpretation. Suppose someone presents you with the sequence INLINEFORM0 and tells you to guess the next number. To make an educated guess, you must understand this sequence as instantiating a particular pattern; otherwise, every possible continuation of the sequence will be equally probable for you. Formulating a hypothesis about what pattern is at work will allow you to predict how the sequence will play out, putting you in a position to make a reasonable guess as to what comes after 3. Formulating the hypothesis that this sequence is structured by the Fibonacci function (even if you don't know its name), for example, will lead you to guess that the next number is 5; formulating the hypothesis that the sequence is structured by the successor function but that every odd successor is repeated once will lead you to guess that it is 3. Detecting a certain pattern allows you to determine what we will call a history: a set of given entities or eventualities and a set of relations linking those entities together. The sequence of numbers INLINEFORM1 and the set of relation instances that the Fibonacci sequence entails as holding between them is one example of a history. Bias, then, is the set of features, constraints, and assumptions that lead an interpreter to select one history—one way of stitching together a set of observed data—over another.", "id": 975, "question": "What factors contribute to interpretive biases according to this research?", "title": "Bias in Semantic and Discourse Interpretation" }, { "answers": [ "" ], "context": "In this paper, we propose a program for research on bias. We will show how to model various types of bias as well as the way in which bias leads to the selection of a history for a set of data, where the data might be a set of nonlinguistic entities or a set of linguistically expressed contents. In particular, we'll look at what people call “unbiased” histories. For us these also involve a bias, what we call a “truth seeking bias”. This is a bias that gets at the truth or acceptably close to it. Our model can show us what such a bias looks like. And we will examine the question of whether it is possible to find such a truth oriented bias for a set of facts, and if so, under what conditions. 
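The two competing biases for the sequence 1, 1, 2, 3 discussed in record 975 can be made executable: each hypothesis is just a different generator, and they disagree on the continuation. A small sketch of that point:

```python
def fibonacci_next(seq):
    # Hypothesis 1: each term is the sum of the previous two.
    return seq[-1] + seq[-2]

def repeated_odd_successor_next(seq):
    # Hypothesis 2: successor function, but every odd value is repeated
    # once, generating 1, 1, 2, 3, 3, 4, 5, 5, ...
    last = seq[-1]
    if last % 2 == 1 and (len(seq) < 2 or seq[-2] != last):
        return last       # repeat the odd value once
    return last + 1

observed = [1, 1, 2, 3]
print(fibonacci_next(observed))               # 5
print(repeated_odd_successor_next(observed))  # 3
```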
Can we detect and avoid biases that don't get at the truth but are devised for some other purpose?", "id": 976, "question": "Which interpretative biases are analyzed in this paper?", "title": "Bias in Semantic and Discourse Interpretation" }, { "answers": [ "" ], "context": "Swiss German refers to any of the German varieties that are spoken in about two-thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German, which is taught in school as the official language of communication.", "id": 977, "question": "How many words are coded in the dictionary?", "title": "A Swiss German Dictionary: Variation in Speech and Writing" }, { "answers": [ "" ], "context": "This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (a small set of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription.", "id": 978, "question": "Is the model evaluated on the graphemes-to-phonemes task?", "title": "A Swiss German Dictionary: Variation in Speech and Writing" }, { "answers": [ "" ], "context": "Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and are the target of this work.", "id": 979, "question": "How does the QuaSP+Zero model work?", "title": "QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships" }, { "answers": [ "" ], "context": "There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied.", "id": 980, "question": "Which off-the-shelf tools do they use on QuaRel?", "title": "QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships" }, { "answers": [ "" ], "context": "We first describe our framework for representing questions and the knowledge to answer them.
Our dataset, described later, includes logical forms expressed in this language.", "id": 981, "question": "How do they obtain the logical forms of their questions in their dataset?", "title": "QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships" }, { "answers": [ "" ], "context": "We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: q+(property1, property2)", "id": 982, "question": "Do all questions in the dataset allow the answers to pick from 2 options?", "title": "QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships" }, { "answers": [ "" ], "context": "One of the exciting yet challenging areas of research in Intelligent Transportation Systems is developing context-awareness technologies that can enable autonomous vehicles to interact with their passengers, understand passenger context and situations, and take appropriate actions accordingly. To this end, building multi-modal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort and gain user confidence in AV interaction systems. Among the many components of such systems, intent recognition and slot filling modules are among the core building blocks for carrying out successful dialogue with passengers. As an initial attempt to tackle some of those challenges, this study introduces in-cabin intent detection and slot filling models to identify passengers' intent and extract semantic frames from natural language utterances in AVs. The proposed models are developed by leveraging a User Experience (UX) grounded, realistic (ecologically valid) in-cabin dataset. This dataset is generated with naturalistic passenger behaviors, multiple passenger interactions, and the presence of a Wizard-of-Oz (WoZ) agent in moving vehicles with noisy road conditions.", "id": 983, "question": "What is shared in the joint model?", "title": "Natural Language Interactions in Autonomous Vehicles: Intent Detection and Slot Filling from Passenger Utterances" }, { "answers": [ "" ], "context": "Long Short-Term Memory (LSTM) networks BIBREF0 are widely used for temporal sequence learning or time-series modeling in Natural Language Processing (NLP). These neural networks are commonly employed for sequence-to-sequence (seq2seq) and sequence-to-one (seq2one) modeling problems, including slot filling tasks BIBREF1 and utterance-level intent classification BIBREF2 , BIBREF3 , which are well-studied for various application domains. Bidirectional LSTMs (Bi-LSTMs) BIBREF4 are extensions of traditional LSTMs proposed to improve model performance on sequence classification problems even further. Jointly modeling slot extraction and intent recognition BIBREF2 , BIBREF5 is also explored in several architectures for task-specific applications in NLP. Using an attention mechanism BIBREF6 , BIBREF7 on top of RNNs is yet another recent breakthrough, elevating model performance by attending to the inherently crucial sub-modules of the given input.
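The joint slot-filling (seq2seq) and intent-classification (seq2one) setup in records 983–984 can be sketched as a shared Bi-LSTM encoder with two heads. This skeleton uses mean pooling for the utterance-level head and invented dimensions; the paper's actual hierarchical, attention-based architecture is richer:

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    # Shared Bi-LSTM encoder feeding a per-token slot head and an
    # utterance-level intent head.
    def __init__(self, vocab, emb_dim, hid_dim, n_slots, n_intents):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        self.slot_head = nn.Linear(2 * hid_dim, n_slots)
        self.intent_head = nn.Linear(2 * hid_dim, n_intents)

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))   # (B, T, 2H)
        slot_logits = self.slot_head(states)           # one per token
        intent_logits = self.intent_head(states.mean(dim=1))  # per utterance
        return slot_logits, intent_logits

model = JointIntentSlotModel(vocab=1000, emb_dim=32, hid_dim=64,
                             n_slots=12, n_intents=10)
slots, intents = model(torch.randint(0, 1000, (4, 9)))
print(slots.shape, intents.shape)  # (4, 9, 12) and (4, 10)
```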
There exist various architectures for building hierarchical learning models BIBREF8, BIBREF9, BIBREF10 for document-to-sentence-level and sentence-to-word-level classification tasks, which are highly domain-dependent and task-specific.", "id": 984, "question": "Are the intent labels imbalanced in the dataset?", "title": "Natural Language Interactions in Autonomous Vehicles: Intent Detection and Slot Filling from Passenger Utterances" }, { "answers": [ "" ], "context": "The evolution of scientific ideas happens when old ideas are replaced by new ones. Researchers usually conduct scientific experiments based on previous publications. They either make use of others' work as a solution to their specific problem, or they improve the results documented in previous publications by introducing new solutions. I refer to the former as positive citation and the latter as negative citation. Citation sentence examples with different sentiment polarity are shown in Table TABREF2.", "id": 985, "question": "What kernels are used in the support vector machines?", "title": "Sentiment Analysis of Citations Using Word2vec" }, { "answers": [ "" ], "context": "Mikolov et al. introduced the word2vec technique BIBREF3, which obtains word vectors by training on a text corpus. The idea of word2vec (word embeddings) originated from the concept of distributed representation of words BIBREF5. The common method to derive the vectors is using a neural probabilistic language model BIBREF6. Word embeddings have proved to be effective representations in the tasks of sentiment analysis BIBREF4, BIBREF7, BIBREF8 and text classification BIBREF9. Sadeghian and Sharafat BIBREF10 extended word embeddings to sentence embeddings by averaging the word vectors in a sentiment review statement. Their results showed that word embeddings outperformed the bag-of-words model in sentiment classification. In this work, I aim to evaluate word embeddings for sentiment analysis of citations. The research questions are:", "id": 986, "question": "What dataset is used?", "title": "Sentiment Analysis of Citations Using Word2vec" }, { "answers": [ "" ], "context": "The SentenceModel provided by LingPipe was used to segment raw text into its constituent sentences. The data I used to train the vectors has noise. For example, there are incomplete sentences mistakenly detected (e.g. Publication Year.). To address this issue, I eliminated sentences with fewer than three words.", "id": 987, "question": "What metrics are considered?", "title": "Sentiment Analysis of Citations Using Word2vec" }, { "answers": [ "" ], "context": "In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we will first introduce related work in cache-based language models and then in translation models.", "id": 988, "question": "Did the authors evaluate their system output for coherence?", "title": "Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches" }, { "answers": [ "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence." ], "context": "In this section, we briefly describe the NMT model taken as a baseline.
Without loss of generality, we adopt the NMT architecture proposed by bahdanau2015neural, with an encoder-decoder neural network.", "id": 989, "question": "What evaluations did the authors use on their system?", "title": "Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches" }, { "answers": [ "Combined per-pixel accuracy for character line segments is 74.79" ], "context": "The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever.", "id": 990, "question": "What accuracy does the CNN model achieve?", "title": "Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts" }, { "answers": [ "508" ], "context": "A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In this latter set of works, annotations for lines are obtained by considering the polygonal region formed by the union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5, TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1).", "id": 991, "question": "How many documents are in the Indiscapes dataset?", "title": "Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts" }, { "answers": [ "" ], "context": "The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1).
The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images is characterized by relatively inferior document quality and the presence of multiple languages; from a layout point of view, these images predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system.", "id": 992, "question": "What language(s) are the manuscripts written in?", "title": "Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts" }, { "answers": [ "" ], "context": "It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In major corporations, a document typically goes through several revisions involving multiple editors and authors. Users would benefit from visualizing the entire history of a document. It is worthwhile to develop a system that is able to intelligently identify, manage and represent revisions. Given a collection of text documents, our study identifies revision relationships in a completely unsupervised way. For each document in a corpus we only use its content and the last modified timestamp. We assume that a document can be revised by many users, but that the documents are not merged together. We consider collaborative editing as revising documents one by one.", "id": 993, "question": "What metrics are used to evaluate revision detection?", "title": "Semantic Document Distance Measures and Unsupervised Document Revision Detection" }, { "answers": [ "" ], "context": "The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5. More specifically, given a corpus INLINEFORM6, for any two documents INLINEFORM7, we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9, and then output all such revision pairs.", "id": 994, "question": "How large is the Wikipedia revision dump dataset?", "title": "Semantic Document Distance Measures and Unsupervised Document Revision Detection" }, { "answers": [ "There are 6 simulated datasets collected, which are initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and replacing existing documents" ], "context": "In this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED.
We also discuss the implementation of our revision detection system.", "id": 995, "question": "What are the simulated datasets collected?", "title": "Semantic Document Distance Measures and Unsupervised Document Revision Detection" }, { "answers": [ "" ], "context": "VSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithmic complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption.", "id": 996, "question": "Which are the state-of-the-art models?", "title": "Semantic Document Distance Measures and Unsupervised Document Revision Detection" }, { "answers": [ "" ], "context": "The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of words to express something other than and especially the opposite of the literal meaning.\" A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc. BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data.", "id": 997, "question": "Why is being feature-engineering free an advantage?", "title": "Multi-Task Bidirectional Transformer Representations for Irony Detection" }, { "answers": [ "" ], "context": "", "id": 998, "question": "Where did this model place in the final evaluation of the shared task?", "title": "Multi-Task Bidirectional Transformer Representations for Irony Detection" }, { "answers": [ "" ], "context": "For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long short-term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). A GRU learns based on the following:", "id": 999, "question": "What in-domain data is used to continue pre-training?", "title": "Multi-Task Bidirectional Transformer Representations for Irony Detection" }, { "answers": [ "" ], "context": "BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. The encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention, where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) a fully connected feed-forward network (FFN) that is applied to each position separately and identically. The decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack.
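To make the attention function described above concrete, the following is a minimal NumPy sketch of scaled dot-product attention, which maps queries and key-value pairs to outputs that are weighted sums of the values. The shapes, names, and toy data are illustrative assumptions of this sketch, not code from the cited papers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Map queries and key-value pairs to outputs: each output row is a
    weighted sum of the rows of V, weighted by a softmax over the scaled
    query-key dot products."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n_queries, d_v)

# Toy usage: 3 queries attending over 5 key-value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```

In multi-head attention, this same function is applied h times to learned linear projections of the queries, keys, and values, and the h outputs are concatenated.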
The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next-sentence prediction task that captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1.", "id": 1000, "question": "What dialect is used in the Google BERT model and what is used in the task data?", "title": "Multi-Task Bidirectional Transformer Representations for Irony Detection" }, { "answers": [ "" ], "context": "In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate the need for large labeled datasets. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. Specifically, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same aforementioned single-task BERT parameters. We now describe our data.", "id": 1001, "question": "What are the tasks used in the multi-task learning setup?", "title": "Multi-Task Bidirectional Transformer Representations for Irony Detection" }, { "answers": [ "rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context" ], "context": "Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions BIBREF0, to become more robust to queries BIBREF1, and to act as automatic tutors BIBREF2.", "id": 1002, "question": "What human evaluation metrics were used in the paper?", "title": "Evaluating Rewards for Question Generation Models" }, { "answers": [ "reviews under distinct product categories are considered specific domain knowledge" ], "context": "With the advancement in technology and the invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon and Walmart has created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3.
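As a concrete illustration of the classical pipeline just described (bag-of-words/TF-IDF features feeding a classifier such as NB or SVM), here is a minimal scikit-learn sketch; the toy reviews and parameter choices are assumptions for illustration, not the setup of any cited paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for consumer product reviews.
texts = ["great product, works perfectly", "terrible, broke after a day",
         "absolutely love it", "waste of money, very disappointed"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF bag-of-words features + linear SVM: the classical baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["works great", "total waste of money"]))
```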
They are shown to act as strong baselines for text classification BIBREF4. However, these models ignore word-level semantic knowledge and the sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5. Skip-gram and CBOW architectures BIBREF6 were introduced to learn high-quality word representations, which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks, which were proposed for dealing with sequential inputs, suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10. To overcome this problem, Long Short-Term Memory (LSTM) was introduced BIBREF11.", "id": 1003, "question": "For the purposes of this paper, how is something determined to be domain-specific knowledge?", "title": "Gated Convolutional Neural Networks for Domain Adaptation" }, { "answers": [ "" ], "context": "Traditionally, methods for tackling Domain Adaptation have been lexicon-based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. The SFA method BIBREF13 argues that pivot features selected from the source domain cannot reliably represent the target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain-independent words by simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN).", "id": 1004, "question": "Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought?", "title": "Gated Convolutional Neural Networks for Domain Adaptation" }, { "answers": [ "" ], "context": "In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model.", "id": 1005, "question": "Are there conceptual benefits to using GCNs over more complex architectures like attention?", "title": "Gated Convolutional Neural Networks for Domain Adaptation" }, { "answers": [ "" ], "context": "Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0. Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core.
There is no consensus in academic research on the formal definition; both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1.", "id": 1006, "question": "Do they evaluate only on English?", "title": "Deep contextualized word representations for detecting sarcasm and irony" }, { "answers": [ "" ], "context": "Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4, BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3, BIBREF12, Naive Bayes and Decision Trees BIBREF13, leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16, while computational irony detection was reviewed by BIBREF17.", "id": 1007, "question": "What are the 7 different datasets?", "title": "Deep contextualized word representations for detecting sarcasm and irony" }, { "answers": [ "" ], "context": "The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has usually been exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning.", "id": 1008, "question": "What are the three different sources of data?", "title": "Deep contextualized word representations for detecting sarcasm and irony" }, { "answers": [ "A bi-LSTM with max-pooling on top of it" ], "context": "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset; please see Table TABREF1 for a summary.", "id": 1009, "question": "What type of model are the ELMo representations used in?", "title": "Deep contextualized word representations for detecting sarcasm and irony" }, { "answers": [ "" ], "context": "Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the available development data, with no preprocessing other than that mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our models trained using the augmented version of the corresponding datasets.", "id": 1010, "question": "Which morphosyntactic features are thought to indicate irony or sarcasm?", "title": "Deep contextualized word representations for detecting sarcasm and irony" }, { "answers": [ "" ], "context": "Traditional machine reading comprehension (MRC) tasks share the single-turn setting of answering a single question related to a passage. There is usually no connection between different questions and answers to the same passage.
However, the most natural way humans seek answers is via conversation, which carries over context through the dialogue flow.", "id": 1011, "question": "Is the model evaluated on other datasets?", "title": "SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering" }, { "answers": [ "" ], "context": "In this section, we propose the neural model, SDNet, for the conversational question answering task, which is formulated as follows. Given a passage $\\mathcal {C}$ , and history question and answer utterances $Q_1, A_1, Q_2, A_2, ..., Q_{k-1}, A_{k-1}$ , the task is to generate response $A_k$ given the latest question $Q_k$ . The response is dependent on both the passage and history utterances.", "id": 1012, "question": "Does the model incorporate coreference and entailment?", "title": "SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering" }, { "answers": [ "" ], "context": "Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. For contextualized embedding, we utilize the latest result from BERT BIBREF5 . Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT.", "id": 1013, "question": "Is the incorporation of context separately evaluated?", "title": "SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering" }, { "answers": [ "" ], "context": "Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral quality metric BIBREF0. Such techniques have been extremely successful in predicting a speech denoising function, but also require parallel clean and noisy speech for training. The trained systems implicitly learn the phonetic patterns of the speech signal in the coordinated output of time-domain or time-frequency units. However, our hypothesis is that directly providing phonetic feedback can be a powerful additional signal for speech enhancement. For example, many local metrics will be more attuned to high-energy regions of speech, but not all phones of a language carry equal energy in production (compare /v/ to /ae/).", "id": 1014, "question": "Which frozen acoustic model do they use?", "title": "Phonetic Feedback for Speech Enhancement With and Without Parallel Speech Data" }, { "answers": [ "Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9" ], "context": "Speech enhancement is a rich field of work with a huge variety of techniques. Spectral feature based enhancement systems have focused on masking approaches BIBREF2, and have gained popularity with deep learning techniques BIBREF3 for ideal ratio mask and ideal binary mask estimation BIBREF4.", "id": 1015, "question": "By how much does using phonetic feedback improve state-of-the-art systems?", "title": "Phonetic Feedback for Speech Enhancement With and Without Parallel Speech Data" }, { "answers": [ "" ], "context": "Psychotic disorders affect approximately 2.5-4% of the population BIBREF0 BIBREF1. They are one of the leading causes of disability worldwide BIBREF2 and are a frequent cause of inpatient readmission after discharge BIBREF3. 
Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF4 BIBREF5. Assessing readmission risk is therefore critically needed, as it can help inform the selection of treatment interventions and implement preventive measures.", "id": 1016, "question": "What features are used?", "title": "Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction" }, { "answers": [ "" ], "context": "The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission.", "id": 1017, "question": "Do they compare to previous models?", "title": "Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction" }, { "answers": [ "" ], "context": "The readmission risk prediction task was performed at the admission level. An admission consists of a collection of all the clinical notes for a given patient written by medical personnel between inpatient admission and discharge. Every admission was labeled as either `readmitted' (i.e. the patient was readmitted within the next 30 days of discharge) or `not readmitted'. Therefore, the classification task consists of creating a single feature representation of all the clinical notes belonging to one admission, plus the past medical history and demographic information of the patient, and establishing whether that admission will be followed by a 30-day readmission or not.", "id": 1018, "question": "How do they incorporate sentiment analysis?", "title": "Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction" }, { "answers": [ "" ], "context": "Structure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:", "id": 1019, "question": "What is the dataset used?", "title": "Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction" }, { "answers": [ "" ], "context": "Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. 
These seven domains have been identified as associated with readmission risk in prior work BIBREF14.", "id": 1020, "question": "How do they extract topics?", "title": "Assessing the Efficacy of Clinical Sentiment Analysis and Topic Extraction in Psychiatric Readmission Risk Prediction" }, { "answers": [ "" ], "context": "This work is licensed under a Creative Commons Attribution 4.0 International License.", "id": 1021, "question": "How does this compare to simple interpolation between a word-level and a character-level language model?", "title": "Attending to Characters in Neural Sequence Labeling Models" }, { "answers": [ "" ], "context": "In the present paper, we analyse coreference in the output of three neural machine translation (NMT) systems that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention; therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations.", "id": 1022, "question": "What translationese effects are seen in the analysis?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences; if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity with a referent mentioned in another textual part (not necessarily in neighbouring sentences), contributing to text connectedness. An addressee follows the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed.", "id": 1023, "question": "What languages are seen in the news and TED datasets?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "Differences between languages and genres in the linguistic means of expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there are a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference.
For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size).", "id": 1024, "question": "How are the coreference chain translations evaluated?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29.", "id": 1025, "question": "How are the (possibly incorrect) coreference chains in the MT outputs annotated?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration.", "id": 1026, "question": "Which three neural machine translation systems are analyzed?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus.", "id": 1027, "question": "Which coreference phenomena are analyzed?", "title": "Analysing Coreference in Transformer Outputs" }, { "answers": [ "" ], "context": "In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. 
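To illustrate the triple-as-labeled-multigraph view just described, here is a minimal sketch using networkx; the scholarly example triples follow the ones given in the text, while the venue triple and the choice of library are assumptions of this sketch, not the paper's implementation.

```python
import networkx as nx

# Each triple (h, t, r) becomes a directed edge h -> t with label r.
triples = [("AuthorA", "Paper1", "write"),
           ("Paper1", "Paper2", "cite"),
           ("Paper1", "VenueX", "published_in")]  # hypothetical extra triple

G = nx.MultiDiGraph()
for h, t, r in triples:
    G.add_edge(h, t, label=r)

# Query all facts with a given head entity.
for h, t, data in G.out_edges("Paper1", data=True):
    print((h, t, data["label"]))
# ('Paper1', 'Paper2', 'cite')
# ('Paper1', 'VenueX', 'published_in')
```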
Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite).", "id": 1028, "question": "What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space?", "title": "Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space" }, { "answers": [ "" ], "context": "Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration.", "id": 1029, "question": "What are the uncanny semantic structures of the embedding space?", "title": "Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space" }, { "answers": [ "" ], "context": "For a more in depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9.", "id": 1030, "question": "What is the general framework for data exploration by semantic queries?", "title": "Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space" }, { "answers": [ "" ], "context": "The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is:", "id": 1031, "question": "What data exploration is supported by the analysis of these semantic structures?", "title": "Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space" }, { "answers": [ "" ], "context": "Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document.", "id": 1032, "question": "what are the existing models they compared with?", "title": "NumNet: Machine Reading Comprehension with Numerical Reasoning" }, { "answers": [ "" ], "context": "Event temporal relation understanding is a major component of story/narrative comprehension. 
It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 .", "id": 1033, "question": "Do they report results only on English data?", "title": "Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding" }, { "answers": [ "" ], "context": "We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section.", "id": 1034, "question": "What conclusions do the authors draw from their detailed analyses?", "title": "Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding" }, { "answers": [ "" ], "context": "is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events.", "id": 1035, "question": "Do the BERT-based embeddings improve results?", "title": "Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding" }, { "answers": [ "" ], "context": "CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system.", "id": 1036, "question": "What were the traditional linguistic feature-based models?", "title": "Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding" }, { "answers": [ "" ], "context": "Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset.", "id": 1037, "question": "What type of baseline are established for the two datasets?", "title": "Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding" }, { "answers": [ "" ], "context": "Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. 
Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture.", "id": 1038, "question": "What model achieves state of the art performance on this task?", "title": "Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks" }, { "answers": [ "" ], "context": "The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning.", "id": 1039, "question": "Which multitask annotated corpus is used?", "title": "Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks" }, { "answers": [ "" ], "context": "The investigated multitask model is based upon the DCGAN architecture described in Section SECREF2 and is implemented in TensorFlow. For emotion classification a fully connected layer is attached to the final convolutional layer of the DCGAN's discriminator. The output of this layer is then fed to two separate fully connected layers, one of which outputs a valence label and the other of which outputs an activation label. This setup is shown visually in Figure FIGREF4 . Through this setup, the model is able to take advantage of unlabeled data during training by feeding it through the DCGAN layers in the model, and is also able to take advantage of multitask learning and train the valence and activation outputs simultaneously.", "id": 1040, "question": "What are the tasks in the multitask learning setup?", "title": "Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks" }, { "answers": [ "" ], "context": "Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . 
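A minimal sketch of the fuzzy 5-point label encoding described above: an integer rating maps to a one-hot vector, while an averaged rating such as 4.5 spreads its probability mass over the two neighbouring positions. The function name and rounding scheme are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fuzzy_label(rating, n_classes=5):
    """Encode a 1-5 rating as a probability vector over 5 classes.
    fuzzy_label(5) -> [0, 0, 0, 0, 1]; fuzzy_label(4.5) -> [0, 0, 0, 0.5, 0.5]."""
    vec = np.zeros(n_classes)
    lo, hi = int(np.floor(rating)) - 1, int(np.ceil(rating)) - 1
    if lo == hi:
        vec[lo] = 1.0
    else:
        vec[hi] = rating - np.floor(rating)  # mass on the upper class
        vec[lo] = 1.0 - vec[hi]              # remainder on the lower class
    return vec

print(fuzzy_label(5))    # [0. 0. 0. 0. 1.]
print(fuzzy_label(4.5))  # [0. 0. 0. 0.5 0.5]
```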
It should be noted here that we had generally greater success with this fuzzy label method than when training the neural network model on the valence label directly, i.e., treating it as a classification task rather than regression.", "id": 1041, "question": "What are the subtle changes in voice which have been previously overshadowed?", "title": "Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks" }, { "answers": [ "" ], "context": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "id": 1042, "question": "how are rare words defined?", "title": "Subword-augmented Embedding for Cloze Reading Comprehension" }, { "answers": [ "" ], "context": "The reading comprehension tasks concerned here can be roughly categorized into user-query and cloze-style types according to the answer form. Answers in the former are usually spans of text, while in the cloze-style task the answers are words or phrases, which makes the latter more susceptible to OOV issues and inspires us to select the cloze style as our testbed for SAW strategies. Our preliminary study shows that even for the advanced word-character based GA reader, OOV answers still account for nearly one fifth of the error results. This also motivates us to explore better representations to further improve performance.", "id": 1043, "question": "which public datasets were used?", "title": "Subword-augmented Embedding for Cloze Reading Comprehension" }, { "answers": [ "AS Reader, GA Reader, CAS Reader" ], "context": "Words in most languages can usually be split into meaningful subword units regardless of the writing form. For example, “indispensable\" could be split into the following subwords: INLINEFORM0.", "id": 1044, "question": "what are the baselines?", "title": "Subword-augmented Embedding for Cloze Reading Comprehension" }, { "answers": [ "They were able to create a language model from the dataset, but did not test." ], "context": "Kurdish language processing requires effort from interested researchers and scholars to close the large gap it faces in terms of resource scarcity. The areas that need attention and the efforts required have been addressed in BIBREF0.", "id": 1045, "question": "What are the results of the experiment?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "extracted text from Sorani Kurdish books of primary school and randomly created sentences" ], "context": "The work on Automatic Speech Recognition (ASR) has a long history, but we could not retrieve any literature on Kurdish ASR at the time of compiling this article. However, the literature on ASR for other languages is rich. Also, researchers have widely used CMUSphinx for ASR, though other technologies have been emerging in recent years BIBREF1.", "id": 1046, "question": "How was the dataset collected?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "" ], "context": "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences.", "id": 1047, "question": "What is the size of the dataset?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "" ], "context": "The phoneset includes 34 phones for Sorani Kurdish.
A sample of the file content is given below.", "id": 1048, "question": "How many different subjects does the dataset contain?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "1" ], "context": "The filler phone file usually contains fillers in spoken sentences. In our basic sentences, we have only considered silence. Therefore it only includes three lines to indicate the possible pauses at the beginning and end of the sentences and also after each word.", "id": 1049, "question": "How many annotators participated?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "" ], "context": "This file includes the list of files in which the narrated sentences have been recorded. The recorded files are in wav formats. However, in the file IDs, the extension is omitted. A sample of the file content is given below. The test directory is the directory in which the files are located.", "id": 1050, "question": "How long is the dataset?", "title": "Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset" }, { "answers": [ "" ], "context": "We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective.", "id": 1051, "question": "Do the authors mention any possible confounds in their study?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "" ], "context": "Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 .", "id": 1052, "question": "What is the relationship between the co-voting and retweeting patterns?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "" ], "context": "In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. 
The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis).", "id": 1053, "question": "Does the analysis find that coalitions are formed in the same way for different policy areas?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "" ], "context": "In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities.", "id": 1054, "question": "What insights does the analysis give about the cohesion of political groups in the European parliament?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "" ], "context": "We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups.", "id": 1055, "question": "Do the authors account for differences in usage of Twitter amongst MEPs in their model?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "" ], "context": "In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote.", "id": 1056, "question": "Did the authors examine if any of the MEPs used the disclaimer that retweeting does not imply endorsement on their Twitter profile?", "title": "Cohesion and Coalition Formation in the European Parliament: Roll-Call Votes and Twitter Activities" }, { "answers": [ "By visualizing syntactic distance estimated by the parsing network" ], "context": "Linguistic theories generally regard natural language as consisting of two parts: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0. To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings. Current neural language models can provide meaningful word representations BIBREF1, BIBREF2, BIBREF3. However, standard recurrent neural networks only implicitly model syntax, and thus fail to efficiently use structure information BIBREF4.", "id": 1057, "question": "How do they show their model discovers underlying syntactic structure?", "title": "Neural Language Modeling by Jointly Learning Syntax and Lexicon" }, { "answers": [ "" ], "context": "The idea of introducing some structures, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BIBREF5, BIBREF4 learn a bottom-up encoder, taking as input a parse tree supplied by an external parser.
There are models that are able to infer a tree during test time, while still needing a supervised signal on tree structure during training. For example, BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , etc. Moreover, BIBREF22 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but unlike theirs, it is a recurrent network that implicitly models tree structure through attention.", "id": 1058, "question": "Which dataset do they experiment with?", "title": "Neural Language Modeling by Jointly Learning Syntax and Lexicon" }, { "answers": [ "BPC, Perplexity" ], "context": "Suppose we have a sequence of tokens INLINEFORM0 governed by the tree structure shown in Figure FIGREF4 . The leaves INLINEFORM1 are observed tokens. Node INLINEFORM2 represents the meaning of the constituent formed by its leaves INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 stand for the leftmost child and rightmost child. Root INLINEFORM6 represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings.", "id": 1059, "question": "How do they measure performance of language model tasks?", "title": "Neural Language Modeling by Jointly Learning Syntax and Lexicon" }, { "answers": [ "they are used as additional features in a supervised classification task" ], "context": "", "id": 1060, "question": "How are content clusters used to improve the prediction of incident severity?", "title": "Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records" }, { "answers": [ "A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18" ], "context": "The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among many other details. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable.", "id": 1061, "question": "What cluster identification method is used in this paper?", "title": "Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records" }, { "answers": [ "" ], "context": "Natural language based consumer products, such as Apple Siri and Amazon Alexa, have found widespread use in the last few years. A key requirement for these conversational systems is the ability to answer factual questions from the users, such as those about movies, music, and artists.", "id": 1062, "question": "How can a neural model be used for a retrieval if the input is the entire Wikipedia?", "title": "Question Answering from Unstructured Text by Retrieval and Comprehension" }, { "answers": [ "" ], "context": "The shared tasks organized annually at WMT provide important benchmarks used in the MT community. Most of these shared tasks include English data, which contributes to making English the most resource-rich language in MT and NLP.
In the most popular WMT shared task for example, the News task, MT systems have been trained to translate texts from and to English BIBREF0, BIBREF1.", "id": 1063, "question": "Which algorithm is used in the UDS-DFKI system?", "title": "UDS--DFKI Submission to the WMT2019 Similar Language Translation Shared Task" }, { "answers": [ "" ], "context": "With the widespread use of MT technology and the commercial and academic success of NMT, there has been more interest in training systems to translate between languages other than English BIBREF5. One reason for this is the growing need of direct translation between pairs of similar languages, and to a lesser extent language varieties, without the use of English as a pivot language. The main challenge is to overcome the limitation of available parallel data taking advantage of the similarity between languages. Studies have been published on translating between similar languages (e.g. Catalan - Spanish BIBREF5) and language varieties such as European and Brazilian Portuguese BIBREF6, BIBREF7. The study by lakew2018neural tackles both training MT systems to translate between European–Brazilian Portuguese and European–Canadian French, and two pairs of similar languages Croatian–Serbian and Indonesian–Malay.", "id": 1064, "question": "Does the use of out-of-domain data improve the performance of the method?", "title": "UDS--DFKI Submission to the WMT2019 Similar Language Translation Shared Task" }, { "answers": [ "" ], "context": "Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in the long history of development. Prescriptions consisting of herbal medication are the most important form of TCM treatment. TCM practitioners prescribe according to a patient's symptoms that are observed and analyzed by the practitioners themselves instead of using medical equipment, e.g., the CT. The patient takes the decoction made out of the herbal medication in the prescription. A complete prescription includes the composition of herbs, the proportion of herbs, the preparation method and the doses of the decoction. In this work, we focus on the composition part of the prescription, which is the most essential part of the prescription.", "id": 1065, "question": "Do they impose any grammatical constraints over the generated output?", "title": "Exploration on Generating Traditional Chinese Medicine Prescriptions from Symptoms with an End-to-End Approach" }, { "answers": [ "They think it will help human TCM practitioners make prescriptions." ], "context": "There has not been much work concerning computational TCM. zhou2010development attempted to build a TCM clinical data warehouse so that the TCM knowledge can be analyzed and used. This is a typical way of collecting data, since the number of prescriptions given by the practitioners in the clinics is very large. However, in reality, most of the TCM doctors do not refer to the constructed digital systems, because the quality of the input data tends to be poor. Therefore, we choose prescriptions in the classics (books or documentation) of TCM. 
Although the available data can be fewer than the clinical data, it guarantees the quality of the prescriptions.", "id": 1066, "question": "Why did they think this was a good idea?", "title": "Exploration on Generating Traditional Chinese Medicine Prescriptions from Symptoms with an End-to-End Approach" }, { "answers": [ "" ], "context": "Human languages are intertwined with their cultures and societies, having evolved together, reflecting them and in turn shaping them BIBREF0 , BIBREF1 . Part-of-day nouns (e.g. ‘morning’ or ‘night’) are an example of this, as their meaning depends on how each language's speakers organize their daily schedule. For example, while the morning in English-speaking countries is assumed to end at noon, the Spanish term (‘mañana’) is understood to span until lunch time, which normally takes place between 13:00 and 15:00 in Spain. It is fair to relate this difference to cultural (lunch being the main meal of the day in Spain, as opposed to countries like the uk, and therefore being a milestone in the daily timetable) and sociopolitical factors (the late lunch time being influenced by work schedules and the displacement of the Spanish time zones with respect to solar time). Similar differences have been noted for different pairs of languages BIBREF2 and for cultures using the same language BIBREF3 , based on manual study, field research and interviews with natives. Work on automatically extracting the semantics of part-of-day nouns is scarce, as classic corpora are not timestamped. Reiter2003a,Reiter2003b overcome it by analyzing weather forecasts and aligning them to timestamped simulations, giving approximate groundings for time-of-day nouns and showing idiolectal variation on the term ‘evening’, but the work is limited to English.", "id": 1067, "question": "How many languages are included in the tweets?", "title": "Grounding the Semantics of Part-of-Day Nouns Worldwide using Twitter" }, { "answers": [ "" ], "context": "To ground the semantics of greetings we used 5 terms as seeds: ‘good morning’, ‘good afternoon’, ‘good evening’, ‘good night’ and ‘hello’ (a time-unspecific greeting used for comparison). We translated them to 53 languages and variants using Bing translator. We use italics to refer to greetings irrespective of the language. 172,802,620 tweets were collected from Sept. 2 to Dec. 7 2016.", "id": 1068, "question": "What languages are explored?", "title": "Grounding the Semantics of Part-of-Day Nouns Worldwide using Twitter" }, { "answers": [ "" ], "context": "Given a country, some of the tweets are written in foreign languages for reasons like tourism or immigration. This paper refers to tweets written in official or de facto languages, unless otherwise specified. Also, analyzing differences according to criteria such as gender or solar time can be relevant. As determining the impact of all those is a challenge on its own, we focus on the primary research question: can we learn semantics of the part-of-day nouns from simple analysis of tweets? To verify data quality, good morning tweets were revised: out of 1 000 random tweets from the usa, 97.9% were legitimate greetings and among the rest, some reflected somehow that the user just started the day (e.g ‘Didn't get any good morning sms’). 
We did the same for Spain (98,1% legitimate), Brazil (97.8%) and India (99.6%).", "id": 1069, "question": "Which countries did they look at?", "title": "Grounding the Semantics of Part-of-Day Nouns Worldwide using Twitter" }, { "answers": [ "A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer." ], "context": "Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge Base Population BIBREF0 , Knowledge Graph Acquisition BIBREF1 , and Natural Language Understanding. However, existing IE systems still cannot produce high-quality relation tuples to effectively support downstream applications.", "id": 1070, "question": "What QA models were used?", "title": "QA4IE: A Question Answering based Framework for Information Extraction" }, { "answers": [ "" ], "context": "Most of previous IE systems can be divided into Relation Extraction (RE) based systems BIBREF2 , BIBREF3 and Open IE systems BIBREF4 , BIBREF5 , BIBREF6 .", "id": 1071, "question": "Can this approach model n-ary relations?", "title": "QA4IE: A Question Answering based Framework for Information Extraction" }, { "answers": [ "" ], "context": "To overcome the above weaknesses of existing IE systems, we propose a novel IE framework named QA4IE to perform document level general IE with the help of state-of-the-art approaches in Question Answering (QA) and Machine Reading Comprehension (MRC) area.", "id": 1072, "question": "Was this benchmark automatically created from an existing dataset?", "title": "QA4IE: A Question Answering based Framework for Information Extraction" }, { "answers": [ "" ], "context": "The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.", "id": 1073, "question": "How does morphological analysis differ from morphological inflection?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers.", "id": 1074, "question": "What was the criterion used for selecting the lemmata?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "Eastern Chatino languages , including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. 
Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.", "id": 1075, "question": "What are the architectures used for the three tasks?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.", "id": 1076, "question": "Which language family does Chatino belong to?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:", "id": 1077, "question": "What system is used as baseline?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection,\" inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.", "id": 1078, "question": "How was annotation done?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "" ], "context": "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention-enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags; the model is implemented in DyNet. The baseline results are shown in Table .
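For intuition, a minimal sketch of such a character-level encoder-decoder tagger follows, written here in PyTorch rather than DyNet; the class name, dimensions, and attention form are illustrative assumptions, not the authors' exact configuration.

```python
# Character-to-tag encoder-decoder sketch: BiLSTM encoder over characters,
# attention-enhanced LSTM decoder emitting one morphological tag per step.
import torch
import torch.nn as nn

class CharToTagSeq2Seq(nn.Module):
    def __init__(self, n_chars, n_tags, emb=64, hid=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)
        self.encoder = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.tag_emb = nn.Embedding(n_tags, emb)
        self.decoder = nn.LSTMCell(emb + 2 * hid, 2 * hid)
        self.attn = nn.Linear(2 * hid, 2 * hid)
        self.out = nn.Linear(2 * hid, n_tags)

    def forward(self, chars, tags_in):
        enc, _ = self.encoder(self.char_emb(chars))   # (B, S, 2*hid)
        h = enc.new_zeros(chars.size(0), enc.size(-1))
        c = torch.zeros_like(h)
        logits = []
        for t in range(tags_in.size(1)):
            # dot-product attention over encoder states
            scores = torch.bmm(enc, self.attn(h).unsqueeze(2)).squeeze(2)
            ctx = torch.bmm(scores.softmax(1).unsqueeze(1), enc).squeeze(1)
            step_in = torch.cat([self.tag_emb(tags_in[:, t]), ctx], dim=1)
            h, c = self.decoder(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)             # (B, T, n_tags)
```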
The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.", "id": 1079, "question": "How was the data collected?", "title": "A Resource for Studying Chatino Verbal Morphology" }, { "answers": [ "They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline" ], "context": "With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, for cases like targeted advertising and other use cases, such as security, but it is also interesting from a linguistic standpoint.", "id": 1080, "question": "How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?", "title": "N-GrAM: New Groningen Author-profiling Model" }, { "answers": [ "Gender prediction task" ], "context": "After an extensive grid-search we submitted as our final run, a simple SVM system (using the scikit-learn LinearSVM implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting with sublinear term frequency scaling, where instead of the standard term frequency the following is used:", "id": 1081, "question": "On which task does do model do worst?", "title": "N-GrAM: New Groningen Author-profiling Model" }, { "answers": [ "Variety prediction task" ], "context": "The training dataset provided consist of 11400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variation task and also shows the data distribution across languages.", "id": 1082, "question": "On which task does do model do best?", "title": "N-GrAM: New Groningen Author-profiling Model" }, { "answers": [ "" ], "context": "The need to classify sentiment based on the multi-modal input arises in many different problems in customer related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; then feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Character method is superior to other existing methods. The Super Characters method also shows that the pretrained models on a larger dataset help improve accuracy by finetuning the CNN model on a smaller dataset. Compared with from-scratch trained Super Characters model, the finetuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method into Latin Languages. 
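As an aside, a toy sketch of the text-to-image step on which this method rests, assuming PIL-based rendering; the 224x224 canvas, 8x8 grid, and font path are illustrative assumptions (the font file must exist locally for the call to succeed).

```python
# Render a sentence onto a fixed character grid so a CNN can classify
# the resulting image, in the spirit of the Super Characters method.
from PIL import Image, ImageDraw, ImageFont

def render_super_characters(text, size=224, grid=8,
                            font_path="NotoSansCJK-Regular.ttc"):
    cell = size // grid
    font = ImageFont.truetype(font_path, cell)  # hypothetical font file
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(text[: grid * grid]):  # truncate to grid capacity
        row, col = divmod(i, grid)
        draw.text((col * cell, row * cell), ch, fill="black", font=font)
    return img  # this image is what gets fed to the CNN classifier

img = render_super_characters("An example sentence to render.")
```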
With the wide availability of low-power CNN accelerator chips BIBREF2, BIBREF3, the Super Characters method has great potential for large-scale deployment, owing to its low power consumption and fast inference speed. It is also easy to deploy. Recent work extends its applications to chatbots BIBREF4, image captioning BIBREF5, and tabular data machine learning BIBREF6.", "id": 1083, "question": "Is their implementation on CNN-DSA compared to GPU implementation in terms of power consumption, accuracy and speed?", "title": "Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device" }, { "answers": [ "" ], "context": "For multi-modal sentiment analysis, we can simply split the image into two parts: one for the text input, and the other for the tabular data, such that both can be embedded into the Super Characters image. The CNN accelerator chip comes with a Model Development Kit (MDK) for CNN model training, which takes the two-dimensional Super Characters images and produces a fixed-point model. The Software Development Kit (SDK) is then used to load the model into the chip and send commands to the CNN accelerator chip, such as reading an image, or forward-passing the image through the network to get the inference result. The advantage of using the CNN accelerator is low power: it consumes only 300 mW for an input of size 3x224x224 RGB image at a speed of 140 fps. Compared with other models using GPU or FPGA, this solution implements the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Characters image. This has shown good results on system implementations for NLP applications BIBREF9.", "id": 1084, "question": "Does this implementation on CNN-DSA lead to diminished performance?", "title": "Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device" }, { "answers": [ "" ], "context": "The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. The other six columns are labels for each of the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes. So there will be 60 models to be trained for a 10-fold validation. The test data set has 5000 samples with only the ten columns of attributes. The system run will give labels on these test samples based on the 10-fold training.", "id": 1085, "question": "How is Super Character method modified to handle tabular data also?", "title": "Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device" }, { "answers": [ "" ], "context": "In text mining and Natural Language Processing (NLP), a lemmatizer is a tool used to determine the basic form of a word (lemma). Lemmatization differs from stemming in the way this base form is determined. While stemmers chop off word endings to reach the common stem of words, lemmatizers take into account the morphology of the words in order to produce the common morphological base form, i.e., the form of the word found in a dictionary.
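A toy contrast of the two normalization strategies, with a made-up mini-lexicon standing in for a real morphological dictionary:

```python
# Suffix-chopping stemming vs. dictionary-based lemmatization; the
# LEXICON below is a fabricated illustration, not a real resource.
LEXICON = {"better": "good", "was": "be", "studies": "study"}

def naive_stem(word):
    for suffix in ("ies", "ing", "ed", "es", "s"):
        if word.endswith(suffix):
            return word[: -len(suffix)]  # chop the ending, no morphology
    return word

def lookup_lemma(word):
    # a real lemmatizer consults morphology; here we fall back to the form
    return LEXICON.get(word, word)

print(naive_stem("studies"), lookup_lemma("studies"))  # stud vs. study
print(naive_stem("was"), lookup_lemma("was"))          # wa vs. be
```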
This type of text normalization is an important step in pre-processing morphologically complex languages, like Icelandic, before conducting various tasks, such as machine translation, text mining and information retrieval.", "id": 1086, "question": "How are the substitution rules built?", "title": "Nefnir: A high accuracy lemmatizer for Icelandic" }, { "answers": [ "" ], "context": "The most basic approach to lemmatization is a simple look-up in a lexicon. This method has the obvious drawback that words that are not in the lexicon cannot be processed. To solve this, word transformation rules have been used to analyze the surface form of the word (the token) in order to produce the base form. These rules can either be hand-crafted or learned automatically using machine learning. When hand-crafting the rules that are used to determine the lemmas, a thorough knowledge of the morphological features of the language is needed. This is a time-consuming task, further complicated in Icelandic by the extensive inflectional system BIBREF1 . An example of a hand-crafted lemmatizer is the morphological analyzer that is part of the Czech Dependency Treebank BIBREF3 .", "id": 1087, "question": "Which dataset do they use?", "title": "Nefnir: A high accuracy lemmatizer for Icelandic" }, { "answers": [ "" ], "context": "Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.", "id": 1088, "question": "What baseline is used to compare the experimental results against?", "title": "Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation" }, { "answers": [ "The training dataset is augmented by swapping all gendered words by their other gender counterparts" ], "context": "Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations.", "id": 1089, "question": "How does counterfactual data augmentation aim to tackle bias?", "title": "Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation" }, { "answers": [ "Gendered characters in the dataset" ], "context": "Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. 
Further examples are given in Table TABREF1.", "id": 1090, "question": "In the targeted data collection approach, what type of data is targetted?", "title": "Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation" }, { "answers": [ "" ], "context": "Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.", "id": 1091, "question": "Which language models do they compare against?", "title": "Efficient Vector Representation for Documents through Corruption" }, { "answers": [ "" ], "context": "Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and its variants BIBREF14 , BIBREF15 , and distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .", "id": 1092, "question": "Is their approach similar to making an averaged weighted sum of word vectors, where weights reflect word frequencies?", "title": "Efficient Vector Representation for Documents through Corruption" }, { "answers": [ "Informative are those that will not be suppressed by regularization performed." ], "context": "Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.", "id": 1093, "question": "How do they determine which words are informative?", "title": "Efficient Vector Representation for Documents through Corruption" }, { "answers": [ "" ], "context": "We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh.", "id": 1094, "question": "What is their best performance on the largest language direction dataset?", "title": "Microsoft Research Asia's Systems for WMT19" }, { "answers": [ "" ], "context": "The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\\mathcal {X}$ to domain $\\mathcal {Y}$) and dual task (mapping from domain $\\mathcal {Y}$ to $\\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. 
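As a schematic illustration, the round-trip training signal of plain dual learning (not the full multi-agent MADL recipe) can be sketched as below; the model and scoring interfaces are assumed, so this is pseudo-Python rather than a runnable training loop.

```python
# One dual-learning step: translate x with the primal model, score the
# output with a target-side language model, and reward reconstruction
# of x by the dual model. All object interfaces are assumptions.
def dual_learning_step(x, primal, dual, lm_y, optimizer):
    y_hat = primal.sample(x)            # X -> Y by the primal model
    fluency = lm_y.log_prob(y_hat)      # target-side LM reward
    recon = dual.log_prob(y_hat, x)     # Y -> X reconstruction reward
    reward = 0.5 * fluency + 0.5 * recon
    loss = -reward * primal.log_prob(x, y_hat)  # policy-gradient style
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

MADL extends this basic loop by introducing multiple primal and dual agents whose feedback is combined.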
It was integrated into our submitted systems for German$\\leftrightarrow $English and German$\\leftrightarrow $French translations.", "id": 1095, "question": "How does soft contextual data augmentation work?", "title": "Microsoft Research Asia's Systems for WMT19" }, { "answers": [ "" ], "context": "Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\\rightarrow $English and English$\\rightarrow $Lithuanian translations.", "id": 1096, "question": "How does muli-agent dual learning work?", "title": "Microsoft Research Asia's Systems for WMT19" }, { "answers": [ "" ], "context": "As well known, the evolution of neural network architecture plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architecture in a continuous and more compact space given the historically observed architectures and their performances. It was applied in English$\\leftrightarrow $Finnish translations in our submitted systems.", "id": 1097, "question": "Which language directions are machine translation systems of WMT evaluated on?", "title": "Microsoft Research Asia's Systems for WMT19" }, { "answers": [ "" ], "context": "Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.", "id": 1098, "question": "Approximately how much computational cost is saved by using this model?", "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" }, { "answers": [ "" ], "context": "Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.", "id": 1099, "question": "What improvement does the MOE model make over the SOTA on machine translation?", "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" }, { "answers": [ "Perpexity is improved from 34.7 to 28.0." ], "context": "Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. 
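For intuition before surveying prior work, a minimal sketch of such a sparsely-gated layer with top-k softmax gating; the dimensions and the per-expert routing loop are illustrative assumptions, not the paper's optimized implementation.

```python
# Sparsely-gated MoE: a linear gate scores experts per input, only the
# top-k experts run, and their outputs are combined with renormalized
# softmax weights.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x):                    # x: (batch, d_model)
        topv, topi = self.w_gate(x).topk(self.k, dim=-1)
        weights = topv.softmax(dim=-1)       # renormalize over the k winners
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e    # inputs routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```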
Different types of expert architectures have been proposed, such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the form of a mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.", "id": 1100, "question": "What improvement does the MOE model make over the SOTA on language modelling?", "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" }, { "answers": [ "" ], "context": "The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks\" INLINEFORM1 , and a “gating network\" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept same-sized inputs and produce same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.", "id": 1101, "question": "How is the correct number of experts to use decided?", "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" }, { "answers": [ "" ], "context": "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0", "id": 1102, "question": "What equations are used for the trainable gating network?", "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" }, { "answers": [ "Accuracy of best interpretable system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818." ], "context": "Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in correcting the spelling of their texts.", "id": 1103, "question": "What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?", "title": "Evaluation of basic modules for isolated spelling error correction in Polish texts" }, { "answers": [ "" ], "context": "Published work on language correction for Polish dates back at least to the 1970s, when the simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool.
The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3 .", "id": 1104, "question": "What solutions are proposed for error detection and context awareness?", "title": "Evaluation of basic modules for isolated spelling error correction in Polish texts" }, { "answers": [ "" ], "context": "The methods that we evaluated as baselines are the ones we consider to be basic, with moderate potential for yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in the Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.", "id": 1105, "question": "How is PIEWi annotated?", "title": "Evaluation of basic modules for isolated spelling error correction in Polish texts" }, { "answers": [ "" ], "context": "A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum with the cosine distance between word vectors. This is based on the observation that trained vector models of distributional semantics also contain representations of spelling errors, if these were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.", "id": 1106, "question": "What methods are tested in PIEWi?", "title": "Evaluation of basic modules for isolated spelling error correction in Polish texts" }, { "answers": [ "" ], "context": "Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed the characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since neural networks nowadays enjoy wide awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.
In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should adequately represent semantic dialog actions and be fluent enough to engage users' attention. As the ultimate interface for interacting with users, NLG has a significant impact on the users' experience.", "id": 1108, "question": "What were the criteria for human evaluation?", "title": "Few-shot Natural Language Generation for Task-Oriented Dialog" }, { "answers": [ "" ], "context": "A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of the user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems: how to produce natural language responses conditioned on dialog acts.", "id": 1109, "question": "What automatic metrics are used to measure performance of the system?", "title": "Few-shot Natural Language Generation for Task-Oriented Dialog" }, { "answers": [ "" ], "context": "We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $\\mathcal {D}=\\lbrace (\\mathcal {A}_n, \\mathbf {x}_n)\\rbrace _{n=1}^{N}$ (each a dialog act $\\mathcal {A}_n$ paired with a response $\\mathbf {x}_n$), our goal is to build a statistical model parameterized by $\\theta $ to characterize $p_{\\theta }(\\mathbf {x} | \\mathcal {A})$. To leverage the sequential structure of the response, one may further decompose the joint probability of $\\mathbf {x}$ using the chain rule, casting an auto-regressive generation process as follows: $p_{\\theta }(\\mathbf {x} | \\mathcal {A}) = \\prod _{t=1}^{T} p_{\\theta }(x_t | x_{<t}, \\mathcal {A})$, where $x_t$ is the $t$-th token and $T$ is the length of $\\mathbf {x}$.", "id": 1110, "question": "What existing methods is SC-GPT compared to?", "title": "Few-shot Natural Language Generation for Task-Oriented Dialog" }, { "answers": [ "French-English" ], "context": "Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques.
Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.", "id": 1111, "question": "Which language-pair had the better performance?", "title": "Using Whole Document Context in Neural Machine Translation" }, { "answers": [ "" ], "context": "Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.", "id": 1112, "question": "Which datasets were used in the experiment?", "title": "Using Whole Document Context in Neural Machine Translation" }, { "answers": [ "" ], "context": "We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.", "id": 1113, "question": "What evaluation metrics did they use?", "title": "Using Whole Document Context in Neural Machine Translation" }, { "answers": [ "" ], "context": "The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .", "id": 1114, "question": "Do they evaluate only on English datasets?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "Gang violence is a well studied social science topic dating back to 1927 BIBREF17 . However, the notions of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples on how hip-hop music shared on social media websites targeted at harassing rival gang members often ended up in real-world collisions among those gangs. Decker et al. and Patton et al. 
have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .", "id": 1115, "question": "What are the differences in the use of emojis between gang member and the rest of the Twitter population?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.", "id": 1116, "question": "What are the differences in the use of YouTube links between gang member and the rest of the Twitter population?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:", "id": 1117, "question": "What are the differences in the use of images between gang member and the rest of the Twitter population?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description by the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location neutral keywords discussed in Section SECREF3 . 
Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but are not gang members themselves, captures local languages used by family/friends of gang members and ordinary people in a neighborhood where gangs operate.", "id": 1118, "question": "What are the differences in language use between gang members and the rest of the Twitter population?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as a vector of term frequencies where the terms were collected from one or more feature sets described above.", "id": 1119, "question": "How is gang membership verified?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study their predictive power in isolation. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the Twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:", "id": 1120, "question": "Do the authors provide evidence that 'most' street gang members use Twitter to intimidate others?", "title": "Finding Street Gang Members on Twitter" }, { "answers": [ "" ], "context": "The exponential increase of interactions on social media platforms like Facebook and Twitter has generated a huge amount of data. These interactions have had not only positive but also negative effects on billions of people, owing to the large number of aggressive comments (expressing hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicide BIBREF1.
In this paper we concentrate on problems related to aggressiveness.", "id": 1121, "question": "What is English mixed with in the TRAC dataset?", "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts" }, { "answers": [ "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features" ], "context": "Several works on aggression identification were submitted at TRAC 2018; among them, some approaches use an ensemble of multiple statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models, like BIBREF10, BIBREF11, BIBREF12, have used an ensemble of statistical and deep learning models. In these models, the statistical part uses additional features from text analysis, such as part-of-speech tags, punctuation, emotion, and emoticons. Models like BIBREF13 have used an ensemble of deep learning models based on majority voting.", "id": 1122, "question": "Which psycholinguistic and basic linguistic features are used?", "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts" }, { "answers": [ "Systems do not perform well both in Facebook and Twitter texts" ], "context": "In this section, we describe our system architecture for the aggressiveness classifier. In Section SECREF23 we describe the data preprocessing applied to the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we describe the architectures of the different deep learning models, namely Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM, respectively. Finally, in Section SECREF49, we describe the model-averaging-based classification model, which combines the prediction probabilities from the three deep learning architectures discussed above (see Figure FIGREF22 for a block diagram of the system architecture).", "id": 1123, "question": "How have the differences in communication styles between Twitter and Facebook increased the complexity of the problem?", "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts" }, { "answers": [ "" ], "context": "We ensure the text is well formatted before feeding it to the embedding layer. First, we detect non-English text (which is rare) and translate it to English using Google Translate. Still, there are some code-mixed words like "mc", "bc" and other English abbreviations and spelling errors, like "nd" in place of "and" and "u" in place of "you", which cause the deep learning model to be confused by sentences with the same meaning. We follow the preprocessing strategy of BIBREF17 to normalize the abbreviations and remove spelling errors, URLs and punctuation marks, converting emojis to their descriptions.", "id": 1124, "question": "What are the key differences in communication styles between Twitter and Facebook?", "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts" }, { "answers": [ "None" ], "context": "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features.
The first one is the Emotion Sensor Feature, which uses a statistical model to classify words into 7 different classes based on sentences obtained from Twitter and blogs containing a total of 1,185,540 words. The second one is a collection of selected topical signals from text, collected using Empath (see Table 1).", "id": 1125, "question": "What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?", "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts" }, { "answers": [ "" ], "context": "With the complicated political and economic situations in many countries, some parties with specific agendas are publishing suspicious news to affect public opinion regarding specific issues BIBREF0. The spread of this phenomenon has increased recently with the heavy usage of social media and online news sources. Many anonymous accounts have started to appear on social media platforms, as well as new online news agencies that do not present a clear identity of their owners. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the 2016 U.S. presidential elections. The initial disclosures by Twitter included 3,841 accounts. A similar attempt was made by Facebook, which detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.", "id": 1126, "question": "What is the baseline?", "title": "An Emotional Analysis of False Information in Social Media and News Articles" }, { "answers": [ "" ], "context": "Trusted news recounts its content in a naturalistic way, without attempting to affect the opinion of the reader. On the other hand, false news takes advantage of the sensitivity of the presented issue to affect the readers' emotions, which in turn may affect their opinions as well. A number of works have previously investigated the language of false information. The authors in BIBREF3 studied rumours on Twitter. They investigated a corpus of true and false tweet rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies, while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition shows similarity with irony. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer:", "id": 1127, "question": "What datasets did they use?", "title": "An Emotional Analysis of False Information in Social Media and News Articles" }, { "answers": [ "" ], "context": "Knowledge bases (KBs), such as WordNet BIBREF0, YAGO BIBREF1, Freebase BIBREF2 and DBpedia BIBREF3, represent relationships between entities as triples $(\mathrm {head\ entity, relation, tail\ entity})$. Even very large knowledge bases are still far from complete BIBREF4, BIBREF5. Link prediction or knowledge base completion systems BIBREF6 predict which triples not in a knowledge base are likely to be true BIBREF7, BIBREF8.
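An aside on the Empath-based topical signals in entry 1125 above: a small sketch using the open-source Empath library's analyze call; the category list here is an assumption chosen for illustration, not the paper's selection from Table 1.

```python
# Hedged sketch: extract normalized topical signals with Empath.
# The chosen categories are illustrative, not the paper's feature set.
from empath import Empath

lexicon = Empath()
text = "shut up before i slap you, nobody wants your hate here"
signals = lexicon.analyze(text,
                          categories=["violence", "hate", "swearing_terms"],
                          normalize=True)
print(signals)  # e.g. {'violence': ..., 'hate': ..., 'swearing_terms': ...}
```

Such per-category scores can be concatenated with TF-IDF and sentiment features before the classification layer.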
A variety of different kinds of information is potentially useful here, including information extracted from external corpora BIBREF9, BIBREF10 and the other relationships that hold between the entities BIBREF11, BIBREF12. For example, toutanova-EtAl:2015:EMNLP used information from the external ClueWeb-12 corpus to significantly enhance performance.", "id": 1128, "question": "What scoring function does the model use to score triples?", "title": "STransE: a novel embedding model of entities and relationships in knowledge bases" }, { "answers": [ "WN18, FB15k" ], "context": "Let $\mathcal {E}$ denote the set of entities and $\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$, where $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$, the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\prime }}(h^{\prime },t^{\prime })$ of an implausible triple $(h^{\prime },r^{\prime },t^{\prime })$. We define the STransE score function $f_r(h, t)$ as follows:", "id": 1129, "question": "What datasets are used to evaluate the model?", "title": "STransE: a novel embedding model of entities and relationships in knowledge bases" }, { "answers": [ "" ], "context": "Background: PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method.", "id": 1130, "question": "How long did it take for each Doc2Vec model to be trained?", "title": "Doc2Vec on the PubMed corpus: study of a new approach to generate related articles" }, { "answers": [ "" ], "context": "PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related-articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (PubMed related article) model BIBREF1.", "id": 1131, "question": "How much better are results for the pmra algorithm than Doc2Vec in human evaluation?", "title": "Doc2Vec on the PubMed corpus: study of a new approach to generate related articles" }, { "answers": [ "" ], "context": "To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find document C relevant when reading document D is calculated. For this purpose, the authors introduced the concept of eliteness. Briefly, a topic $S_{i}$ is considered an elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with high frequency in that document.
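An aside on entry 1129 above, whose context breaks off at "as follows:". In the published STransE formulation the score combines two relation-specific projection matrices with a TransE-style translation, $f_r(h, t) = \Vert \mathbf{W}_{r,1}\mathbf{h} + \mathbf{r} - \mathbf{W}_{r,2}\mathbf{t} \Vert_{\ell_{1/2}}$; a numpy sketch under that reading, with toy dimensions:

```python
# Hedged sketch of the STransE score: lower scores mean more
# plausible triples. W1, W2 are relation-specific matrices and
# r is the relation's translation vector.
import numpy as np

def stranse_score(h, r, t, W1, W2, norm=1):
    # f_r(h, t) = || W_{r,1} h + r - W_{r,2} t ||_{l1 or l2}
    return np.linalg.norm(W1 @ h + r - W2 @ t, ord=norm)

d = 50
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))       # entity/relation embeddings
W1, W2 = rng.normal(size=(2, d, d))     # head/tail projection matrices
print(stranse_score(h, r, t, W1, W2))
```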
This approach brings closer together documents that share a maximum of elite topics. In the article presenting the pmra model, the authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed from the MeSH terms associated with both documents D and C. Such indexing is highly time-consuming and has to be performed manually.", "id": 1132, "question": "What Doc2Vec architectures other than PV-DBOW have been tried?", "title": "Doc2Vec on the PubMed corpus: study of a new approach to generate related articles" }, { "answers": [ "" ], "context": "Nowadays, embedding models allow a text to be represented as a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input to deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two document vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common to all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during training (word and document vectors share the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to PV-DM, the main difference being the goal of the final classifier. Instead of concatenating the document vector with word vectors, the goal here is to output words from this window using only the mathematical representation of the document.", "id": 1133, "question": "What four evaluation tasks are defined to determine what influences proximity?", "title": "Doc2Vec on the PubMed corpus: study of a new approach to generate related articles" }, { "answers": [ "" ], "context": "Doc2Vec has been used in many cases of similar document retrieval. In 2016, Lee et al. used D2V to cluster positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes, and sentence similarity scoring BIBREF5. Recently, studies have started to use document embeddings on the PubMed corpus. In 2017, Gargiulo et al. used a combination of word vectors coming from the abstract to bring closer similar documents from PubMed BIBREF6.
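An aside on the PV-DM/PV-DBOW description in entry 1133 above: a minimal gensim sketch of both architectures (dm=1 with dm_concat=1 gives the concatenating PV-DM, dm=0 gives PV-DBOW); the toy corpus stands in for PubMed abstracts, and the hyperparameters are illustrative.

```python
# Hedged sketch: train both Doc2Vec architectures and retrieve
# related documents by cosine similarity in the learned space.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

abstracts = ["malaria transmission in tropical regions",
             "deep learning for radiology images",
             "mosquito nets reduce malaria incidence"]
corpus = [TaggedDocument(words=a.split(), tags=[i])
          for i, a in enumerate(abstracts)]

pv_dbow = Doc2Vec(corpus, dm=0, vector_size=100, epochs=40, min_count=1)
pv_dm = Doc2Vec(corpus, dm=1, dm_concat=1, vector_size=100, epochs=40,
                min_count=1)

# Cosine similarity between document vectors estimates proximity.
print(pv_dbow.dv.most_similar(0))
```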
The same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their accuracy measurement task consisted of retrieving documents having a small cosine distance to the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study has so far been designed to compare document embeddings with the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to determine what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.", "id": 1134, "question": "What six parameters were optimized with grid search?", "title": "Doc2Vec on the PubMed corpus: study of a new approach to generate related articles" }, { "answers": [ "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)" ], "context": "Content: Task Definition", "id": 1135, "question": "What baseline models do they compare against?", "title": "Multi-Perspective Fusion Network for Commonsense Reading Comprehension" }, { "answers": [ "This approach considers related images" ], "context": "The Internet provides instant access to a wide variety of online content, news included. Formerly, users had static preferences, gravitating towards their trusted sources, incurring an unwavering sense of loyalty. The same cannot be said for current trends, since users are likely to go with any source readily available to them.", "id": 1136, "question": "What are the differences with previous applications of neural networks for this task?", "title": "Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks" }, { "answers": [ "It eliminates non-termination in some models, fixing a non-termination ratio of up to 6% for some models." ], "context": "Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10.
In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.", "id": 1137, "question": "How much improvement is gained from the proposed approaches?", "title": "Consistency of a Recurrent Language Model With Respect to Incomplete Decoding" }, { "answers": [ "" ], "context": "We begin our discussion by establishing background definitions. First, we define a sequence, which is the main object of our investigation.", "id": 1138, "question": "Is the problem of determining whether a given model would generate an infinite sequence a decidable problem?", "title": "Consistency of a Recurrent Language Model With Respect to Incomplete Decoding" }, { "answers": [ "There is a strong conjecture that it might be the reason, but it is not proven." ], "context": "A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.", "id": 1139, "question": "Is infinite-length sequence generation a result of training with maximum likelihood?", "title": "Consistency of a Recurrent Language Model With Respect to Incomplete Decoding" }, { "answers": [ "" ], "context": "When we pursue conversations, context is important for keeping the topic consistent or answering questions asked by others, since most new utterances are conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions; for instance, someone can change topics during a conversation and ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at the conversation context and can answer the question based only on the image information.", "id": 1140, "question": "What metrics are used in the challenge?", "title": "Modality-Balanced Models for Visual Dialogue" }, { "answers": [ "" ], "context": "Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence.
Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language), and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.", "id": 1141, "question": "What model was winner of the Visual Dialog challenge 2019?", "title": "Modality-Balanced Models for Visual Dialogue" }, { "answers": [ "" ], "context": "The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history, and this introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively, while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has rich information from the history, and we find that it is complementary to our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.", "id": 1142, "question": "What model was winner of the Visual Dialog challenge 2018?", "title": "Modality-Balanced Models for Visual Dialogue" }, { "answers": [ "" ], "context": "In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs, which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.", "id": 1143, "question": "Which method for integration performs better: ensemble or consensus dropout fusion with shared parameters?", "title": "Modality-Balanced Models for Visual Dialogue" }, { "answers": [ "133,287 images" ], "context": "Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16.
The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects ($k=36$ in our experiment) and $d_{v}$ is the dimension of the visual feature ($d_{v} = 2048$ for the ResNet backbone).", "id": 1144, "question": "How big is the dataset for this challenge?", "title": "Modality-Balanced Models for Visual Dialogue" }, { "answers": [ "" ], "context": "Semantic applications typically work on the basis of intermediate structures derived from sentences. Traditional word-level intermediate structures, such as POS-tags, dependency trees and semantic role labels, have been widely applied. Recently, entity and relation level intermediate structures have attracted increasing attention.", "id": 1145, "question": "What open relation extraction tasks did they experiment on?", "title": "Logician: A Unified End-to-End Neural Approach for Open-Domain Information Extraction" }, { "answers": [ "" ], "context": "When reading a sentence in natural language, humans are able to recognize the facts involved in the sentence and accurately express them. In this paper, Symbolic Aided Open Knowledge Expression (SAOKE) is proposed as the form for honestly recording these facts. SAOKE expresses the primary information of sentences in n-ary tuples $(subject,predicate,object_{1},\cdots ,object_{N})$, and (in this paper) neglects some auxiliary information. In the design of SAOKE, we take four requirements into consideration: completeness, accurateness, atomicity and compactness.", "id": 1146, "question": "How is Logician different from traditional seq2seq models?", "title": "Logician: A Unified End-to-End Neural Approach for Open-Domain Information Extraction" }, { "answers": [ "" ], "context": "After having analyzed a large number of sentences, we observe that the majority of facts can be classified into the following classes:", "id": 1147, "question": "What's the size of the previous largest OpenIE dataset?", "title": "Logician: A Unified End-to-End Neural Approach for Open-Domain Information Extraction" }, { "answers": [ "" ], "context": "Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.", "id": 1148, "question": "How is data for RTFM collected?", "title": "RTFM: Generalising to Novel Environment Dynamics via Reading" }, { "answers": [ "Proposed model achieves 66+-22 win rate, baseline CNN 13+-1 and baseline FiLM 32+-3." ], "context": "A growing body of research is learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics.
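An aside on the visual features in entry 1144 above and the MFB fusion mentioned in entry 1141: a compact PyTorch sketch of multi-modal factorized bilinear pooling under its usual formulation (project both inputs to $k \cdot o$ dimensions, multiply elementwise, sum-pool over the factor dimension $k$, then power- and l2-normalize); the hidden sizes are illustrative, not the paper's.

```python
# Hedged sketch of MFB fusion between a visual feature row and a
# question vector; dimensions loosely follow the V_rcnn spec above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFB(nn.Module):
    def __init__(self, dim_x, dim_y, k=5, o=1000):
        super().__init__()
        self.k, self.o = k, o
        self.proj_x = nn.Linear(dim_x, k * o)
        self.proj_y = nn.Linear(dim_y, k * o)

    def forward(self, x, y):
        z = self.proj_x(x) * self.proj_y(y)           # (batch, k*o)
        z = z.view(-1, self.k, self.o).sum(dim=1)     # sum-pool over k
        z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-12)  # power norm
        return F.normalize(z, dim=-1)                 # l2 norm

fuse = MFB(2048, 1024)          # d_v = 2048 object feature, 1024-d question
v = torch.randn(4, 2048)
q = torch.randn(4, 1024)
print(fuse(v, q).shape)         # torch.Size([4, 1000])
```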
Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.", "id": 1149, "question": "How much better is the performance of the proposed model compared to baselines?", "title": "RTFM: Generalising to Novel Environment Dynamics via Reading" }, { "answers": [ "" ], "context": "Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but also new environment dynamics.", "id": 1150, "question": "How does the proposed model capture three-way interactions?", "title": "RTFM: Generalising to Novel Environment Dynamics via Reading" }, { "answers": [ "" ], "context": "Transferring knowledge between related domains could help to improve a learner in a specific domain. Moreover, gathering data from different related resources proves very time-consuming and expensive. This necessitates the development of machine learning methods to extract knowledge from resources with different properties and characteristics. Transfer learning is a field in which machine learning methods encounter this challenge.", "id": 1151, "question": "Does transferring hurt the performance if the corpora are not related?", "title": "ISS-MULT: Intelligent Sample Selection for Multi-Task Learning in Question Answering" }, { "answers": [ "" ], "context": "Transfer learning has a long history, and many researchers have tried to utilize the knowledge from relevant datasets to improve performance on the target dataset [9, 7]. Moreover, transfer learning has been well-studied for deep learning methods in computer vision. Similarity between datasets is a key factor for performing transfer learning. Based on the similarity between two domains, different methods could be used. Bengio et al. [11] examine the ability of neural networks to transfer knowledge from different domains.", "id": 1152, "question": "Is accuracy the only metric they used to compare systems?", "title": "ISS-MULT: Intelligent Sample Selection for Multi-Task Learning in Question Answering" }, { "answers": [ "" ], "context": "Five different datasets (SQuAD, SelQA, WikiQA, WikiQA and InfoboxQA) are used for evaluation of the INIT, MULT and ISS-MULT methods. These datasets were proposed in recent years for question answering problems. These datasets are produced differently; therefore, they may not be semantically related, and this feature plays an important role in transfer learning in NLP tasks.", "id": 1153, "question": "How do they transfer the model?", "title": "ISS-MULT: Intelligent Sample Selection for Multi-Task Learning in Question Answering" }, { "answers": [ "" ], "context": "Question answering (QA) systems can provide most value for users by showing them a fine-grained short answer (answer span) in a context that supports the answer (paragraph in a document).
However, fine-grained short answer annotations for question answering are costly to obtain, whereas non-expert annotators can annotate coarse-grained passages or documents faster and with higher accuracy. In addition, coarse-grained annotations are often freely available from community forums such as Quora. Therefore, methods that can learn to select short answers based on more abundant coarsely annotated paragraph-level data can potentially bring significant improvements. As an example of the two types of annotation, Figure 1 shows on the left a question with a corresponding short answer annotation (underlined short answer) in a document, and on the right a question with a document annotated at the coarse-grained paragraph relevance level. In this work we study methods for learning short answer models from small amounts of data annotated at the short answer level and larger amounts of data annotated at the paragraph level. min-seo-hajishirzi:2017:Short recently studied a related problem of transferring knowledge from a fine-grained QA model to a coarse-grained model via multi-task learning and showed that finely annotated data can help improve performance on the coarse-grained task. We investigate the opposite and arguably much more challenging direction: improving fine-grained models using coarse-grained data.", "id": 1154, "question": "Will these findings be robust across different datasets and different question answering algorithms?", "title": "Improving Span-based Question Answering Systems with Coarsely Labeled Data" }, { "answers": [ "" ], "context": "The fine-grained short question answering task asks a system to select an answer span in a document containing multiple paragraphs. In the left example in Figure 1, the short answer to the question What was Nikola Tesla's ethnicity? is the phrase Serbian in the first paragraph of the document.", "id": 1155, "question": "What is the underlying question answering algorithm?", "title": "Improving Span-based Question Answering Systems with Coarsely Labeled Data" }, { "answers": [ "" ], "context": "We define the fine-grained task of interest $T_y$ as predicting outputs $y$ from a set of possible outputs ${\cal {Y}}(x)$ given inputs $x$. We say that a task $T_z$ to predict outputs $z$ given inputs $x$ is a coarse-grained counterpart of $T_y$, iff each coarse label $z$ determines a sub-set of possible labels ${\cal {Y}}(z,x) \subset {\cal {Y}}(x)$, and each fine label $y$ has a deterministically corresponding single coarse label $z(y)$. We refer to the fine-grained and coarse-grained training data as $D_y$ and $D_z$ respectively.", "id": 1156, "question": "What datasets have this method been evaluated on?", "title": "Improving Span-based Question Answering Systems with Coarsely Labeled Data" }, { "answers": [ "" ], "context": "Language plays a vital role in human life. A language is a structured system of communication BIBREF2. There are various language systems in the world, with the estimated number being between 5,000 and 7,000 BIBREF3. Natural Language Processing (NLP), which we commonly hear about, is a subfield of linguistics. NLP aims to provide interactions between computers and human languages. The performance of NLP is evaluated by how well computers can process and analyze large amounts of natural language data BIBREF4. In terms of language processing, we cannot but mention Computational Linguistics BIBREF5.
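An aside on the formalization in entry 1156 above: a toy illustration of how a coarse label $z$ (a relevant paragraph) restricts the fine label space ${\cal Y}(z,x) \subset {\cal Y}(x)$ (answer spans inside that paragraph) and maps back deterministically; the span enumeration is a simplification for illustration, not the paper's model.

```python
# Hedged sketch of the coarse-fine label relationship from entry 1156.
def fine_candidates(paragraphs, z, max_len=4):
    """All short spans inside paragraph z -- i.e., Y(z, x)."""
    words = paragraphs[z].split()
    return [(z, i, j) for i in range(len(words))
            for j in range(i + 1, min(i + 1 + max_len, len(words) + 1))]

def coarse_of(fine_label):
    """Deterministic map from a fine label to its coarse label z(y)."""
    paragraph_idx, _, _ = fine_label
    return paragraph_idx

doc = ["Tesla was a Serbian American inventor .",
       "He was born in 1856 ."]
spans = fine_candidates(doc, z=0)
assert all(coarse_of(s) == 0 for s in spans)
print(len(spans), "fine labels compatible with coarse label 0")
```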
Computational Linguistics is the scientific study of language from a computational perspective, and thus an interdisciplinary field, involving linguistics, computer science, mathematics, logic, cognitive science, and cognitive psychology.", "id": 1157, "question": "Is there a machine learning approach that tries to solve the same problem?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "Author's own DCG rules are defined from scratch." ], "context": "The main challenge of this work is how many cases it can handle. There is a wide variety of cases in terms of active and passive sentences. The cases solved in this work are shown as follows.", "id": 1158, "question": "What DCGs are used?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "" ], "context": "The objects of this work are sentences: the active sentence and the passive sentence, so I need to determine the representation of both.", "id": 1159, "question": "What else is tried to be solved other than 12 tenses, modal verbs and negative form?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "" ], "context": "The user interacts with the program by posing a query of the form (Figure FIGREF56):", "id": 1160, "question": "What is used for evaluation of this approach?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "" ], "context": "There are 12 tenses in English. Each tense has a specific sentence structure. If each tense were handled individually, the solution would be quite long and not optimal. Therefore, based on my observations, I found a solution which divides the 12 English tenses into 4 groups (same color means same group) based on the number of auxiliary verbs in the active sentence. This solution is summarized in Figure FIGREF72, consisting of:", "id": 1161, "question": "Is there information about performance of these conversion methods?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "" ], "context": "The three-steps conversion consists of three steps:", "id": 1162, "question": "Are there some experiments performed in the paper?", "title": "AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion" }, { "answers": [ "" ], "context": "Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN).", "id": 1163, "question": "How much is performance improved by disabling attention in certain heads?", "title": "Revealing the Dark Secrets of BERT" }, { "answers": [ "" ], "context": "There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement.
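An aside on the masked-LM agreement probe attributed to BIBREF5 in the entry above: a sketch of that style of probe, assuming the HuggingFace transformers library; the example sentence and the compared verb forms are illustrative.

```python
# Hedged sketch: does a masked LM score the correct verb form higher?
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "The keys to the cabinet [MASK] on the table."
inputs = tok(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

for verb in ("are", "is"):  # "are" agrees with the plural subject "keys"
    vid = tok.convert_tokens_to_ids(verb)
    print(verb, logits[vid].item())
```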
BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers.", "id": 1164, "question": "In which certain heads was attention disabled in experiments?", "title": "Revealing the Dark Secrets of BERT" }, { "answers": [ "" ], "context": "We pose the following research questions:", "id": 1165, "question": "What handcrafted features-of-interest are used?", "title": "Revealing the Dark Secrets of BERT" }, { "answers": [ "" ], "context": "In this section, we present the experiments conducted to address the above research questions.", "id": 1166, "question": "What subset of GLUE tasks is used?", "title": "Revealing the Dark Secrets of BERT" }, { "answers": [ "" ], "context": "Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applications BIBREF2, BIBREF3, BIBREF4, BIBREF5. On many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from the Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for sentiment classification.", "id": 1167, "question": "Do they predict the sentiment of the review summary?", "title": "Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis" }, { "answers": [ "2.7 accuracy points" ], "context": "The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11.", "id": 1168, "question": "What is the performance difference of using a generated summary vs. a user-written one?", "title": "Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis" }, { "answers": [ "" ], "context": "In this section, we introduce our proposed model in detail. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in detail, before finally giving the loss function and training methods.", "id": 1169, "question": "Which review dataset do they use?", "title": "Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis" }, { "answers": [ "accuracy with standard deviation" ], "context": "Researchers in Natural Language Processing (NLP) have long used corpora made up of encyclopedic documents (notably Wikipedia), journalistic documents (newspapers or magazines), or specialized documents (legal, scientific, or technical texts) for developing and testing their models BIBREF0, BIBREF1, BIBREF2.", "id": 1170, "question": "What evaluation metrics did they look at?", "title": "Generaci\'on autom\'atica de frases literarias en espa\~nol" }, { "answers": [ "" ], "context": "Text generation is a relatively classic task that has been studied in a variety of works. For example, BIBREF5 present a model based on Markov chains for text generation in Polish.
The authors define a set of current states and compute the probability of moving to the next state. Equation (1) computes the probability of moving to state $X_{i}$ from $X_{j}$,", "id": 1171, "question": "What datasets are used?", "title": "Generaci\'on autom\'atica de frases literarias en espa\~nol" }, { "answers": [ "Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexique (French), IIT-Guwahati (Manipuri), E-Hitz (Basque)" ], "context": "Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1.", "id": 1172, "question": "What are the datasets used for the task?", "title": "Language-Agnostic Syllabification with Neural Sequence Labeling" }, { "answers": [ "Authors report their best models have the following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahati (95.4%), E-Hitz (99.83%)" ], "context": "Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18.", "id": 1173, "question": "What is the accuracy of the model for the six languages tested?", "title": "Language-Agnostic Syllabification with Neural Sequence Labeling" }, { "answers": [ "CELEX (Dutch and English) - SVM-HMM\nFestival, E-Hitz and OpenLexique - Liang hyphenation\nIIT-Guwahati - Entropy CRF" ], "context": "Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell, which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both a past dependency and a future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backward directions provides improved context for assigning syllable boundaries.
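An aside on the Markov chain generator of entry 1171 above: a generic sketch (not the cited authors' implementation) where the transition probability is estimated from bigram counts as $P(X_i \mid X_j) = C(X_j, X_i) / \sum_k C(X_j, X_k)$:

```python
# Hedged sketch: a word-level Markov chain text generator.
import random
from collections import defaultdict, Counter

def fit(tokens):
    # counts[prev][next] = C(X_j, X_i); sampling proportionally to
    # these counts realizes P(X_i | X_j).
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    out, state = [start], start
    for _ in range(length - 1):
        nxt_counts = counts.get(state)
        if not nxt_counts:
            break
        words, weights = zip(*nxt_counts.items())
        state = random.choices(words, weights=weights)[0]
        out.append(state)
    return " ".join(out)

corpus = "the sea sings and the night listens to the sea".split()
print(generate(fit(corpus), "the"))
```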
A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23.", "id": 1174, "question": "Which models achieve state-of-the-art performances?", "title": "Language-Agnostic Syllabification with Neural Sequence Labeling" }, { "answers": [ "" ], "context": "Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence.", "id": 1175, "question": "Is the LSTM bidirectional?", "title": "Language-Agnostic Syllabification with Neural Sequence Labeling" }, { "answers": [ "" ], "context": "Neural machine translation (NMT) systems are conventionally trained by maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of training data as well as network capacity. Under conditions of lexical sparsity, which include cases where the amount of training examples is insufficient to observe words in different contexts, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, many of which are rarely, if ever, observed in any set of collected examples, the model may suffer in learning accurate representations of words. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words, which are, in principle, more reliable as they are observed more frequently in varying contexts BIBREF0, BIBREF1. One drawback related to this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion and the translation objective, which may result in morphological errors during splitting, yielding subword units that are semantically ambiguous as they might be used in far too many lexical contexts BIBREF2. Moreover, words are generated by predicting multiple subword units, which makes generalizing to unseen word forms more difficult, as some of the subword units that could be used to reconstruct a given word may be unlikely in the given context.
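An aside on the syllabification-as-sequence-labeling setup in entries 1173-1175 above: a minimal PyTorch sketch of a BiLSTM tagger over phone sequences; the layer sizes and the binary boundary/no-boundary label scheme are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch: BiLSTM sequence labeler for syllable boundaries.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Tags each phone with 1 (syllable boundary follows) or 0.
    def __init__(self, n_phones, emb=64, hidden=128, n_labels=2):
        super().__init__()
        self.emb = nn.Embedding(n_phones, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True,
                            bidirectional=True)  # forward + backward context
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, phone_ids):               # (batch, seq)
        h, _ = self.lstm(self.emb(phone_ids))   # (batch, seq, 2*hidden)
        return self.out(h)                      # per-step label logits

tagger = BiLSTMTagger(n_phones=50)
x = torch.randint(0, 50, (2, 7))   # a toy batch of phone-id sequences
print(tagger(x).shape)             # torch.Size([2, 7, 2])
```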
To alleviate the sub-optimal effects of using explicit segmentation and to generalize better to new morphological forms, recent studies explored the idea of extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, demonstrated the need for comparably deeper networks, as the network would then need to learn longer-distance grammatical dependencies BIBREF5.", "id": 1176, "question": "What are the three languages studied in the paper?", "title": "A Latent Morphology Model for Open-Vocabulary Neural Machine Translation" }, { "answers": [ "" ], "context": "Semantic parsing aims to solve the problem of canonicalizing language and representing its meaning: given an input sentence, it aims to extract a semantic representation of that sentence. Abstract meaning representation BIBREF0, or AMR for short, allows us to do that with the inclusion of most of the shallow-semantic natural language processing (NLP) tasks that are usually addressed separately, such as named entity recognition, semantic role labeling and co-reference resolution. AMR is partially motivated by the need to provide the NLP community with a single dataset that includes basic disambiguation information, instead of having to rely on different datasets for each disambiguation problem. The annotation process is straightforward, enabling the development of large datasets. Alternative semantic representations have been developed and studied, such as CCG BIBREF1, BIBREF2 and UCCA BIBREF3.", "id": 1177, "question": "Do they use pretrained models as part of their parser?", "title": "An Incremental Parser for Abstract Meaning Representation" }, { "answers": [ "" ], "context": "Similarly to dependency parsing, AMR parsing is partially based on the identification of predicate-argument structures. Much of the dependency parsing literature focuses on transition-based dependency parsing—an approach to parsing that scans the sentence from left to right in linear time and updates an intermediate structure that eventually ends up being a dependency tree.", "id": 1178, "question": "Which subtasks do they evaluate on?", "title": "An Incremental Parser for Abstract Meaning Representation" }, { "answers": [ "" ], "context": "In recent years, Deep Neural Networks (DNNs) have been successfully applied to Automatic Speech Recognition (ASR) for many well-resourced languages including Mandarin and English BIBREF0, BIBREF1. However, only a small portion of languages have clean labeled speech corpora. As a result, there is increasing interest in building speech recognition systems for low-resource languages. To address this issue, researchers have successfully exploited multilingual speech recognition models by taking advantage of labeled corpora in other languages BIBREF2, BIBREF3. Multilingual speech recognition enables acoustic models to share parameters across multiple languages; therefore, low-resource acoustic models can benefit from rich resources.", "id": 1179, "question": "Do they test their approach on large-resource tasks?", "title": "Multilingual Speech Recognition with Corpus Relatedness Sampling" }, { "answers": [ "" ], "context": "Multilingual speech recognition has explored various models to share parameters across languages in different ways.
For example, parameters can be shared by using posterior features from other languages BIBREF5, applying the same GMM components across different HMM states BIBREF6, training shared hidden layers in DNNs BIBREF2, BIBREF3 or LSTMs BIBREF4, or using language-independent bottleneck features BIBREF7, BIBREF8. Some models only share their hidden layers, but use separate output layers to predict their phones BIBREF2, BIBREF3. Other models have only one shared output layer to predict the universal phone set shared by all languages BIBREF9, BIBREF10, BIBREF11. While those works proposed multilingual models in different ways, few of them have explicitly exploited the relatedness across various languages and corpora. In contrast, our work computes the relatedness between different corpora using embedding representations and exploits them efficiently.", "id": 1180, "question": "By how much do they, on average, outperform the baseline multilingual model on 16 low-resource tasks?", "title": "Multilingual Speech Recognition with Corpus Relatedness Sampling" }, { "answers": [ "" ], "context": "In this section, we describe our approach to computing the corpus embedding and our Corpus Relatedness Sampling strategy.", "id": 1181, "question": "How do they compute corpus-level embeddings?", "title": "Multilingual Speech Recognition with Corpus Relatedness Sampling" }, { "answers": [ "" ], "context": "Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models.", "id": 1182, "question": "Which dataset do they use?", "title": "Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation" }, { "answers": [ "For relation prediction they test TransE and for relation extraction they test a position-aware neural sequence model" ], "context": "Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author.", "id": 1183, "question": "Which competitive relational classification models do they test?", "title": "Quantifying Similarity between Relations with Fact Distribution" }, { "answers": [ "" ], "context": "Just as introduced in the introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case in which we have a number of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts?
This section proposes a method to parameterize such a distribution.", "id": 1184, "question": "Which tasks do they apply their method to?", "title": "Quantifying Similarity between Relations with Fact Distribution" }, { "answers": [ "" ], "context": "A fact is a triple $(h, r, t)$, where $h$ and $t$ are called head and tail entities, $r$ is the relation connecting them, and $\mathcal {E}$ and $\mathcal {R}$ are the sets of entities and relations respectively. We consider a score function $f(h, r, t)$ that maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts. We use this score function to define the unnormalized probability.", "id": 1185, "question": "Which knowledge bases do they use?", "title": "Quantifying Similarity between Relations with Fact Distribution" }, { "answers": [ "By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4" ], "context": "Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in the local-normalization equation.", "id": 1186, "question": "How do they gather human judgements for similarity between relations?", "title": "Quantifying Similarity between Relations with Fact Distribution" }, { "answers": [ "" ], "context": "Now we discuss the method used to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters $\theta$.", "id": 1187, "question": "Which sampling method do they use to approximate similarity between the conditional probability distributions over entity pairs?", "title": "Quantifying Similarity between Relations with Fact Distribution" }, { "answers": [ "To classify a text as belonging to one of the ten possible classes." ], "context": "Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0, BIBREF1 and recurrent neural network (RNN) BIBREF2, BIBREF3 topologies have competed to provide state-of-the-art results for NLP tasks, ranging from text classification to reading comprehension. CNNs identify and aggregate patterns with increasing feature sizes, reflecting our common practice of identifying patterns, literal or idiomatic, for understanding language; they are thus adept at tasks involving key phrase identification. RNNs instead construct a representation of sentences by successively updating their understanding of the sentence as they read new words, appealing to the formally sequential and rule-based construction of language. While both networks display great efficacy at certain tasks BIBREF4, RNNs tend to be the more versatile, have emerged as the clear victor in, e.g., language translation BIBREF5, BIBREF6, BIBREF7, and are typically more capable of identifying important contextual points through attention mechanisms for, e.g., reading comprehension BIBREF8, BIBREF9, BIBREF10, BIBREF11. With an interest in NLP, we thus turn to RNNs.", "id": 1188, "question": "What text classification task is considered?", "title": "The emergent algebraic structure of RNNs and embeddings in NLP" }, { "answers": [ "A network, whose learned functions satisfy a certain equation.
The network contains RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state." ], "context": "We embedded words as vectors and used a uni-directional GRU connected to a dense layer to classify the account from which tweets may have originated. The embeddings and simple network were trained end-to-end to avoid imposing any artificial or heuristic constraints on the system.", "id": 1189, "question": "What novel class of recurrent-like networks is proposed?", "title": "The emergent algebraic structure of RNNs and embeddings in NLP" }, { "answers": [ "" ], "context": "We provide two points to motivate examining the potential algebraic properties of RNNs and their space of inputs in the context of NLP.", "id": 1190, "question": "Is there a formal proof that the RNNs form a representation of the group?", "title": "The emergent algebraic structure of RNNs and embeddings in NLP" }, { "answers": [ "Switchboard-2000 contains 1700 more hours of speech data." ], "context": "Powerful neural networks have enabled the use of “end-to-end” speech recognition models that directly map a sequence of acoustic features to a sequence of words without conditional independence assumptions. Typical examples are attention-based encoder-decoder BIBREF0 and recurrent neural network transducer models BIBREF1. Due to training on full sequences, an utterance corresponds to a single observation from the viewpoint of these models; thus, data sparsity is a general challenge for such approaches, and it is believed that these models are effective only when sufficient training data is available. Indeed, many end-to-end speech recognition papers focus on LibriSpeech, which has 960 hours of training audio. Nevertheless, the best performing systems follow the traditional hybrid approach BIBREF2, outperforming attention-based encoder-decoder models BIBREF3, BIBREF4, BIBREF5, BIBREF6, and when less training data is used, the gap between “end-to-end” and hybrid models is more prominent BIBREF3, BIBREF7. Several methods have been proposed to tackle data sparsity and overfitting problems; a detailed list can be found in Sec. SECREF2. Recently, increasingly complex attention mechanisms have been proposed to improve seq2seq model performance, including stacking self and regular attention layers and using multiple attention heads in the encoder and decoder BIBREF4, BIBREF8.", "id": 1191, "question": "How much bigger is Switchboard-2000 than Switchboard-300 database?", "title": "Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard-300" }, { "answers": [ "" ], "context": "In contrast to traditional hybrid models, where even recurrent networks are trained on randomized, aligned chunks of labels and features BIBREF10, BIBREF11, whole sequence models are more prone to memorizing the training samples. In order to improve generalization, many of the methods we investigate introduce additional noise, either directly or indirectly, to stochastic gradient descent (SGD) training to avoid narrow, local optima.
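An aside on entry 1189 above: a minimal PyTorch sketch of the embedding, uni-directional GRU, and dense-layer classifier trained end-to-end; the ten-class output follows the entry's answer ("one of the ten possible classes"), while the vocabulary and layer sizes are illustrative.

```python
# Hedged sketch: word embeddings -> uni-directional GRU -> dense layer
# over the final hidden state, classifying the originating account.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, vocab, emb=100, hidden=128, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):          # (batch, seq)
        _, h_n = self.gru(self.emb(token_ids))
        return self.fc(h_n[-1])            # logits over candidate accounts

model = GRUClassifier(vocab=20000)
tweets = torch.randint(0, 20000, (8, 30))  # toy batch of token ids
print(model(tweets).shape)                 # torch.Size([8, 10])
```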
The other techniques we study address the highly non-convex nature of training neural networks, ease the optimization process, and speed up convergence.", "id": 1192, "question": "How big is Switchboard-300 database?", "title": "Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard-300" }, { "answers": [ "" ], "context": "The Common Voice project is a response to the current state of affairs in speech technology, in which training data is either prohibitively expensive or unavailable for most languages BIBREF0. We believe that speech technology (like all technology) should be decentralized and open, and the Common Voice project achieves this goal via a mix of community building, open source tooling, and a permissive licensing scheme. The corpus is designed to organically scale to new languages as community members use the provided tools to translate the interface, submit text sentences, and finally record and validate voices in their new language . The project was started with an initial focus on English in July 2017 and then in June 2018 was made available for any language.", "id": 1193, "question": "What crowdsourcing platform is used for data collection and data validation?", "title": "Common Voice: A Massively-Multilingual Speech Corpus" }, { "answers": [ "" ], "context": "Some notable multilingual speech corpora include VoxForge BIBREF1, Babel BIBREF2, and M-AILABS BIBREF3. Even though the Babel corpus contains high-quality data from 22 minority languages, it is not released under an open license. VoxForge is most similar to Common Voice in that it is community-driven, multilingual (17 languages), and released under an open license (GNU General Public License). However, the VoxForge does not have a sustainable data collection pipeline compared to Common Voice, and there is no data validation step in place. M-AILABS data contains 9 language varieties with a modified BSD 3-Clause License, however there is no community-driven aspect. Common Voice is a sustainable, open alternative to these projects which allows for collection of minority and majority languages alike.", "id": 1194, "question": "How is validation of the data performed?", "title": "Common Voice: A Massively-Multilingual Speech Corpus" }, { "answers": [ "" ], "context": "The data presented in this paper was collected and validated via Mozilla's Common Voice initiative. Using either the Common Voice website or iPhone app, contributors record their voice by reading sentences displayed on the screen (see Figure (FIGREF5)). The recordings are later verified by other contributors using a simple voting system. Shown in Figure (FIGREF6), this validation interface has contributors mark $<$audio,transcript$>$ pairs as being either correct (up-vote) or incorrect (down-vote).", "id": 1195, "question": "Is audio data per language balanced in dataset?", "title": "Common Voice: A Massively-Multilingual Speech Corpus" }, { "answers": [ "" ], "context": "Text classification is a fundamental task in Natural Language processing which has been found useful in a wide spectrum of applications ranging from search engines enabling users to identify content on websites, sentiment and social media analysis, customer relationship management systems, and spam detection. Over the past several years, text classification has been predominantly modeled as a supervised learning problem (e.g., BIBREF0 , BIBREF1 , BIBREF2 ) for which appropriately labeled data must be collected. 
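As an aside, the supervised formulation referenced here can be made concrete in a few lines of scikit-learn; the toy texts and labels below are hypothetical stand-ins for a real labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data; real systems train on thousands of labeled documents.
texts = ["stocks rallied after strong earnings", "the senate passed the bill",
         "quarterly profits beat forecasts", "parliament debated the new law"]
labels = ["business", "politics", "business", "politics"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["new tax law boosts corporate profits"]))
```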
Such data is often domain-dependent (i.e., covering specific topics such as those relating to “Business” or “Medicine”) and a classifier trained using data from one domain is likely to perform poorly on another. For example, the phrase “the mouse died quickly” may indicate negative sentiment in a customer review describing the hand-held pointing device or positive sentiment when describing a laboratory experiment performed on a rodent. The ability to handle a wide variety of domains has become more pertinent with the rise of data-hungry machine learning techniques like neural networks and their application to a plethora of textual media ranging from news articles to twitter, blog posts, medical journals, Reddit comments, and parliamentary debates BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 .", "id": 1196, "question": "What is the performance of their model?", "title": "Weakly Supervised Domain Detection" }, { "answers": [ "" ], "context": "Our work lies at the intersection of multiple research areas, including domain adaptation, representation learning, multiple instance learning, and topic modeling. We review related work below.", "id": 1197, "question": "Which text genres did they experiment with?", "title": "Weakly Supervised Domain Detection" }, { "answers": [ "Answer with content missing: (Experimental setup not properly rendered) In our experiments we used seven target domains: “Business and Commerce” (BUS), “Government and Politics” (GOV), “Physical and Mental Health” (HEA), “Law and Order” (LAW),\n“Lifestyle” (LIF), “Military” (MIL), and “General Purpose” (GEN). Exceptionally, GEN does\nnot have a natural root category." ], "context": "We formulate domain detection as a multilabel learning problem. Our model is trained on samples of document-label pairs. Each document consists of INLINEFORM0 sentences INLINEFORM1 and is associated with discrete labels INLINEFORM2 . In this work, domain labels are not annotated manually but extrapolated from Wikipedia (see Section SECREF6 for details). In a non-MIL framework, a model typically learns to predict document labels by directly conditioning on its sentence representations INLINEFORM3 or their aggregate. In contrast, INLINEFORM4 under MIL is a learned function INLINEFORM5 of latent instance-level labels, i.e., INLINEFORM6 . A MIL classifier will therefore first produce domain scores for all instances (aka sentences), and then learn to integrate instance scores into a bag (aka document) prediction.", "id": 1198, "question": "What domains are detected in this paper?", "title": "Weakly Supervised Domain Detection" }, { "answers": [ "1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.\n2. Macro F1 = 14.6 (MLR, length 96 snippet)\nWeighted F1 = 31.1 (LSTM, length 128 snippet)" ], "context": "Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 
2017arXiv171207040K introduce NarrativeQA, where answering the questions requires processing entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’.", "id": 1199, "question": "Why do they think this task is hard? What is the baseline performance?", "title": "Harry Potter and the Action Prediction Challenge from Natural Language" }, { "answers": [ "" ], "context": "To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are, however, time-consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7, BIBREF13.", "id": 1200, "question": "Isn't simple word association enough to predict the next spell?", "title": "Harry Potter and the Action Prediction Challenge from Natural Language" }, { "answers": [ "" ], "context": "We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties for evaluating reading comprehension systems, which we now review.", "id": 1201, "question": "Do they literally just treat this as \"predict the next spell that appears in the text\"?", "title": "Harry Potter and the Action Prediction Challenge from Natural Language" }, { "answers": [ "" ], "context": "The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga makes it possible to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82,836 spell occurrences, which we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix \"Corpus distribution\"). Note that, similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset.", "id": 1202, "question": "How well does a simple bag-of-words baseline do?", "title": "Harry Potter and the Action Prediction Challenge from Natural Language" }, { "answers": [ "" ], "context": "There are several existing works that focus on modelling conversation using prior human-to-human conversational data BIBREF0, BIBREF1, BIBREF2. BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end-to-end manner BIBREF4, BIBREF5. Memory networks have been used by Bordes et al Bor16 to model goal-based dialog conversations. 
More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 .", "id": 1203, "question": "Do they study frequent user responses to help automate modelling of those?", "title": "Finding Dominant User Utterances And System Responses in Conversations" }, { "answers": [ "" ], "context": "The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering.", "id": 1204, "question": "How do they divide text into utterances?", "title": "Finding Dominant User Utterances And System Responses in Conversations" }, { "answers": [ "" ], "context": "In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 .", "id": 1205, "question": "Do they use the same distance metric for both the SimCluster and K-means algorithm?", "title": "Finding Dominant User Utterances And System Responses in Conversations" }, { "answers": [ "using generative process" ], "context": "We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together.", "id": 1206, "question": "How do they generate the synthetic dataset?", "title": "Finding Dominant User Utterances And System Responses in Conversations" }, { "answers": [ "" ], "context": "Web and social media have become primary sources of information. Users' expectations and information seeking activities co-evolve with the increasing sophistication of these resources. Beyond navigation, document retrieval, and simple factual question answering, users seek direct answers to complex and compositional questions. Such search sessions may require multiple iterations, critical assessment, and synthesis BIBREF0 .", "id": 1207, "question": "how are multiple answers from multiple reformulated questions aggregated?", "title": "Ask the Right Questions: Active Question Reformulation with Reinforcement Learning" }, { "answers": [ "Average claim length is 8.9 tokens." ], "context": "Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. 
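Stepping back to the SimCluster problem setting described above (paired utterance vectors clustered jointly in two domains), one simple way to realize the joint objective is to share cluster assignments across the two domains, as in the NumPy sketch below; this is an illustrative simplification, not the paper's exact algorithm.

```python
import numpy as np

def sim_cluster(X, Y, k, n_iter=20, seed=0):
    """Jointly cluster paired vectors (x_i, y_i): a shared assignment c_i
    ties cluster j in domain X to cluster j in domain Y, so the two sets
    of clusters stay aligned while within-domain variance is minimized."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)
    mu, nu = X[idx].copy(), Y[idx].copy()          # one centroid set per domain
    for _ in range(n_iter):
        # assignment step: minimize the summed squared distance in both domains
        d = (((X[:, None] - mu[None]) ** 2).sum(-1)
             + ((Y[:, None] - nu[None]) ** 2).sum(-1))
        c = d.argmin(axis=1)
        for j in range(k):                         # centroid update per domain
            if (c == j).any():
                mu[j] = X[c == j].mean(axis=0)
                nu[j] = Y[c == j].mean(axis=0)
    return c, mu, nu
```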
In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence.", "id": 1208, "question": "What is the average length of the claims?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "" ], "context": "In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use the following notation. Let $c$ indicate a target claim of interest (for example, the claims $c_1$ and $c_2$ in Figure FIGREF6). Each claim $c$ is addressed by a collection of perspectives $P(c)$ that are grouped into clusters of equivalent perspectives. Additionally, each perspective $p \in P(c)$ is supported, relative to $c$, by at least one evidence paragraph $e$.", "id": 1209, "question": "What debate websites did they look at?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "" ], "context": "In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilot studies.", "id": 1210, "question": "What crowdsourcing platform did they use?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "" ], "context": "We now provide a brief summary of PERSPECTRUM. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2, which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19.", "id": 1211, "question": "Which machine baselines are used?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "" ], "context": "We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspective pairs and annotated each with the abilities required in determining their stance labels. We follow the common definitions used in prior work BIBREF37, BIBREF38. The result of this annotation is depicted in Figure FIGREF24. As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. 
Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference.", "id": 1212, "question": "What challenges are highlighted?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "Ethics, Gender, Human rights, Sports, Freedom of Speech, Society, Religion, Philosophy, Health, Culture, World, Politics, Environment, Education, Digital Freedom, Economy, Science and Law" ], "context": "In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20), we make sure to have claims with the same topic fall into the same split.", "id": 1213, "question": "What debate topics are included in the dataset?", "title": "Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims" }, { "answers": [ "In terms of F1 score, the Hybrid approach improved by 23.47% and 1.39% on BiDAF and DCN respectively. The DCA approach improved by 23.2% and 1.12% on BiDAF and DCN respectively." ], "context": "Enabling machines to understand natural language is one of the key challenges in achieving artificially intelligent systems. Asking machines questions and getting meaningful answers adds value to us since it drastically automates knowledge acquisition efforts. Apple's Siri and Amazon's Echo are two such examples of mass-market products capable of machine comprehension that have led to a paradigm shift in how consumers interact with machines.", "id": 1214, "question": "By how much, the proposed method improves BiDAF and DCN on SQuAD dataset?", "title": "Pay More Attention - Neural Architectures for Question-Answering" }, { "answers": [ "" ], "context": "Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0.", "id": 1215, "question": "Do they report results only on English datasets?", "title": "Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue" }, { "answers": [ "Each class has different patterns in adjectives, adverbs and verbs for sarcastic and non-sarcastic classes" ], "context": "There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole:", "id": 1216, "question": "What are the linguistic differences between each class?", "title": "Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue" }, { "answers": [ "" ], "context": "We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13, a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. We set up the learner to extract patterns for both sarcastic and not-sarcastic utterances. 
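For intuition, the statistics such a weakly-supervised pattern learner relies on can be sketched with simple bigrams standing in for AutoSlog-TS's lexico-syntactic templates; the frequency cutoff below is an illustrative threshold.

```python
from collections import Counter

def rank_patterns(sarc_texts, notsarc_texts, min_freq=3):
    """Rank candidate patterns (bigrams here, as a stand-in for richer
    lexico-syntactic patterns) by P(sarcastic | pattern), keeping only
    patterns frequent enough for the estimate to be reliable."""
    def bigrams(texts):
        counts = Counter()
        for t in texts:
            toks = t.lower().split()
            counts.update(zip(toks, toks[1:]))
        return counts
    pos, neg = bigrams(sarc_texts), bigrams(notsarc_texts)
    scored = []
    for pat in set(pos) | set(neg):
        freq = pos[pat] + neg[pat]
        if freq >= min_freq:
            scored.append((pos[pat] / freq, freq, pat))
    return sorted(scored, reverse=True)  # highest-precision patterns first
```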
Our first discovery is that we can classify not-sarcastic posts with very high precision, ranging between 80-90%.", "id": 1217, "question": "What simple features are used?", "title": "Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue" }, { "answers": [ "" ], "context": "The goal of collecting additional corpora for rhetorical questions and hyperbole is to increase the diversity of the corpus, and to allow us to explore the semantic differences between sarcastic and not-sarcastic utterances when particular lexico-syntactic cues are held constant. We hypothesize that identifying surface-level cues that are instantiated in both sarcastic and not sarcastic posts will force learning models to find deeper semantic cues to distinguish between the classes.", "id": 1218, "question": "What lexico-syntactic cues are used to retrieve sarcastic utterances?", "title": "Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue" }, { "answers": [ "" ], "context": "Music is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years BIBREF0 since musical genres are the main top-level descriptors used by music dealers and librarians to organize their music collections BIBREF1.", "id": 1219, "question": "what is the source of the song lyrics?", "title": "Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network" }, { "answers": [ "" ], "context": "Several works have been carried out to add textual information to genre and mood classification. Fell and Sporleder BIBREF6 used several handcraft features, such as vocabulary, style, semantics, orientation towards the world, and song structure to obtain performance gains on three different classification tasks: detecting genre, distinguishing the best and the worst songs, and determining the approximate publication time of a song. The experiments in genre classification focused on eight genres: Blues, Rap, Metal, Folk, R&B, Reggae, Country, and Religious. Only lyrics in English were included and they used an SVM with the default settings for the classification.", "id": 1220, "question": "what genre was the most difficult to classify?", "title": "Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network" }, { "answers": [ "" ], "context": "In this chapter we present all the major steps we have taken, from obtaining the dataset to the proposed approach to address the automatic music genre classification problem.", "id": 1221, "question": "what word embedding techniques did they experiment with?", "title": "Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network" }, { "answers": [ "Gospel, Sertanejo, MPB, Forró, Pagode, Rock, Samba, Pop, Axé, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda" ], "context": "In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. 
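The next sentence notes that the crawler was built with Scrapy; a minimal spider in that spirit might look like the sketch below, where the URL patterns and CSS selectors are hypothetical rather than Vagalume's real markup.

```python
import scrapy

class LyricsSpider(scrapy.Spider):
    """Genre-by-genre lyrics crawler sketch; the entry page, link classes,
    and selectors below are assumptions, not the site's actual structure."""
    name = "lyrics"
    start_urls = ["https://www.vagalume.com.br/browse/style/"]  # assumed entry page

    def parse(self, response):
        for genre_link in response.css("a.genre::attr(href)").getall():
            yield response.follow(genre_link, self.parse_genre)

    def parse_genre(self, response):
        for song_link in response.css("a.song::attr(href)").getall():
            yield response.follow(song_link, self.parse_song)

    def parse_song(self, response):
        yield {
            "genre": response.css("span.genre::text").get(),
            "title": response.css("h1::text").get(),
            "lyrics": " ".join(response.css("div.lyrics ::text").getall()),
        }
```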
The crawler was implemented using Scrapy, an open-source and collaborative Python framework for extracting data from websites.", "id": 1222, "question": "what genres do the songs fall under?", "title": "Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network" }, { "answers": [ "" ], "context": "Text document classification is an important task for diverse natural language processing based applications. Traditional machine learning approaches mainly focused on reducing the dimensionality of textual data to perform classification. Although this improved the overall classification accuracy, the classifiers still faced a sparsity problem due to the lack of better data representation techniques. Deep learning based text document classification, on the other hand, benefitted greatly from the invention of word embeddings that have solved the sparsity problem, and researchers' focus mainly remained on the development of deep architectures. Deeper architectures, however, learn some redundant features that limit the performance of deep learning based solutions. In this paper, we propose a two-stage text document classification methodology which combines traditional feature engineering with automatic feature engineering (using deep learning). The proposed methodology comprises a filter based feature selection (FSE) algorithm followed by a deep convolutional neural network. This methodology is evaluated on the two most commonly used public datasets, i.e., 20 Newsgroups data and BBC news data. Evaluation results reveal that the proposed methodology outperforms the state-of-the-art of both the (traditional) machine learning and deep learning based text document classification methodologies with a significant margin of 7.7% on 20 Newsgroups and 6.6% on BBC news datasets.", "id": 1223, "question": "Is the filter based feature selection (FSE) a form of regularization?", "title": "A Robust Hybrid Approach for Textual Document Classification" }, { "answers": [ "LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL" ], "context": "Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be \"correct\" (coherent and relevant), but also needs to be diverse and informative. Yet seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as \"I don't know.\". These responses unnecessarily make a dialogue system much less interactive than it should be.", "id": 1224, "question": "To what other competitive baselines is this approach compared?", "title": "AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses" }, { "answers": [ "Through Amazon MTurk annotators to determine plausibility and content richness of the response" ], "context": "By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. 
Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average out all the output probability distributions for each time step of the decoder for the whole mini-batch. The resulting vector $D^{\\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a statistics of frequency on ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of what bias the model has learned because by generating dull responses more frequently than the training corpus has, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation on which words are more likely to be generated from model's perspective.", "id": 1225, "question": "How is human evaluation performed, what was the criteria?", "title": "AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses" }, { "answers": [ "on diversity 6.87 and on relevance 4.6 points higher" ], "context": "AvgOut can play at least three roles. First, it can be used to directly supervise output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut.", "id": 1226, "question": "How much better were results of the proposed models than base LSTM-RNN model?", "title": "AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses" }, { "answers": [ "the hybrid model MinAvgOut + RL" ], "context": "Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the \"dullness\" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by:", "id": 1227, "question": "Which one of the four proposed models performed best?", "title": "AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses" }, { "answers": [ "" ], "context": "Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. 
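Returning briefly to the AvgOut metric and the MinAvgOut dot-product loss described earlier in this section, a minimal PyTorch sketch follows; the tensor shapes and the exponential-average update of the running vector $D$ are illustrative assumptions.

```python
import torch

def avgout_penalty(decoder_probs, D, alpha=1.0):
    """decoder_probs: (batch, time, vocab) softmax outputs of the decoder.
    D: (vocab,) running AvgOut vector of the tokens the model favors.
    Returns alpha * (D . D'), the Continuous-AvgOut diversity penalty."""
    D_prime = decoder_probs.mean(dim=(0, 1))  # batch- and time-averaged distribution
    B_c = torch.dot(D, D_prime)               # dull tokens carry the largest weights
    return alpha * B_c, D_prime.detach()

# usage sketch (the EMA update of D is an assumption, not the paper's exact rule):
# penalty, D_new = avgout_penalty(probs, D)
# total_loss = ce_loss + penalty
# D = 0.9 * D + 0.1 * D_new
```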
It is attracting increasing attention from researchers, and many approaches have been proposed in this area BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7.", "id": 1228, "question": "What metrics are used to measure performance of models?", "title": "Generative Dialog Policy for Task-oriented Dialog Systems" }, { "answers": [ "most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729)\nGDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896)\nGDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)" ], "context": "Usually, existing task-oriented dialogue systems use a pipeline of four separate modules: natural language understanding, dialogue belief tracker, dialogue policy and natural language generator. Among these four modules, the dialogue policy maker plays a key role in task-oriented dialogue systems, as it generates the next dialogue action.", "id": 1229, "question": "How much is proposed model better than baselines in performed experiments?", "title": "Generative Dialog Policy for Task-oriented Dialog Systems" }, { "answers": [ "" ], "context": "The Seq2Seq model was first introduced by BIBREF15 for statistical machine translation. It uses two recurrent neural networks (RNNs) to solve the sequence-to-sequence mapping problem. One, called the encoder, encodes the user utterance into a dense vector representing its semantics; the other, called the decoder, decodes this vector into the target sentence. The Seq2Seq framework has already been used in task-oriented dialog systems such as BIBREF4 and BIBREF1, and has shown competitive performance. In the Seq2Seq model, given the user utterance $Q=(x_1, x_2, ..., x_n)$, the encoder squeezes it into a context vector $C$, which is then used by the decoder to generate the response $R=(y_1, y_2, ..., y_m)$ word by word by maximizing the generation probability of $R$ conditioned on $Q$. The objective function of Seq2Seq can be written as $p(R \mid Q) = \prod _{t=1}^{m} p(y_t \mid y_1, \ldots , y_{t-1}, C)$.", "id": 1230, "question": "What are state-of-the-art baselines?", "title": "Generative Dialog Policy for Task-oriented Dialog Systems" }, { "answers": [ "" ], "context": "Attention mechanisms BIBREF17 have proved effective at improving generation quality for the Seq2Seq framework. In Seq2Seq with attention, each $y_i$ corresponds to a context vector $C_i$ which is calculated dynamically. It is a weighted average of all hidden states of the encoder RNN. Formally, $C_i$ is defined as $C_i=\sum _{j=1}^{n} \alpha _{ij}h_j$, where $\alpha _{ij}$ is given by a softmax over alignment scores: $\alpha _{ij} = \frac{\exp (e_{ij})}{\sum _{k=1}^{n}\exp (e_{ik})}$, with $e_{ij}$ scoring how well the decoder state at step $i$ matches the encoder hidden state $h_j$.", "id": 1231, "question": "What two benchmark datasets are used?", "title": "Generative Dialog Policy for Task-oriented Dialog Systems" }, { "answers": [ "" ], "context": "Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. 
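The attention computation described just above ($C_i=\sum _{j} \alpha _{ij}h_j$ with softmax weights) can be written compactly in PyTorch; the additive (Bahdanau-style) scoring used for $e_{ij}$ below is one common choice, assumed here for illustration.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Computes C_i = sum_j alpha_ij * h_j, with alpha = softmax(e) and
    e_ij scored by a small MLP over decoder state s and encoder states H."""
    def __init__(self, dim):
        super().__init__()
        self.W_s = nn.Linear(dim, dim, bias=False)
        self.W_h = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, s, H):  # s: (batch, dim), H: (batch, n, dim)
        e = self.v(torch.tanh(self.W_s(s).unsqueeze(1) + self.W_h(H)))  # (batch, n, 1)
        alpha = torch.softmax(e, dim=1)                   # attention weights alpha_ij
        return (alpha * H).sum(dim=1), alpha.squeeze(-1)  # context vector C_i
```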
All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$.", "id": 1232, "question": "What languages are the model evaluated on?", "title": "Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding" }, { "answers": [ "" ], "context": "HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence).", "id": 1233, "question": "Do they compare to other models appart from HAN?", "title": "Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding" }, { "answers": [ "" ], "context": "In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context.", "id": 1234, "question": "What are the datasets used", "title": "Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding" }, { "answers": [ "" ], "context": "The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . 
According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 .", "id": 1235, "question": "Do they evaluate only on English datasets?", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM" }, { "answers": [ "" ], "context": "Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 .", "id": 1236, "question": "What evidence does visualizing the attention give to show that it helps to obtain a more robust understanding of semantics and sentiments?", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM" }, { "answers": [ "" ], "context": "With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . 
However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks an insightful understanding of the complex semantics of emojis. Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28, BIBREF16, BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. BIBREF8 proposes a convolutional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out URLs, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence.", "id": 1237, "question": "Which SOTA models are outperformed?", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM" }, { "answers": [ "" ], "context": "We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19, respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks.", "id": 1238, "question": "What is the baseline for experiments?", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM" }, { "answers": [ "" ], "context": "Recent research shows great success in word embedding tasks such as word2vec and fasttext BIBREF27, BIBREF16. We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is that, different from conventional approaches where each emoji corresponds to one embedding vector (which we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other is for the emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17; details will be discussed in Section SECREF23); the same fasttext training process is then used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji.", "id": 1239, "question": "What is the motivation for training bi-sense embeddings?", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM" }, { "answers": [ "" ], "context": "There has been a recent surge of improvements in language modeling, powered by the introduction of the transformer architecture BIBREF0. These gains stem from the ability of the transformer self-attention mechanism to better model long context (as compared to RNN networks), spanning hundreds of characters BIBREF1 or words BIBREF2, BIBREF3. 
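Before continuing, here is a minimal sketch of the bi-sense emoji tokenization just described, using VADER for the initial text sentiment and gensim's FastText for training; the emoji inventory, example tweets, and hyperparameters are illustrative.

```python
from gensim.models import FastText
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

EMOJIS = {"😂", "😭"}  # assumed emoji inventory
sia = SentimentIntensityAnalyzer()

def bi_sense_tokens(tweet_tokens):
    """Replace each emoji with a positive- or negative-sense token,
    chosen by the VADER polarity of the surrounding tweet text."""
    text = " ".join(t for t in tweet_tokens if t not in EMOJIS)
    sense = "POS" if sia.polarity_scores(text)["compound"] >= 0 else "NEG"
    return [f"{t}_{sense}" if t in EMOJIS else t for t in tweet_tokens]

corpus = [bi_sense_tokens(t.split()) for t in ["great day 😂", "so sad 😭"]]
model = FastText(corpus, vector_size=100, window=5, min_count=1)  # one vector per sense token
```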
These approaches consider language modeling as a classification problem with the aim of predicting the next token given a fixed-size preceding context. To support variable-length context, BIBREF4 adds recurrence to a transformer model, improving the state-of-the-art further.", "id": 1240, "question": "How many parameters does the model have?", "title": "Bridging the Gap for Tokenizer-Free Language Models" }, { "answers": [ "" ], "context": "Language models (LMs) assign a probability distribution over a sequence $x_{0:t}$ by factoring out the joint probability from left to right as follows: $P(x_{0:t}) = P(x_0) \prod _{i=1}^{t} P(x_i \mid x_{0:i-1})$.", "id": 1241, "question": "How many characters are accepted as input of the language model?", "title": "Bridging the Gap for Tokenizer-Free Language Models" }, { "answers": [ "" ], "context": "From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Following the advance of the World Wide Web and the advent of concepts such as social networks, Big Data, and cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5.", "id": 1242, "question": "What dataset is used for this task?", "title": "Features in Extractive Supervised Single-document Summarization: Case of Persian News" }, { "answers": [ "" ], "context": "Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs. multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs. extractive BIBREF18.", "id": 1243, "question": "What features of the document are integrated into vectors of every sentence?", "title": "Features in Extractive Supervised Single-document Summarization: Case of Persian News" }, { "answers": [ "ROUGE-1 increases by 0.05, ROUGE-2 by 0.06 and ROUGE-L by 0.09" ], "context": "As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although the stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation.", "id": 1244, "question": "By how much is precision increased?", "title": "Features in Extractive Supervised Single-document Summarization: Case of Persian News" }, { "answers": [ "" ], "context": "The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. 
The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next.", "id": 1245, "question": "Is new approach tested against state of the art?", "title": "Features in Extractive Supervised Single-document Summarization: Case of Persian News" }, { "answers": [ "" ], "context": "In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Facebook, the scientific community has access to a massive amount of data to study the daily worries and stresses of people across the world.", "id": 1246, "question": "Is the dataset balanced across categories?", "title": "Dreaddit: A Reddit Dataset for Stress Analysis in Social Media" }, { "answers": [ "" ], "context": "Because of the subjective nature of stress, relevant research tends to focus on physical signals, such as cortisol levels in saliva BIBREF2, electroencephalogram (EEG) readings BIBREF3, or speech data BIBREF4. This work captures important aspects of the human reaction to stress, but has the disadvantage that hardware or physical presence is required. However, because of the aforementioned proliferation of stress on social media, we believe that stress can be observed and studied purely from text.", "id": 1247, "question": "What supervised methods are used?", "title": "Dreaddit: A Reddit Dataset for Stress Analysis in Social Media" }, { "answers": [ "binary label of stress or not stress" ], "context": "Reddit is a social media website where users post in topic-specific communities called subreddits, and other users comment and vote on these posts. The lengthy nature of these posts makes Reddit an ideal source of data for studying the nuances of phenomena like stress. To collect expressions of stress, we select categories of subreddits where members are likely to discuss stressful topics:", "id": 1248, "question": "What labels are in the dataset?", "title": "Dreaddit: A Reddit Dataset for Stress Analysis in Social Media" }, { "answers": [ "" ], "context": "We annotate a subset of the data using Amazon Mechanical Turk in order to begin exploring the characteristics of stress. We partition the posts into contiguous five-sentence chunks for labeling; we wish to annotate segments of the posts because we are ultimately interested in what parts of the post depict stress, but we find through manual inspection that some amount of context is important. Our posts, however, are quite long, and it would be difficult for annotators to read and annotate entire posts. 
This type of data will allow us in the future not only to classify the presence of stress, but also to locate its expressions in the text, even if they are diffused throughout the post.", "id": 1249, "question": "What categories does the dataset come from?", "title": "Dreaddit: A Reddit Dataset for Stress Analysis in Social Media" }, { "answers": [ "" ], "context": "In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations would have a comprehensive IT support team to handle engagement and requests with employees on a 24$\\times $7 basis. As any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agent would spend time on these repetitive task such as entering information to an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc.", "id": 1250, "question": "What are all machine learning approaches compared in this work?", "title": "Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach" }, { "answers": [ "" ], "context": "The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems.", "id": 1251, "question": "Do they evaluate only on English datasets?", "title": "\"How May I Help You?\": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts" }, { "answers": [ "" ], "context": "Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources.", "id": 1252, "question": "Which patterns and rules are derived?", "title": "\"How May I Help You?\": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts" }, { "answers": [ "By annotators on Amazon Mechanical Turk." ], "context": "The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. 
We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 .", "id": 1253, "question": "How are customer satisfaction, customer frustration and overall problem resolution data collected?", "title": "\"How May I Help You?\": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts" }, { "answers": [ "" ], "context": "As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%).", "id": 1254, "question": "Which Twitter customer service industries are investigated?", "title": "\"How May I Help You?\": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts" }, { "answers": [ "" ], "context": "Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations.", "id": 1255, "question": "Which dialogue acts are more suited to the twitter domain?", "title": "\"How May I Help You?\": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts" }, { "answers": [ "one" ], "context": "Given the data-driven nature of neural machine translation (NMT), the limited source-to-target bilingual sentence pairs have been one of the major obstacles in building competitive NMT systems. Recently, pseudo parallel data, which refer to the synthetic bilingual sentence pairs automatically generated by existing translation models, have reported promising results with regard to the data scarcity in NMT. Many studies have found that the pseudo parallel data combined with the real bilingual parallel corpus significantly enhance the quality of NMT models BIBREF0 , BIBREF1 , BIBREF2 . 
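To make the idea of pseudo parallel data concrete, the sketch below builds synthetic pairs by translating target-side monolingual sentences with an existing reverse-direction model and mixes them with the real bitext; `reverse_model.translate` is a placeholder for whatever decoding API such a model exposes, not a specific library call.

```python
def build_pseudo_parallel(target_mono, reverse_model, real_pairs):
    """Back-translation-style data synthesis: translate target-side
    monolingual sentences into the source language with an existing
    target->source model, then train the forward model on the union."""
    synthetic = [(reverse_model.translate(t), t) for t in target_mono]
    return real_pairs + synthetic

# In practice the synthetic portion is often sub-sampled or tagged so the
# forward model can weight real and synthetic pairs differently.
```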
In addition, synthesized parallel data have played vital roles in many NMT problems such as domain adaptation BIBREF0 , zero-resource NMT BIBREF3 , and the rare word problem BIBREF4 .", "id": 1256, "question": "How many improvements on the French-German translation benchmark?", "title": "Building a Neural Machine Translation System Using Only Synthetic Parallel Data" }, { "answers": [ "" ], "context": "Given a source sentence $x = (x_1, \\ldots , x_m)$ and its corresponding target sentence $y= (y_1, \\ldots , y_n)$ , the NMT aims to model the conditional probability $p(y|x)$ with a single large neural network. To parameterize the conditional distribution, recent studies on NMT employ the encoder-decoder architecture BIBREF7 , BIBREF8 , BIBREF9 . Thereafter, the attention mechanism BIBREF10 , BIBREF11 has been introduced and successfully addressed the quality degradation of NMT when dealing with long input sentences BIBREF12 .", "id": 1257, "question": "How do they align the synthetic data?", "title": "Building a Neural Machine Translation System Using Only Synthetic Parallel Data" }, { "answers": [ "" ], "context": "In statistical machine translation (SMT), synthetic bilingual data have been primarily proposed as a means to exploit monolingual corpora. By applying a self-training scheme, the pseudo parallel data were obtained by automatically translating the source-side monolingual corpora BIBREF13 , BIBREF14 . In a similar but reverse way, the target-side monolingual corpora were also employed to build the synthetic parallel data BIBREF15 , BIBREF16 . The primary goal of these works was to adapt trained SMT models to other domains using relatively abundant in-domain monolingual data.", "id": 1258, "question": "Where do they collect the synthetic data?", "title": "Building a Neural Machine Translation System Using Only Synthetic Parallel Data" }, { "answers": [ "" ], "context": "The analysis of social media content to understand online human behavior has gained significant importance in recent years BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, a major limitation of the design of such analysis is that it often fails to account for content created by bots, which can significantly influence the messaging in social media. A social bot is an autonomous entity on social media that is typically engineered to pass as a human, often with the intent to manipulate online discourse BIBREF4 . Recent studies have shown that a significant majority of the social media content is generated by bots. For example, a six-week study by the Pew Research Center found that around two-thirds of all tweets with URL links were posted by likely bots BIBREF5 . As a result, the presence of bots can negatively impact the results of social media analysis and misinform our understanding of how humans interact within the online social space. In particular, any social media analysis that doesn't take into account the impact of bots is incomplete. While some bots can be beneficial (e.g., customer service chatbots), the focus in this work is on content-polluter bots that mimic human behavior online to spread falsified information BIBREF6 , create a false sense of public support BIBREF7 , and proliferate dangerous ideologies BIBREF8 , BIBREF9 .", "id": 1259, "question": "Do they analyze what type of content Arabic bots spread in comparison to English?", "title": "Hateful People or Hateful Bots? 
Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media" }, { "answers": [ "" ], "context": "In this section, we first discuss the main challenges encountered in analyzing Arabic language and social media content in general. We then survey prior research on online hate speech and bot detection and analysis.", "id": 1260, "question": "Do they propose a new model to better detect Arabic bots specifically?", "title": "Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media" }, { "answers": [ "They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN)." ], "context": "With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the complexity and ambiguity of human language, previous systems have included semantic decoding BIBREF0 to project natural language input into pre-defined dialogue states. These states are typically represented by slots and values: slots indicate the category of information and values specify the content of information. For instance, the user utterance “can you help me find the address of any hotel in the south side of the city” can be decoded as $inform(area, south)$ and $request(address)$, meaning that the user has specified the value south for slot area and requested another slot address.", "id": 1261, "question": "How do they prevent the model complexity increasing with the increased number of slots?", "title": "SIM: A Slot-Independent Neural Model for Dialogue State Tracking" }, { "answers": [ "" ], "context": "As outlined in BIBREF9, the dialogue state tracking task is formulated as follows: at each turn of dialogue, the user's utterance is semantically decoded into a set of slot-value pairs. There are two types of slots. Goal slots indicate the category, e.g. area, food, and the values specify the constraint given by users for the category, e.g. South, Mediterranean. Request slots refer to requests, and the value is the category that the user demands, e.g. phone, area. Each user's turn is thus decoded into turn goals and turn requests. Furthermore, to summarize the user's goals so far, the union of all previous turn goals up to the current turn is defined as joint goals.", "id": 1262, "question": "What network architecture do they use for SIM?", "title": "SIM: A Slot-Independent Neural Model for Dialogue State Tracking" }, { "answers": [ "By the number of parameters." ], "context": "To predict whether a slot-value pair should be included in the turn goals/requests, previous models BIBREF0, BIBREF5 usually define network components for each slot $s\\in S$. 
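The slot–value formalism in the dialogue state tracking contexts above (ids 1261–1263) reduces to a small data-manipulation pattern. A toy Python sketch in which the utterance, slot names, and update logic are all invented for illustration:

```python
# Minimal illustration of a slot-value dialogue state update
# (utterance, slots, and update logic are invented for this example).

def update_state(state, turn_goals, turn_requests):
    """Merge one turn's decoded acts into the joint goals/requests."""
    state["goals"].update(turn_goals)          # e.g. {"area": "south"}
    state["requests"].update(turn_requests)    # e.g. {"address"}
    return state

state = {"goals": {}, "requests": set()}
# "can you help me find the address of any hotel in the south side of the
# city" decoded as inform(area, south) and request(address):
state = update_state(state, {"area": "south"}, {"address"})
print(state)  # {'goals': {'area': 'south'}, 'requests': {'address'}}
```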
This can be cumbersome when the ontology is large, and it suffers from the insufficient data problem: the labelled data for a single slot may not suffice to effectively train the parameters of the slot-specific neural network structure.", "id": 1263, "question": "How do they measure model size?", "title": "SIM: A Slot-Independent Neural Model for Dialogue State Tracking" }, { "answers": [ "" ], "context": "In the past few years, models employing self-attention BIBREF0 have achieved state-of-the-art results for many tasks, such as machine translation, language modeling, and language understanding BIBREF0, BIBREF1. In particular, large Transformer-based language models have brought gains in speech recognition tasks when used for second-pass re-scoring and in first-pass shallow fusion BIBREF2. As typically used in sequence-to-sequence transduction tasks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, Transformer-based models attend over encoder features using decoder features, implying that the decoding has to be done in a label-synchronous way, thereby posing a challenge for streaming speech recognition applications. An additional challenge for streaming speech recognition with these models is that the number of computations for self-attention increases quadratically with input sequence size. For streaming to be computationally practical, it is highly desirable that the time it takes to process each frame remains constant relative to the length of the input. Transformer-based alternatives to RNNs have recently been explored for use in ASR BIBREF8, BIBREF9, BIBREF10, BIBREF11.", "id": 1264, "question": "Does the model use pretrained Transformer encoders?", "title": "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" }, { "answers": [ "" ], "context": "In this paper, we present all experimental results with the RNN-T loss BIBREF13 for consistency, which performs similarly to the monotonic RNN-T loss BIBREF19 in our experiments.", "id": 1265, "question": "What was previous state of the art model?", "title": "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" }, { "answers": [ "" ], "context": "The Transformer BIBREF0 is composed of a stack of multiple identical layers. Each layer has two sub-layers, a multi-headed attention layer and a feed-forward layer. Our multi-headed attention layer first applies $\\mathrm {LayerNorm}$, then projects the input to $\\mathrm {Query}$, $\\mathrm {Key}$, and $\\mathrm {Value}$ for all the heads BIBREF1. The attention mechanism is applied separately for different attention heads. The attention mechanism provides a flexible way to control the context that the model uses. For example, we can mask the attention scores so that each frame attends only to frames to its left, producing output conditioned only on the previous state history. The weight-averaged $\\mathrm {Value}$s for all heads are concatenated and passed to a dense layer. We then employ a residual connection on the normalized input and the output of the dense layer to form the final output of the multi-headed attention sub-layer (i.e. $\\mathrm {LayerNorm}(x) + \\mathrm {AttentionLayer}(\\mathrm {LayerNorm}(x))$, where $x$ is the input to the multi-headed attention sub-layer). We also apply dropout on the output of the dense layer to prevent overfitting. Our feed-forward sub-layer applies $\\mathrm {LayerNorm}$ on the input first, then applies two dense layers. 
We use $\\mathrm {ReLU}$ as the activation for the first dense layer. Dropout is again applied to both dense layers for regularization, and a residual connection combines the normalized input with the output of the second dense layer (i.e. $\\mathrm {LayerNorm}(x) + \\mathrm {FeedForwardLayer}(\\mathrm {LayerNorm}(x))$, where $x$ is the input to the feed-forward sub-layer). See Figure FIGREF10 for more details.", "id": 1266, "question": "What was previous state of the art accuracy on LibriSpeech benchmark?", "title": "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" }, { "answers": [ "" ], "context": "We evaluated the proposed model using the publicly available LibriSpeech ASR corpus BIBREF23. The LibriSpeech dataset consists of 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text-only dataset. The paired audio/transcript dataset was used to train T-T models and an LSTM-based baseline. The full 810M word token text dataset was used for standalone language model (LM) training. We extracted 128-channel logmel energy values from a 32 ms window, stacked every 4 frames, and sub-sampled every 3 frames, to produce a 512-dimensional acoustic feature vector with a stride of 30 ms. Feature augmentation BIBREF22 was applied during model training to prevent overfitting and to improve generalization, with only frequency masking ($\\mathrm {F}=50$, $\\mathrm {mF}=2$) and time masking ($\\mathrm {T}=30$, $\\mathrm {mT}=10$).", "id": 1267, "question": "How big is LibriSpeech dataset?", "title": "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" }, { "answers": [ "" ], "context": "Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.", "id": 1268, "question": "Which language(s) do they work with?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "" ], "context": "The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. 
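The pre-LayerNorm residual sub-layers described in the context for id 1266 above map directly onto a few lines of array code. A minimal single-head NumPy sketch (dimensions and initialization are illustrative assumptions; dropout and multi-head splitting are omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

rng = np.random.default_rng(0)
d = 16                                   # model width (illustrative)
Wq, Wk, Wv, Wo = (rng.normal(0, d**-0.5, (d, d)) for _ in range(4))
W1 = rng.normal(0, d**-0.5, (d, 4 * d))
W2 = rng.normal(0, (4 * d)**-0.5, (4 * d, d))

def attention_sublayer(x, causal=True):
    h = layer_norm(x)                    # pre-LN: normalize first
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(d)
    if causal:                           # each frame attends only to history
        scores += np.triu(np.full(scores.shape, -1e9), k=1)
    return x + softmax(scores) @ v @ Wo  # residual connection

def feedforward_sublayer(x):
    h = layer_norm(x)
    return x + np.maximum(h @ W1, 0.0) @ W2  # ReLU between two dense layers

x = rng.normal(size=(5, d))              # 5 input frames
y = feedforward_sublayer(attention_sublayer(x))
print(y.shape)  # (5, 16)
```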
Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.", "id": 1269, "question": "How do they evaluate their sentence representations?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs\n- RNN" ], "context": "Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs, $P(y|x)$. The input $x$ and output $y$ are sequences $(x_1, \\ldots , x_m)$ and $(y_1, \\ldots , y_n)$. The encoder produces a fixed length vector representation $h$ of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: $P(y|x) = \\prod _{t=1}^{n} P(y_t|y_{<t}, h)$.", "id": 1270, "question": "Which model architecture do they use for sentence encoding?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "" ], "context": " BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary $h_x$ which is then used to condition a task-specific GRU to produce the output sentence.", "id": 1271, "question": "How many tokens can sentences in their model at most contain?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "" ], "context": "Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.", "id": 1272, "question": "Which training objectives do they combine?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "- En-Fr (WMT14)\n- En-De (WMT15)\n- Skipthought (BookCorpus)\n- AllNLI (SNLI + MultiNLI)\n- Parsing (PTB + 1-billion word)" ], "context": "Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? 
If not, what training ratios do we use?", "id": 1273, "question": "Which data sources do they use?", "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning" }, { "answers": [ "" ], "context": "Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summits and multilateral negotiations. Different from consecutive translation, in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-medial (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5.", "id": 1274, "question": "Has there been previous work on SNMT?", "title": "How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation?" }, { "answers": [ "" ], "context": "Given a set of source–target sentence pairs $\\left\\langle \\mathbf {x}_m,\\mathbf {y}^*_m\\right\\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context:", "id": 1275, "question": "Which languages do they experiment on?", "title": "How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation?" }, { "answers": [ "" ], "context": "In SNMT, we receive streaming input tokens, and learn to translate them in real time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step.", "id": 1276, "question": "What corpora are used?", "title": "How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation?" }, { "answers": [ "" ], "context": "Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support tool for breast cancer patients, BIBREF8 , as well as for generating awareness, BIBREF9 . Successful supportive organizations use social media sites for patient interaction, public education, and donor outreach, BIBREF10 . The advantages, limitations, and future potential of using social media in healthcare have been thoroughly reviewed, BIBREF11 . Our study aims to investigate tweets mentioning “breast” and “cancer\" to analyze patient populations and selectively obtain content relevant to patient treatment experiences.", "id": 1277, "question": "Do the authors report results only on English datasets?", "title": "A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter" }, { "answers": [ "By using keywords `breast' AND `cancer' in the tweet collecting process. \n" ], "context": " Twitter provides a free streaming Application Programming Interface (API), BIBREF12 , for researchers and developers to mine samples of public tweets. 
Language processing and data mining, BIBREF13 , were conducted using the Python programming language. The free public API allows targeted keyword mining of up to 1% of Twitter's full volume at any given time, referred to as the `Spritzer Feed'.", "id": 1278, "question": "How were breast cancer related posts compiled from the Twitter streaming API?", "title": "A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter" }, { "answers": [ "ML logistic regression classifier combined with a Convolutional Neural Network (CNN) to identify self-reported diagnostic tweets.\nNLP methods: tweet conversion to numeric word vector, removing tweets containing hyperlinks, removing \"retweets\", removing all tweets containing horoscope indicators, lowercasing and removing punctuation." ], "context": " We evaluated tweet sentiments with hedonometrics, BIBREF21 , BIBREF22 , using LabMT, a labeled set of 10,000 frequently occurring words rated on a `happiness' scale by individuals contracted through Amazon Mechanical Turk, a crowd-sourced survey tool. These happiness scores helped quantify the average emotional rating of text by totaling the scores from applicable words and normalizing by their total frequency. Hence, the average happiness score, $h_{avg}$, of a corpus with $N$ words in common with LabMT was computed with the weighted arithmetic mean of each word's frequency, $f_i$, and associated happiness score, $h_i$: $h_{avg} = \\frac{\\sum _{i=1}^{N} h_i f_i}{\\sum _{i=1}^{N} f_i}$.", "id": 1279, "question": "What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?", "title": "A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter" }, { "answers": [ "" ], "context": "The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation).", "id": 1280, "question": "What kind of events do they extract?", "title": "Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction" }, { "answers": [ "" ], "context": "In this section we briefly summarize the existing work on event extraction and temporal relation extraction. 
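The LabMT weighted-mean computation in the context for id 1279 can be sketched in a few lines; the miniature lexicon and scores below are invented stand-ins for the real 10,000-word LabMT set:

```python
# Weighted-average happiness over words shared with a LabMT-style lexicon
# (this tiny lexicon and its scores are invented for illustration).
labmt = {"love": 8.42, "cancer": 1.54, "hope": 7.38, "the": 4.98}

def average_happiness(text):
    words = text.lower().split()
    freqs = {w: words.count(w) for w in set(words) if w in labmt}
    total = sum(freqs.values())          # normalize by total frequency
    return sum(labmt[w] * f for w, f in freqs.items()) / total

print(round(average_happiness("hope and love beat cancer"), 2))  # 5.78
```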
To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.", "id": 1281, "question": "Is this the first paper to propose a joint model for event and temporal relation extraction?", "title": "Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction" }, { "answers": [ "" ], "context": "In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\\mathcal {R}$, all event candidates (both events and non-events) as $\\mathcal {E}$, and all relation candidates as $\\mathcal {E}\\mathcal {E}$.", "id": 1282, "question": "What datasets were used for this work?", "title": "Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction" }, { "answers": [ "" ], "context": "The people of the world speak about 6,900 different languages. Open-source off-the-shelf natural language processing (NLP) toolboxes like OpenNLP and CoreNLP cover only 6–7 languages, and we have sufficient labeled training data for inducing models for about 20–30 languages. In other words, supervised sequence learning algorithms are only sufficient to induce POS models for a small minority of the world's languages.", "id": 1283, "question": "What languages did they experiment with?", "title": "Empirical Gaussian priors for cross-lingual transfer learning" }, { "answers": [ "PPL: SVT\nDiversity: GVT\nEmbeddings Similarity: SVT\nHuman Evaluation: SVT" ], "context": "Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the computational temporal dependency during training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, Transformers apply a full attention strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. This acts as an effectively global receptive field across the whole sequence, which RNNs lack. Despite the powerful modeling capability of Transformers, they often fail to model the one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure\"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables in the hidden states of the RNN decoder. 
However, the inherently sequential computation of the aforementioned models limits the efficiency of large-scale training.", "id": 1284, "question": "What approach performs better in experiments: global latent or sequence of fine-grained latent variables?", "title": "Variational Transformers for Diverse Response Generation" }, { "answers": [ "" ], "context": "Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry\" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation.", "id": 1285, "question": "What baselines other than standard transformers are used in experiments?", "title": "Variational Transformers for Diverse Response Generation" }, { "answers": [ "" ], "context": "Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into the CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which are enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer.", "id": 1286, "question": "What three conversational datasets are used for evaluation?", "title": "Variational Transformers for Diverse Response Generation" }, { "answers": [ "" ], "context": "Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.", "id": 1287, "question": "What previous approaches did this method outperform?", "title": "Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER" }, { "answers": [ "" ], "context": "As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with the one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with a recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. 
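All of the latent-variable response generators surveyed above (ids 1284–1286) share the reparameterized sampling step at their core. A generic NumPy sketch of that step, with placeholder recognition-network outputs rather than any specific paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Placeholder recognition-network outputs for one query-response pair:
mu, log_var = np.zeros(8), np.zeros(8)   # 8-dim latent (illustrative)
z = sample_latent(mu, log_var)           # one draw -> one candidate response
print(z.shape)  # (8,)
```

Each fresh draw of $z$ yields a different decoded response, which is exactly how these models escape the deterministic one-response-per-query behavior criticized above.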
Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.", "id": 1288, "question": "How big is the Universal Dependencies corpus?", "title": "Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER" }, { "answers": [ "" ], "context": "The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes are presented in Table TABREF7.", "id": 1289, "question": "What data is the Prague Dependency Treebank built on?", "title": "Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER" }, { "answers": [ "" ], "context": "The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of the PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.", "id": 1290, "question": "What data is used to build the embeddings?", "title": "Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER" }, { "answers": [ "" ], "context": "The spread of influenza is a major health concern. Without appropriate preventative measures, this can escalate to an epidemic, causing high levels of mortality. A potential route to early detection is to analyse statements on social media platforms to identify individuals who have reported experiencing symptoms of the illness. These numbers can be used as a proxy to monitor the spread of the virus.", "id": 1291, "question": "How big is the dataset used for fine-tuning the model for detection of red-flag medical symptoms in individual statements?", "title": "Language Transfer for Early Warning of Epidemics from Social Media" }, { "answers": [ "" ], "context": "Previously, authors have created multilingual models which should allow transfer between languages by aligning models BIBREF0 or embedding spaces BIBREF1, BIBREF2. 
An alternative is translation of a high-resource language into the target low-resource language; for instance, BIBREF3 combined translation with subsequent selective correction by active learning of uncertain words and phrases believed to describe entities, to create a labelled dataset for named entity recognition.", "id": 1292, "question": "Is there any explanation why some choice of language pair is better than the other?", "title": "Language Transfer for Early Warning of Epidemics from Social Media" }, { "answers": [ "" ], "context": "It has become clear over the last year that pretraining sentence encoder neural networks on unsupervised tasks, such as language modeling, then fine-tuning them on individual target tasks, can yield significantly better target task performance than could be achieved using target task training data alone BIBREF1 , BIBREF0 , BIBREF2 . Large-scale unsupervised pretraining in experiments like these seems to produce pretrained sentence encoders with substantial knowledge of the target language (which, so far, is generally English). These works have shown that a mostly task-agnostic, one-size-fits-all approach to fine-tuning a large pretrained model with a thin output layer for a given task can achieve results superior to individually optimized models.", "id": 1293, "question": "Is the new model evaluated on the tasks that BERT and ELMo are evaluated on?", "title": "Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks" }, { "answers": [ "" ], "context": " BIBREF8 compare several pretraining tasks for syntactic target tasks, and find that language model pretraining reliably performs well. BIBREF9 investigate the architectural choices behind ELMo-style pretraining with a fixed encoder, and find that the precise choice of encoder architecture strongly influences training speed, but has a relatively small impact on performance. In a publicly-available ICLR 2019 submission, BIBREF10 compare a variety of tasks for pretraining in an ELMo-style setting with no encoder fine-tuning. They conclude that language modeling generally works best among candidate single tasks for pretraining, but show some cases in which a cascade of a model pretrained on language modeling followed by another model pretrained on tasks like MNLI can work well. The paper introducing BERT BIBREF2 briefly mentions encouraging results in a direction similar to ours: One footnote notes that unpublished experiments show “substantial improvements on RTE from multi-task training with MNLI”.", "id": 1294, "question": "Does the additional training on supervised tasks hurt performance in some tasks?", "title": "Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks" }, { "answers": [ "Attention-based translation model with convolution sequence to sequence model" ], "context": "Named entity recognition (NER) is a sequence tagging task that extracts continuous tokens into specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural networks (LSTM RNNs) and a subsequent conditional random field (CRF) to predict the sequence labels BIBREF0 . The performance of neural NER methods is compromised if the training data are insufficient BIBREF1 . This problem is severe for many languages due to a lack of labeled datasets, e.g., German and Spanish. 
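One standard recipe for the embedding-space alignment mentioned in the context for id 1292 is the orthogonal Procrustes solution. The sketch below assumes a seed dictionary of paired vectors and is not necessarily the exact method of the cited works:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimizing ||X W - Y||_F, where row i of X (source
    embedding) is paired with row i of Y (target embedding)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy dictionary of 100 paired 50-dim word vectors (random here):
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 50))
R = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # hidden rotation
X = Y @ R.T                                     # source = rotated target
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))         # True: rotation recovered
```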
In comparison, NER on English is well developed and there exist abundant labeled data for training purposes. Therefore, in this work, we regard English as a high-resource language, while regarding other languages, even Chinese, as low-resource languages.", "id": 1295, "question": "Which translation system do they use to translate to English?", "title": "Back Attention Knowledge Transfer for Low-resource Named Entity Recognition" }, { "answers": [ "" ], "context": "In this section, we will introduce the BAN in three parts. Our model is based on the mainstream NER model BIBREF5 , using BiLSTM-CRF as the basic network structure. Given a sentence $x = (x_1, \\ldots , x_n)$ and corresponding labels $y = (y_1, \\ldots , y_n)$, where $x_i$ denotes the $i$th token and $y_i$ denotes the $i$th label, the NER task is to estimate the probability $P(y|x)$. Figure FIGREF1 shows the main architecture of our model.", "id": 1296, "question": "Which languages do they work with?", "title": "Back Attention Knowledge Transfer for Low-resource Named Entity Recognition" }, { "answers": [ "Bidirectional LSTM based NER model of Flair" ], "context": "Attention-based translation model We use the system of BIBREF6 , a convolutional sequence to sequence model. It divides the translation process into two steps. First, in the encoder step, given an input sentence $x = (x_1, \\ldots , x_m)$ of length $m$, each word $x_j$ is represented as a word embedding $w_j$. After that, we obtain the absolute positions of the input elements $p = (p_1, \\ldots , p_m)$. Both vectors are concatenated to get the input sentence representations $e = (e_1, \\ldots , e_m)$. Similarly, output elements $g = (g_1, \\ldots , g_n)$ generated from the decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, an attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state $h_i^l$ with the embedding of the previous decoder output value $g_i$: $d_i^l = W_d^l h_i^l + b_d^l + g_i$.", "id": 1297, "question": "Which pre-trained English NER model do they use?", "title": "Back Attention Knowledge Transfer for Low-resource Named Entity Recognition" }, { "answers": [ "" ], "context": "It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in the real world due to the vast language diversity and the requirement of extensive manual annotations on which the ASR algorithms are typically built. A series of research efforts have thus far focused on guiding the ASR of a target language by using supervised data from multiple languages.", "id": 1298, "question": "How much training data is required for each low-resource language?", "title": "Multilingual Graphemic Hybrid ASR with Massive Data Augmentation" }, { "answers": [ "" ], "context": "In this section we first briefly describe our deployed ASR architecture based on the weighted finite-state transducers (WFSTs) outlined in BIBREF26. Then we present its extension to multilingual training. Lastly, we discuss its language-independent decoding and language-specific decoding.", "id": 1299, "question": "What are the best within-language data augmentation methods?", "title": "Multilingual Graphemic Hybrid ASR with Massive Data Augmentation" }, { "answers": [ "Little overlap except the common basic Latin alphabet and that the Hindi and Marathi languages use the same script." 
], "context": "In the ASR framework of a hybrid BLSTM-HMM, the decoding graph can be interpreted as a composed WFST of cascade $H \\circ C \\circ L \\circ G$. Acoustic models, i.e. BLSTMs, produce acoustic scores over context-dependent HMM (i.e. triphone) states. A WFST $H$, which represents the HMM set, maps the triphone states to context-dependent phones.", "id": 1300, "question": "How much of the ASR grapheme set is shared between languages?", "title": "Multilingual Graphemic Hybrid ASR with Massive Data Augmentation" }, { "answers": [ "" ], "context": "In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society.", "id": 1301, "question": "What is the performance of the model for the German sub-task A?", "title": "HateMonitors: Language Agnostic Abuse Detection in Social Media" }, { "answers": [ "" ], "context": "Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 uses predefined language element and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum.", "id": 1302, "question": "Is the model tested for language identification?", "title": "HateMonitors: Language Agnostic Abuse Detection in Social Media" }, { "answers": [ "" ], "context": "The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages.", "id": 1303, "question": "Is the model compared to a baseline model?", "title": "HateMonitors: Language Agnostic Abuse Detection in Social Media" }, { "answers": [ "Hindi, English and German (German task won)" ], "context": "We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced.", "id": 1304, "question": "What are the languages used to test the model?", "title": "HateMonitors: Language Agnostic Abuse Detection in Social Media" }, { "answers": [ "thai" ], "context": "In this paper we discuss online handwriting recognition: Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. 
A stroke is a sequence of points $(p_1, \\ldots , p_n)$, each with a position $(x_i, y_i)$ and a timestamp $t_i$.", "id": 1305, "question": "Which language has the lowest error rate reduction?", "title": "Fast Multi-language LSTM-based Online Handwriting Recognition" }, { "answers": [ "" ], "context": "Our handwriting recognition model draws its inspiration from research aimed at building end-to-end transcription models in the context of handwriting recognition BIBREF24 , optical character recognition BIBREF21 , and acoustic modeling in speech recognition BIBREF18 . The model architecture is constructed from common neural network blocks, i.e. bidirectional LSTMs and fully-connected layers (Figure FIGREF12 ). It is trained in an end-to-end manner using the CTC loss BIBREF24 .", "id": 1306, "question": "What datasets did they use?", "title": "Fast Multi-language LSTM-based Online Handwriting Recognition" }, { "answers": [ "" ], "context": "There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?", "id": 1307, "question": "Do they report results only on English data?", "title": "BERT has a Moral Compass: Improvements of ethical and moral values of machines" }, { "answers": [ "" ], "context": "In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.", "id": 1308, "question": "What is the Moral Choice Machine?", "title": "BERT has a Moral Compass: Improvements of ethical and moral values of machines" }, { "answers": [ "Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)\nBias is calculated as the subtraction of cosine similarities between the question and two opposite answers." ], "context": "Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.", "id": 1309, "question": "How is moral bias measured?", "title": "BERT has a Moral Compass: Improvements of ethical and moral values of machines" }, { "answers": [ "" ], "context": "As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method for extracting moral biases. However, as BIBREF13 (BIBREF13) showed, one is even able to adapt the model's bias, e.g. debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.", "id": 1310, "question": "What sentence embeddings were used in the previous Jentzsch paper?", "title": "BERT has a Moral Compass: Improvements of ethical and moral values of machines" }, { "answers": [ "" ], "context": "This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. 
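The ink representation defined in the context for id 1305 is just nested sequences of positions and timestamps; a tiny sketch with invented coordinates:

```python
# An ink = list of strokes; a stroke = sequence of (x, y, t) points
# (the coordinates below are invented for illustration).
ink = [
    [(0.0, 0.0, 0.00), (0.1, 0.2, 0.02), (0.2, 0.5, 0.04)],  # stroke 1
    [(0.4, 0.0, 0.30), (0.4, 0.5, 0.33)],                    # stroke 2
]

def duration(ink):
    """Elapsed time from the first to the last touch point."""
    times = [t for stroke in ink for (_, _, t) in stroke]
    return max(times) - min(times)

print(f"{len(ink)} strokes, {duration(ink):.2f}s")  # 2 strokes, 0.33s
```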
Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.", "id": 1311, "question": "How do the authors define deontological ethical reasoning?", "title": "BERT has a Moral Compass: Improvements of ethical and moral values of machines" }, { "answers": [ "" ], "context": "Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, neural dialogue generation depends heavily on the training data for the quality of its generated responses. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.", "id": 1312, "question": "How does the framework automatically choose different curricula during the evolving learning process according to the learning status of the neural dialogue generation model?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "Intuitively, a well-organized curriculum should provide the model with easy dialogues first, and then gradually increase the curriculum difficulty. However, there is currently no unified approach to dialogue complexity evaluation, since complexity involves multiple attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.", "id": 1313, "question": "What human judgement metrics are used?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):", "id": 1314, "question": "What automatic evaluation metrics are used?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. 
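The NIDF specificity measure in the context for id 1314 is truncated before its formula; one plausible reading, assumed here to be a min-max-normalized IDF (an assumption, not the paper's verified definition):

```python
import math

# Toy document collection (invented) to illustrate normalized IDF.
docs = [{"i", "like", "dogs"}, {"i", "like", "tea"}, {"rare", "word"}]
vocab = set().union(*docs)

idf = {w: math.log(len(docs) / sum(w in d for d in docs)) for w in vocab}
lo, hi = min(idf.values()), max(idf.values())

def nidf(word):
    """Min-max normalized IDF in [0, 1]; higher = more specific word."""
    return (idf[word] - lo) / (hi - lo)

print(nidf("i"), nidf("rare"))  # 0.0 (generic) 1.0 (most specific)
```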
We measure the repetitiveness of a response $r$ as:", "id": 1315, "question": "What state of the art models were used in experiments?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarity between the query and its corresponding response in the embedding space: $\\textit {cos\\_sim}(\\textit {sent\\_emb}(c), \\textit {sent\\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\\textit {sent\\_emb}(e)=\\frac{1}{|e|}\\sum _{w\\in {}e}\\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.", "id": 1316, "question": "What five dialogue attributes were analyzed?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarity between them.", "id": 1317, "question": "What three publicly available corpora are used?", "title": "Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation" }, { "answers": [ "" ], "context": "One of the recent challenges in machine learning (ML) is interpreting the predictions made by models, especially deep neural networks. Understanding models is not only beneficial, but necessary for wide-spread adoption of more complex (and potentially more accurate) ML models. From healthcare to financial domains, regulatory agencies mandate entities to provide explanations for their decisions BIBREF0 . Hence, most machine learning progress made in those areas is hindered by a lack of model explainability – causing practitioners to resort to simpler, potentially low-performance models. To meet this demand, there have been many attempts at model interpretation in recent years for tree-based algorithms BIBREF1 and deep learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . 
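The query-relatedness computation in the context for id 1316 is fully specified, so it can be sketched directly; the random embeddings and uniform unigram probabilities below stand in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = "i like to paint what kind of things do you".split()
emb = {w: rng.normal(size=16) for w in vocab}  # placeholder embeddings
p = {w: 0.01 for w in vocab}                   # placeholder unigram probs

def sent_emb(sentence):
    """Average word embedding weighted by smooth inverse frequency."""
    words = [w for w in sentence.split() if w in emb]
    weights = [0.001 / (0.001 + p[w]) for w in words]
    return sum(a * emb[w] for a, w in zip(weights, words)) / len(words)

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(cos_sim(sent_emb("i like to paint"),
                    sent_emb("what kind of things do you paint")), 3))
```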
On the other hand, the amount of research focusing on explainable natural language processing (NLP) models BIBREF8 , BIBREF9 , BIBREF10 is modest compared to image explanation techniques.", "id": 1318, "question": "Which datasets do they use?", "title": "Incorporating Priors with Feature Attribution on Text Classification" }, { "answers": [ "word error rate" ], "context": "End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chaining of sub-module probabilities. State-of-the-art, or near state-of-the-art results have been reported for these models on challenging tasks BIBREF2, BIBREF3.", "id": 1319, "question": "What metrics are used for evaluation?", "title": "A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition" }, { "answers": [ "163,110,000 utterances" ], "context": "Generative models and Bayes' rule. The Noisy Channel Model underlying the origins of statistical ASR BIBREF12 used Bayes' rule to combine generative models of both the acoustics $p(X|W)$ and the symbol sequence $P(W)$:", "id": 1320, "question": "How much training data is used?", "title": "A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition" }, { "answers": [ "" ], "context": "The model makes the following assumptions:", "id": 1321, "question": "How is the training data collected?", "title": "A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition" }, { "answers": [ "" ], "context": "The RNN Transducer (RNN-T) BIBREF1 defines a sequence-level posterior $P(W|X)$ for a given acoustic feature vector sequence $X = {\\mbox{\\bf x}}_1, ..., {\\mbox{\\bf x}}_T$ and a given word or sub-word sequence $W = s_1, ..., s_U$ in terms of possible alignments $S_W = \\lbrace ..., ({\\bf s}, {\\bf t}), ... \\rbrace $ of $W$ to $X$. The tuple $({\\bf s}, {\\bf t})$ denotes a specific alignment sequence, a symbol sequence and corresponding sequence of time indices, consistent with the sequence $W$ and utterance $X$. The symbols in ${\\bf s}$ are elements of an expanded symbol space that includes optional, repeatable blank symbols used to represent acoustics-only path extensions, where the time index is incremented, but no non-blank symbols are added. Conversely, non-blank symbols are only added to a partial path time-synchronously. (I.e., using $i$ to index elements of ${\\bf s}$ and ${\\bf t}$, $t_{i+1} = t_i + 1$ if $s_{i+1}$ is blank, and $t_{i + 1} = t_i$ if $s_{i+1}$ is non-blank). $P(W|X)$ is defined by summing over alignment posteriors:", "id": 1322, "question": "What language(s) is the model trained/tested on?", "title": "A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition" }, { "answers": [ "" ], "context": "Conversational interactions between humans and Artificial Intelligence (AI) agents could amount to as many as thousands of interactions a day given recent developments BIBREF0. This surge in human-AI interactions has led to an interest in developing more fluid interactions between agent and human. Applied to dialogue systems, the term `fluidity' tries to measure how humanlike communication between a human and an AI entity is. 
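Reading the density-ratio idea from the title for ids 1319–1322 together with the Bayes'-rule motivation above, hypothesis rescoring plausibly combines the end-to-end posterior with a target-domain LM while subtracting a source-domain LM. A sketch under that assumption, with invented scores:

```python
# Density-ratio rescoring of n-best hypotheses (all log scores invented):
# fused = log P(W|X) + lam * (log P_target(W) - log P_source(W))

def fuse(am_lp, tgt_lm_lp, src_lm_lp, lam=0.5):
    return am_lp + lam * (tgt_lm_lp - src_lm_lp)

nbest = {                              # hyp: (E2E, target LM, source LM)
    "play the song": (-4.1, -6.0, -5.0),
    "play the sung": (-4.0, -11.0, -7.5),
}
best = max(nbest, key=lambda h: fuse(*nbest[h]))
print(best)  # "play the song": LM evidence overrides the near-tie
```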
Conversational fluidity has historically been measured using metrics such as perplexity, recall, and F1-scores. However, these metrics have various drawbacks. During the automatic evaluation stage of the second Conversational Intelligence Challenge (ConvAI2) BIBREF1 competition, it was noted that consistently replying with “I am you to do and your is like” would outperform the F1-score of all the models in the competition. This nonsensical phrase was constructed simply by picking several frequent words from the training set. Also, Precision at K, or the more specific Hits@1 metric, has been used historically in assessing retrieval-based aspects of the agent. This is defined as the accuracy of the next dialogue utterance when choosing between the gold response and N–1 distractor responses. Since these metrics are somewhat flawed, human evaluations were used in conjunction. Multiple attempts have been made historically to develop automatic metrics to assess dialogue fluidity. One of the earliest, Eckert et al. (1997), used a stochastic system which regulated user-generated dialogues to debug and evaluate chatbots BIBREF2. In the same year, Marilyn et al. (1997) proposed the PARADISE BIBREF3 model. This framework was developed to evaluate dialogue agents in spoken conversations. A few years later the BLEU BIBREF4 metric was proposed. Subsequently, for almost two decades, this metric has been one of the few to be widely adopted by the research community. The method, which counts n-gram matches between the system output and a reference text, proved to be quick and inexpensive and has therefore been widely used. Therefore, we use the BLEU metric as a baseline to compare the quality of our proposed model.", "id": 1323, "question": "Was BERT used?", "title": "Measuring Conversational Fluidity in Automated Dialogue Agents" }, { "answers": [ "" ], "context": "For this study, we use two types of data, namely single-turn and multi-turn. The first type, single-turn, is defined such that each instance is made up of one statement and one response. This pair is usually a fragment of a larger dialogue. When given to humans for evaluation of fluidity, we ask them to give a score on characteristics such as “How related is the response to the statement?” or “Does the response contain repeated text from the user's statement?”. These are all things that should not be affected by the fact that no history or context is provided and therefore, can still be classified reasonably. Contrary to the single-turn datasets, the second type is the multi-turn dataset. This contains multiple instances of statements and responses, building on each other to create a fuller conversation. With these kinds of datasets, one can also evaluate and classify the data on various other attributes. An example of such evaluations would be something like “Does this response continue on the flow of the conversation?” or “Is the chatbot using repetitive text from previous responses?”. 
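Since the context for id 1323 adopts BLEU as its baseline metric, the standard sentence-level call looks like this (NLTK's implementation; the smoothing choice is ours, and the sentences are invented):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["what", "kind", "of", "things", "do", "you", "paint"]
candidate = ["what", "things", "do", "you", "paint"]

# Smoothing avoids zero scores on short sentences (method1 is one choice).
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.3f}")
```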
How we collected each dataset is detailed below.", "id": 1324, "question": "What datasets did they use?", "title": "Measuring Conversational Fluidity in Automated Dialogue Agents" }, { "answers": [ "" ], "context": "This section discusses the methods used to develop our attributes and the technical details of how they are combined to create a final classification layer.", "id": 1325, "question": "Which existing metrics do they compare with?", "title": "Measuring Conversational Fluidity in Automated Dialogue Agents" }, { "answers": [ "" ], "context": "Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be attributed to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the number of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT sheds light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .", "id": 1326, "question": "Which datasets do they evaluate on?", "title": "Attention Is (not) All You Need for Commonsense Reasoning" }, { "answers": [ "Their model does not differ from BERT." ], "context": "In this section we first review the main aspects of the BERT approach, which are important for understanding our proposal, and we introduce the notation used in the rest of the paper. Then, we introduce the Maximum Attention Score (MAS), and explain how it can be utilized for commonsense reasoning.", "id": 1327, "question": "How does their model differ from BERT?", "title": "Attention Is (not) All You Need for Commonsense Reasoning" }, { "answers": [ "" ], "context": "Narrative is a fundamental form of representation in human language and culture. Stories connect individuals and deliver experience, emotions and knowledge. Narrative comprehension has attracted long-standing interest in natural language processing (NLP) BIBREF1 , and is widely applicable to areas such as content creation. Enabling machines to understand narrative is also an important first step towards real intelligence. Previous studies on narrative comprehension include character role identification BIBREF2 , narrative schema construction BIBREF3 , and plot pattern identification BIBREF4 . However, their main focus is on analyzing the stories themselves. In contrast, we concentrate on training machines to predict the endings of stories. Story completion tasks rely not only on the logic of the story itself, but also require implicit commonsense knowledge outside the story. To understand stories, humans can use information from both the story itself and other implicit sources such as commonsense knowledge and normative social behaviors BIBREF5 . 
In this paper, we propose to imitate such behaviors to incorporate structured commonsense knowledge to aid the story ending prediction.", "id": 1328, "question": "Which metrics are they evaluating with?", "title": "Incorporating Structured Commonsense Knowledge in Story Completion" }, { "answers": [ "" ], "context": "Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.", "id": 1329, "question": "What different properties of the posterior distribution are explored in the paper?", "title": "On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation" }, { "answers": [ "" ], "context": "We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\\text{I}({x};{z})$ BIBREF17.", "id": 1330, "question": "Why does proposed term help to avoid posterior collapse?", "title": "On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation" }, { "answers": [ "Answer with content missing: (Formula 2) Formula 2 is an answer: \n\\big \\langle\\! \\log p_\\theta({x}|{z}) \\big \\rangle_{q_\\phi({z}|{x})} - \\beta |D_{KL}\\big(q_\\phi({z}|{x}) || p({z})\\big)-C|" ], "context": "The reconstruction loss can naturally measure distortion ($D := - \\big \\langle \\log p_\\theta ({x}|{z}) \\big \\rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\\phi (z|x)$.", "id": 1331, "question": "How does explicit constraint on the KL divergence term that authors propose looks like?", "title": "On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation" }, { "answers": [ "" ], "context": "Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. 
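The objective quoted in the answer for id 1331 above, the reconstruction term minus beta times |KL - C|, is straightforward to express in code. A minimal PyTorch sketch follows; the argument names (`beta`, `target_rate`) are mine.

```python
import torch

def constrained_elbo(log_likelihood, kl, beta=1.0, target_rate=5.0):
    """The modified objective from the answer above: reconstruction
    minus beta * |KL - C|. Penalizing the distance of the rate (KL)
    from a fixed target C, rather than the KL itself, keeps the
    encoder channel from collapsing to zero bits.
    `log_likelihood` and `kl` are per-example tensors."""
    return log_likelihood - beta * torch.abs(kl - target_rate)

# Training maximizes the objective, i.e. minimizes its negation:
# loss = -constrained_elbo(ll, kl, beta=1.0, target_rate=C).mean()
```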
When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2.", "id": 1332, "question": "Did they experiment with the tool?", "title": "Seshat: A tool for managing and verifying annotation campaigns of audio data" }, { "answers": [ "" ], "context": "Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8.", "id": 1333, "question": "Can it be used for any language?", "title": "Seshat: A tool for managing and verifying annotation campaigns of audio data" }, { "answers": [ "" ], "context": "Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow:", "id": 1334, "question": "Is this software available to the public?", "title": "Seshat: A tool for managing and verifying annotation campaigns of audio data" }, { "answers": [ "" ], "context": "Language identification is a crucial first step in textual data processing and is considered feasible over formal texts BIBREF0 . The task is harder for social media (e.g. Twitter) where text is less formal, noisier and can be written in wide range of languages. We focus on identifying similar languages, where surface-level content alone may not be sufficient. Our approach combines a content model with evidence propagated over the social network of the authors. For example, a user well-connected to users posting in a language is more likely to post in that language. Our system scores 76.63%, 1.4% higher than the top submission to the tweetLID workshop.", "id": 1335, "question": "What shared task does this system achieve SOTA in?", "title": "Discriminating between similar languages in Twitter using label propagation" }, { "answers": [ "" ], "context": "Traditional language identification compares a document with a language fingerprint built from n-gram bag-of-words (character or word level). Tweets carry additional metadata useful for identifying language, such as geolocation BIBREF1 , username BIBREF2 , BIBREF1 and urls mentioned in the tweet BIBREF2 . Other methods expand beyond the tweet itself to use a histogram of previously predicted languages, those of users @-mentioned and lexical content of other tweets in a discussion BIBREF1 . 
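The "language fingerprint built from n-gram bag-of-words" mentioned in the language-identification passage above (id 1336) can be sketched in a few lines. This is a minimal character-trigram version with a simple dot-product overlap score, one common choice rather than the exact method of the cited systems.

```python
from collections import Counter

def char_ngram_profile(text, n=3):
    """Relative-frequency profile of character n-grams in a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values()) or 1
    return {g: c / total for g, c in grams.items()}

def identify_language(doc, fingerprints, n=3):
    """Pick the language whose fingerprint overlaps most with the
    document profile (dot product over shared n-grams)."""
    profile = char_ngram_profile(doc, n)
    def overlap(fp):
        return sum(w * fp.get(g, 0.0) for g, w in profile.items())
    return max(fingerprints, key=lambda lang: overlap(fingerprints[lang]))

# fingerprints would be built by running char_ngram_profile over a
# large text sample per language, e.g. {"es": {...}, "pt": {...}}
```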
Discriminating between similar languages was the focus of the VarDial workshop BIBREF3 , and most submissions used content analysis. These methods make limited use of the social context in which the authors are tweeting – our research question is “Can we identify the language of a tweet using the social graph of the tweeter?”.", "id": 1336, "question": "How are labels propagated using this approach?", "title": "Discriminating between similar languages in Twitter using label propagation" }, { "answers": [ "" ], "context": "Our method predicts the language INLINEFORM0 for a tweet INLINEFORM1 by combining scores from a content model and a graph model that takes social context into account, as per Equation EQREF2 :", "id": 1337, "question": "What information is contained in the social graph of tweet authors?", "title": "Discriminating between similar languages in Twitter using label propagation" }, { "answers": [ "" ], "context": "Determining the sentiment polarity of tweets has become a landmark homework exercise in natural language processing (NLP) and data science classes. This is perhaps because the task is easy to understand and it is also easy to get good results with very simple methods (e.g. positive - negative words counting). The practical applications of this task are wide, from monitoring popular events (e.g. Presidential debates, Oscars, etc.) to extracting trading signals by monitoring tweets about public companies. These applications often benefit greatly from the best possible accuracy, which is why the SemEval-2017 Twitter competition promotes research in this area. The competition is divided into five subtasks which involve standard classification, ordinal classification and distributional estimation. For a more detailed description see BIBREF0 .", "id": 1338, "question": "What were the five English subtasks?", "title": "BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs" }, { "answers": [ "" ], "context": "Let us now describe the architecture of the CNN we worked with. Its architecture is almost identical to the CNN of BIBREF8 . A smaller version of our model is illustrated on Fig. FIGREF2 . The input of the network are the tweets, which are tokenized into words. Each word is mapped to a word vector representation, i.e. a word embedding, such that an entire tweet can be mapped to a matrix of size INLINEFORM0 , where INLINEFORM1 is the number of words in the tweet and INLINEFORM2 is the dimension of the embedding space (we chose INLINEFORM3 ). We follow BIBREF8 zero-padding strategy such that all tweets have the same matrix dimension INLINEFORM4 , where we chose INLINEFORM5 . We then apply several convolution operations of various sizes to this matrix. A single convolution involves a filtering matrix INLINEFORM6 where INLINEFORM7 is the size of the convolution, meaning the number of words it spans. The convolution operation is defined as DISPLAYFORM0 ", "id": 1339, "question": "How many CNNs and LSTMs were ensembled?", "title": "BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs" }, { "answers": [ "There is no baseline." 
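Equation EQREF2, referenced in the passage above (id 1337), is elided from this excerpt. One plausible instantiation of "combining scores from a content model and a graph model" is a linear interpolation of per-language scores; the interpolation weight `alpha` and the exact combination rule are assumptions, not the paper's stated formula.

```python
def predict_language(content_scores, graph_scores, alpha=0.5):
    """Hypothetical reading of the elided Equation EQREF2: interpolate
    the content model's per-language scores with scores propagated
    over the author's social graph, then take the argmax."""
    return max(content_scores,
               key=lambda lang: alpha * content_scores[lang]
                                + (1 - alpha) * graph_scores.get(lang, 0.0))

print(predict_language({"es": 0.7, "pt": 0.3}, {"es": 0.2, "pt": 0.8}, alpha=0.4))
```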
], "context": "By only reading a single text review of a movie it can be difficult to say what the genre of that movie is, but by using text mining techniques on thousands of movie reviews is it possible to predict the genre?", "id": 1340, "question": "what was the baseline?", "title": "Classifying movie genres by analyzing text reviews" }, { "answers": [ "" ], "context": "In this section all relevant theory and methodology is described. Table TABREF1 lists basic terminology and a short description of their meaning.", "id": 1341, "question": "how many movie genres do they explore?", "title": "Classifying movie genres by analyzing text reviews" }, { "answers": [ "" ], "context": "Data preprocessing is important when working with text data because it can reduce the number of features and it formats the data into the desired form BIBREF2 .", "id": 1342, "question": "what evaluation metrics are discussed?", "title": "Classifying movie genres by analyzing text reviews" }, { "answers": [ "553,451 documents" ], "context": "Stock movement prediction is a central task in computational and quantitative finance. With recent advances in deep learning and natural language processing technology, event-driven stock prediction has received increasing research attention BIBREF0, BIBREF1. The goal is to predict the movement of stock prices according to financial news. Existing work has investigated news representation using bag-of-words BIBREF2, named entities BIBREF3, event structures BIBREF4 or deep learning BIBREF1, BIBREF5.", "id": 1343, "question": "How big is dataset used?", "title": "News-Driven Stock Prediction With Attention-Based Noisy Recurrent State Transition" }, { "answers": [ "" ], "context": "There has been a line of work predicting stock markets using text information from daily news. We compare this paper with previous work from the following two perspectives.", "id": 1344, "question": "What is dataset used for news-driven stock movement prediction?", "title": "News-Driven Stock Prediction With Attention-Based Noisy Recurrent State Transition" }, { "answers": [ "The model outperforms at every point in the\nimplicit-tuples PR curve reaching almost 0.8 in recall" ], "context": "Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text e.g. “Fed chair Powell indicates rate hike” outputs (Powell, indicates, rate hike). The modifier open is used to contrast IE research in which the relation belongs to a fixed set. OpenIE has been shown to be useful for several downstream applications such as knowledge base construction BIBREF0 , textual entailment BIBREF1 , and other natural language understanding tasks BIBREF2 . In our previous example an extraction was missing: (Powell, works for, Fed). Implicit extractions are our term for this type of tuple where the relation (“works for” in this example) is not contained in the input sentence. In both colloquial and formal language, many relations are evident without being explicitly stated. However, despite their pervasiveness, there has not been prior work targeted at implicit predicates in the general case. Implicit information extractors for some specific implicit relations such as noun-mediated relations, numerical relations, and others BIBREF3 , BIBREF4 , BIBREF5 have been researched. 
While specific extractors are important, there are a multiplicity of implicit relation types and it would be intractable to categorize and design extractors for each one.", "id": 1345, "question": "How much better does this baseline neural model do?", "title": "Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets" }, { "answers": [ "" ], "context": "Abstract Meaning Representation (AMR) BIBREF0 is a semantic formalism encoding the meaning of a sentence as a rooted, directed graph. AMR uses a graph to represent meaning, where nodes (such as “boy”, “want-01”) represent concepts, and edges (such as “ARG0”, “ARG1”) represent relations between concepts. Encoding many semantic phenomena into a graph structure, AMR is useful for NLP tasks such as machine translation BIBREF1 , BIBREF2 , question answering BIBREF3 , summarization BIBREF4 and event detection BIBREF5 .", "id": 1346, "question": "What is the SemEval-2016 task 8?", "title": "AMR-to-text Generation with Synchronous Node Replacement Grammar" }, { "answers": [ "" ], "context": "Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .", "id": 1347, "question": "How much faster is training time for MGNC-CNN over the baselines?", "title": "MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification" }, { "answers": [ "MC-CNN\nMVCNN\nCNN" ], "context": "", "id": 1348, "question": "What are the baseline models?", "title": "MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification" }, { "answers": [ "In terms of Subj the Average MGNC-CNN is better than the average score of baselines by 0.5. Similarly, Scores of SST-1, SST-2, and TREC where MGNC-CNN has similar improvements. \nIn case of Irony the difference is about 2.0. \n" ], "context": "We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.", "id": 1349, "question": "By how much of MGNC-CNN out perform the baselines?", "title": "MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification" }, { "answers": [ "" ], "context": "Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. 
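The MGNC-CNN idea summarized above, a one-layer CNN per embedding set with the pooled features combined before classification, can be sketched compactly. This PyTorch module is illustrative only: filter counts and kernel sizes are hypothetical, and the paper's regularization details are omitted.

```python
import torch
import torch.nn as nn

class MultiGroupCNN(nn.Module):
    """Sketch in the spirit of MGNC-CNN: an independent convolution and
    max-pool over each embedding group (e.g. word2vec, GloVe, syntactic),
    pooled features concatenated and fed to a classifier."""
    def __init__(self, embed_dims, n_filters=100, kernel=3, n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d, n_filters, kernel) for d in embed_dims)
        self.fc = nn.Linear(n_filters * len(embed_dims), n_classes)

    def forward(self, embedded_groups):
        # each element: (batch, seq_len, dim) for one embedding set
        pooled = [conv(x.transpose(1, 2)).relu().max(dim=2).values
                  for conv, x in zip(self.convs, embedded_groups)]
        return self.fc(torch.cat(pooled, dim=1))
```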
For both, we remove phrases of length less than 4 from the training set.", "id": 1350, "question": "What dataset/corpus is this evaluated over?", "title": "MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification" }, { "answers": [ "" ], "context": "We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed\" nodes with prepositions and notated inverse relations separately, e.g., “dog barks\" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.", "id": 1351, "question": "What are the comparable alternative architectures?", "title": "MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification" }, { "answers": [ "" ], "context": "Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”", "id": 1352, "question": "Which state-of-the-art model is surpassed by 9.68% attraction score?", "title": "Hooks in the Headline: Learning to Generate Headlines with Controlled Styles" }, { "answers": [ "Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: 1,98% (9.29 vs 9.11)" ], "context": "Our work is related to summarization and text style transfer.", "id": 1353, "question": "What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?", "title": "Hooks in the Headline: Learning to Generate Headlines with Controlled Styles" }, { "answers": [ "" ], "context": "Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. 
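The (word, relation+context) pair extraction for the syntactic embeddings above (id 1351), where "dog barks" emits (barks, nsubj_dog) and an inverse pair for the dependent, is easy to sketch. The inverse-relation marker is rendered here as "-inv" because the original symbol is garbled in this excerpt, and the collapsing of prepositions is omitted.

```python
def dependency_contexts(arcs):
    """Emit (word, relation_context) training pairs from dependency
    arcs: each arc (head, relation, dependent) yields one pair for the
    head and one inverse-relation pair for the dependent."""
    pairs = []
    for head, rel, dep in arcs:
        pairs.append((head, f"{rel}_{dep}"))      # (barks, nsubj_dog)
        pairs.append((dep, f"{rel}-inv_{head}"))  # (dog, nsubj-inv_barks)
    return pairs

print(dependency_contexts([("barks", "nsubj", "dog")]))
```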
Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.", "id": 1354, "question": "How is attraction score measured?", "title": "Hooks in the Headline: Learning to Generate Headlines with Controlled Styles" }, { "answers": [ "" ], "context": "Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem.", "id": 1355, "question": "How is presence of three target styles detected?", "title": "Hooks in the Headline: Learning to Generate Headlines with Controlled Styles" }, { "answers": [ "" ], "context": "The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\\lbrace (\\mathbf {a^{(i)}},\\mathbf {h^{(i)}})\\rbrace _{i=1}^N$ consists of pairs of a news article $\\mathbf {a}$ and its plain headline $\\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\\lbrace \\mathbf {a^{(i)}}\\rbrace _{i=1}^N$, and $H=\\lbrace \\mathbf {h^{(i)}}\\rbrace _{i=1}^N$. The target corpus $T=\\lbrace \\mathbf {t^{(i)}}\\rbrace _{i=1}^{M}$ comprises of sentences $\\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.", "id": 1356, "question": "How is fluency automatically evaluated?", "title": "Hooks in the Headline: Learning to Generate Headlines with Controlled Styles" }, { "answers": [ "" ], "context": "Emergence of web services such as blog, microblog and social networking websites allows people to contribute information publicly. This user-generated information is generally more personal, informal and often contains personal opinions. In aggregate, it can be useful for reputation analysis of entities and products, natural disasters detection, obtaining first-hand news, or even demographic analysis. Twitter, an easily accessible source of information, allows users to voice their opinions and thoughts in short text known as tweets.", "id": 1357, "question": "What are the measures of \"performance\" used in this paper?", "title": "Twitter-Network Topic Model: A Full Bayesian Treatment for Social Network and Text Modeling" }, { "answers": [ "The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French" ], "context": "Anecdotally speaking, fluent bilingual speakers rarely face trouble translating a task learned in one language to another. For example, a bilingual speaker who is taught a math problem in English will trivially generalize to other known languages. Furthermore there is a large collection of evidence in linguistics arguing that although separate lexicons exist in multilingual speakers the core representations of concepts and theories are shared in memory BIBREF2 , BIBREF3 , BIBREF4 . 
The fundamental question we're interested in answering is on the learnability of these shared representations within a statistical framework.", "id": 1358, "question": "What are the languages they consider in this paper?", "title": "Towards Language Agnostic Universal Representations" }, { "answers": [ "They experimented with sentiment analysis and natural language inference task" ], "context": "Our work attempts to unite universal (task agnostic) representations with multilingual (language agnostic) representations BIBREF9 , BIBREF10 . The recent trend in universal representations has been moving away from context-less unsupervised word embeddings to context-rich representations. Deep contextualized word representations (ELMo) trains an unsupervised language model on a large corpus of data and applies it to a large set of auxiliary tasks BIBREF9 . These unsupervised representations boosted the performance of models on a wide array of tasks. Along the same lines BIBREF10 showed the power of using latent representations of translation models as features across other non-translation tasks. In general, initializing models with pre-trained language models shows promise against the standard initialization with word embeddings. Even further, BIBREF11 show that an unsupervised language model trained on a large corpus will contain a neuron that strongly correlates with sentiment without ever training on a sentiment task implying that unsupervised language models maybe picking up informative and structured signals.", "id": 1359, "question": "Did they experiment with tasks other than word problems in math?", "title": "Towards Language Agnostic Universal Representations" }, { "answers": [ "" ], "context": "Lately, there has been enormous increase in User Generated Contents (UGC) on the online platforms such as newsgroups, blogs, online forums and social networking websites. According to the January 2018 report, the number of active users in Facebook, YouTube, WhatsApp, Facebook Messenger and WeChat was more than 2.1, 1.5, 1.3, 1.3 and 0.98 billions respectively BIBREF1 . The UGCs, most of the times, are helpful but sometimes, they are in bad taste usually posted by trolls, spammers and bullies. According to a study by McAfee, 87% of the teens have observed cyberbullying online BIBREF2 . The Futures Company found that 54% of the teens witnessed cyber bullying on social media platforms BIBREF3 . Another study found 27% of all American internet users self-censor their online postings out of fear of online harassment BIBREF4 . Filtering toxic comments is a challenge for the content providers as their appearances result in the loss of subscriptions. In this paper, we will be using toxic and abusive terms interchangeably to represent comments which are inappropriate, disrespectful, threat or discriminative.", "id": 1360, "question": "Do they report results only on English data?", "title": "Is preprocessing of text really worth your time for online comment classification?" }, { "answers": [ "" ], "context": "A large number of studies have been done on comment classification in the news, finance and similar other domains. One such study to classify comments from news domain was done with the help of mixture of features such as the length of comments, uppercase and punctuation frequencies, lexical features such as spelling, profanity and readability by applying applied linear and tree based classifier BIBREF7 . 
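The surface features mentioned at the end of the passage above (id 1361), comment length plus uppercase and punctuation frequencies, are trivial to compute. A minimal sketch, with feature names of my choosing:

```python
import string

def surface_features(comment):
    """Surface features of the kind used for news-comment
    classification: length, uppercase ratio, punctuation ratio."""
    n = max(len(comment), 1)
    return {
        "length": len(comment),
        "upper_ratio": sum(c.isupper() for c in comment) / n,
        "punct_ratio": sum(c in string.punctuation for c in comment) / n,
    }

print(surface_features("This is SPAM!!!"))
```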
FastText, developed by the Facebook AI research (FAIR) team, is a text classification tool suitable to model text involving out-of-vocabulary (OOV) words BIBREF8 BIBREF9 . Zhang et al shown that character level CNN works well for text classification without the need for words BIBREF10 .", "id": 1361, "question": "Do the authors offer any hypothesis as to why the transformations sometimes disimproved performance?", "title": "Is preprocessing of text really worth your time for online comment classification?" }, { "answers": [ "" ], "context": "Toxic comment classification is relatively new field and in recent years, different studies have been carried out to automatically classify toxic comments.Yin et.al. proposed a supervised classification method with n-grams and manually developed regular expressions patterns to detect abusive language BIBREF11 . Sood et. al. used predefined blacklist words and edit distance metric to detect profanity which allowed them to catch words such as sh!+ or @ss as profane BIBREF12 . Warner and Hirschberg detected hate speech by annotating corpus of websites and user comments geared towards detecting anti-semitic hate BIBREF13 . Nobata et. al. used manually labeled online user comments from Yahoo! Finance and news website for detecting hate speech BIBREF5 . Chen et. al. performed feature engineering for classification of comments into abusive, non-abusive and undecided BIBREF14 . Georgakopoulos and Plagianakos compared performance of five different classifiers namely; Word embeddings and CNN, BoW approach SVM, NB, k-Nearest Neighbor (kNN) and Linear Discriminated Analysis (LDA) and found that CNN outperform all other methods in classifying toxic comments BIBREF15 .", "id": 1362, "question": "What preprocessing techniques are used in the experiments?", "title": "Is preprocessing of text really worth your time for online comment classification?" }, { "answers": [ "" ], "context": "We found few dedicated papers that address the effect of incorporating different text transformations on the model accuracy for sentiment classification. Uysal and Gunal shown the impact of transformation on text classification by taking into account four transformations and their all possible combination on news and email domain to observe the classification accuracy. Their experimental analyses shown that choosing appropriate combination may result in significant improvement on classification accuracy BIBREF16 . Nobata et. al. used normalization of numbers, replacing very long unknown words and repeated punctuations with the same token BIBREF5 . Haddi et. al. explained the role of transformation in sentiment analyses and demonstrated with the help of SVM on movie review database that the accuracies improve significantly with the appropriate transformation and feature selection. They used transformation methods such as white space removal, expanding abbreviation, stemming, stop words removal and negation handling BIBREF17 .", "id": 1363, "question": "What state of the art models are used in the experiments?", "title": "Is preprocessing of text really worth your time for online comment classification?" }, { "answers": [ "Accuracy on each dataset and the average accuracy on all datasets." ], "context": "The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . 
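The blacklist-plus-edit-distance profanity matching described above (id 1362), which catches obfuscated spellings such as "sh!+" or "@ss", can be sketched as follows. The distance threshold is an assumption; the cited work does not specify one in this excerpt.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_profane(token, blacklist, max_dist=2):
    """Match a token against a blacklist with edit-distance tolerance,
    so obfuscated spellings still hit a listed word."""
    return any(edit_distance(token.lower(), w) <= max_dist for w in blacklist)

print(is_profane("sh!+", {"shit"}))  # True: distance 2
```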
On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, such as skip-gram BIBREF3 , GloVe BIBREF4 , etc. There are also pre-trained word embeddings, which can easily used in downstream tasks. However, on sentence level, there is still no generic sentence representation which is suitable for various NLP tasks.", "id": 1364, "question": "What evaluation metrics are used?", "title": "Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks" }, { "answers": [ "" ], "context": "The primary role of sentence encoding models is to represent the variable-length sentence or paragraphs as fixed-length dense vector (distributed representation). Currently, the effective neural sentence encoding models include neural Bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF2 , BIBREF6 , convolutional neural networks (CNN) BIBREF1 , BIBREF7 , BIBREF8 , and syntactic-based compositional model BIBREF9 , BIBREF10 , BIBREF11 .", "id": 1365, "question": "What dataset did they use?", "title": "Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks" }, { "answers": [ "" ], "context": "Multi-task Learning BIBREF5 utilizes the correlation between related tasks to improve classification by learning tasks in parallel, which has been widely used in various natural language processing tasks, such as text classification BIBREF12 , semantic role labeling BIBREF13 , machine translation BIBREF14 , and so on.", "id": 1366, "question": "What tasks did they experiment with?", "title": "Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks" }, { "answers": [ "" ], "context": "Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus.", "id": 1367, "question": "What multilingual parallel data is used for training proposed model?", "title": "Zero-Shot Paraphrase Generation with Multilingual Language Models" }, { "answers": [ "" ], "context": "Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. 
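Of the sentence encoders listed above (id 1365), the neural bag-of-words (NBOW) model is the simplest: the sentence vector is just the average of its word vectors. A minimal sketch, with unknown words skipped by assumption:

```python
import numpy as np

def nbow_encode(tokens, embeddings, dim=300):
    """NBOW sentence encoder: average the word vectors of the tokens;
    return a zero vector if no token is in the embedding table."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# nbow_encode("the movie was great".split(), glove_vectors)
```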
Given a sequence of tokens, TLM is trained with maximizing the likelihood:", "id": 1368, "question": "How much better are results of proposed model compared to pivoting method?", "title": "Zero-Shot Paraphrase Generation with Multilingual Language Models" }, { "answers": [ "youtube video transcripts on news covering different topics like technology, human rights, terrorism and politics" ], "context": "The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging, sentiment analysis and automatic text summarization; originally developed to work with formal written texts, can be applied over the transcripts made by ASR systems BIBREF1 , BIBREF2 , BIBREF3 . However, before applying any of these NLP tasks a segmentation process called Sentence Boundary Detection (SBD) should be performed over ASR transcripts to reach a minimal syntactic information in the text.", "id": 1369, "question": "What kind of Youtube video transcripts did they use?", "title": "WiSeBE: Window-based Sentence Boundary Evaluation" }, { "answers": [ "" ], "context": "Sentence Boundary Detection (SBD) has been a major research topic science ASR moved to more general domains as conversational speech BIBREF4 , BIBREF5 , BIBREF6 . Performance of ASR systems has improved over the years with the inclusion and combination of new Deep Neural Networks methods BIBREF7 , BIBREF8 , BIBREF0 . As a general rule, the output of ASR systems lacks of any syntactic information such as capitalization and sentence boundaries, showing the interst of ASR systems to obtain the correct sequence of words with almost no concern of the overall structure of the document BIBREF9 .", "id": 1370, "question": "Which SBD systems did they compare?", "title": "WiSeBE: Window-based Sentence Boundary Evaluation" }, { "answers": [ "It takes into account the agreement between different systems" ], "context": "SBD research has been focused on two different aspects; features and methods. Regarding the features, some work focused on acoustic elements like pauses duration, fundamental frequencies, energy, rate of speech, volume change and speaker turn BIBREF17 , BIBREF18 , BIBREF19 .", "id": 1371, "question": "What makes it a more reliable metric?", "title": "WiSeBE: Window-based Sentence Boundary Evaluation" }, { "answers": [ "Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original\nexamples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)" ], "context": "Paraphrase identification is to determine whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 or text summarization BIBREF4.", "id": 1372, "question": "How much in experiments is performance improved for models trained with generated adversarial examples?", "title": "Adversarial Examples with Difficult Common Words for Paraphrase Identification" }, { "answers": [ "" ], "context": "Paraphrase identification can be viewed as a problem of sentence matching. 
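The likelihood that follows the colon in the TLM passage above (id 1368) is missing from this excerpt; presumably it is the standard autoregressive language-modeling objective, which would read:

```latex
\mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta\big(x_t \mid x_1, \dots, x_{t-1}\big)
```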
Recently, many deep models for sentence matching have been proposed and achieved great advancements on benchmark datasets. Among those, some approaches encode each sentence independently and apply a classifier on the embeddings of two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models make strong interactions between two sentences by jointly encoding and matching sentences BIBREF5, BIBREF13, BIBREF14 or hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples for revealing their robustness issues and improving their robustness.", "id": 1373, "question": "How much dramatically results drop for models on generated adversarial examples?", "title": "Adversarial Examples with Difficult Common Words for Paraphrase Identification" }, { "answers": [ "" ], "context": "Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules. However, for some tasks including paraphrase identification, adversarial examples can be semantically different from original sentences, to study other robustness issues tailored to the corresponding tasks.", "id": 1374, "question": "What is discriminator in this generative adversarial setup?", "title": "Adversarial Examples with Difficult Common Words for Paraphrase Identification" }, { "answers": [ "" ], "context": "For a certain type of adversarial examples, adversarial attacks or adversarial example generation aim to find examples that are within the defined type and make existing models fail. Some work has no access to the target model until an adversarial dataset is generated BIBREF28, BIBREF26, BIBREF23, BIBREF24, BIBREF29, BIBREF27. However, in many cases including ours, finding successful adversarial examples, i.e. examples on which the target model fails, is challenging, and employing an attack algorithm with access to the target model during generation is often necessary to ensure a high success rate.", "id": 1375, "question": "What are benhmark datasets for paraphrase identification?", "title": "Adversarial Examples with Difficult Common Words for Paraphrase Identification" }, { "answers": [ "" ], "context": "1.1em", "id": 1376, "question": "What representations are presented by this paper?", "title": "Gender Representation in Open Source Speech Resources" }, { "answers": [ "" ], "context": "1.1.1em", "id": 1377, "question": "What corpus characteristics correlate with more equitable gender balance?", "title": "Gender Representation in Open Source Speech Resources" }, { "answers": [ "" ], "context": "1.1.1.1em", "id": 1378, "question": "What natural languages are represented in the speech resources studied?", "title": "Gender Representation in Open Source Speech Resources" }, { "answers": [ "Answer with content missing: (Formula) Formula is the answer." 
], "context": "The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3).", "id": 1379, "question": "How is the delta-softmax calculated?", "title": "A Neural Approach to Discourse Relation Signal Detection" }, { "answers": [ "" ], "context": "A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank.", "id": 1380, "question": "Are some models evaluated using this metric, what are the findings?", "title": "A Neural Approach to Discourse Relation Signal Detection" }, { "answers": [ "" ], "context": "Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens.", "id": 1381, "question": "Where does proposed metric differ from juman judgement?", "title": "A Neural Approach to Discourse Relation Signal Detection" }, { "answers": [ "" ], "context": "In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. 
Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data.", "id": 1382, "question": "Where does proposed metric overlap with juman judgement?", "title": "A Neural Approach to Discourse Relation Signal Detection" }, { "answers": [ "" ], "context": "The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time?", "id": 1383, "question": "Which baseline performs best?", "title": "Citation Text Generation" }, { "answers": [ "" ], "context": "Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document.", "id": 1384, "question": "Which baselines are explored?", "title": "Citation Text Generation" }, { "answers": [ "" ], "context": "We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data.", "id": 1385, "question": "What is the size of the corpus?", "title": "Citation Text Generation" }, { "answers": [ "" ], "context": "Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation.", "id": 1386, "question": "How was the evaluation corpus collected?", "title": "Citation Text Generation" }, { "answers": [ "" ], "context": "Methods for machine translations have been studied for years, and at the same time algorithms to generate word embeddings are becoming more and more accurate. Still, there is a lot of research aiming at unifying word embeddings across multiple languages. In this experience we try a technique for machine translation that relates word embeddings between two different languages. 
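The retrieval baseline for citation text generation described above (id 1385), returning a human-authored citing sentence from the training data via nearest neighbors, can be sketched with scikit-learn. TF-IDF features and exact (rather than approximate) nearest neighbors are stand-ins here, since the excerpt does not specify the representation or ANN library.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def build_retriever(train_contexts, train_citations):
    """Index training citing sentences by a TF-IDF encoding of their
    document context; answer a new context with the citation sentence
    of its nearest training neighbor."""
    vec = TfidfVectorizer().fit(train_contexts)
    index = NearestNeighbors(n_neighbors=1).fit(vec.transform(train_contexts))
    def retrieve(context):
        _, idx = index.kneighbors(vec.transform([context]))
        return train_citations[idx[0][0]]
    return retrieve
```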
Based on the literature we found that it is possible to infer missing dictionary entries using distributed representations of words and phrases. One way of doing it is to create a linear mapping between the two vector spaces of two different languages. In order to achieve this, we first built two dictionaries of the two different languages. Next, we learned a function that projects the first vector space to the second one. In this way, we are able to translate every word belonging to the first language into the second one. Once we obtain the translated word embedding, we output the most similar word vector as the translation. The word embeddings were learnt using the Skip Gram method proposed by (Mikolov et al., 2013a). An example of how the method would work is reported in figure 1 and figure 2. After creating the word embeddings from the two dictionaries, we plotted the numbers in the two graphs using PCA. Figure 3 reports the results after creating a linear mapping between the embeddings from the two languages. You can see how similar words are closer together.", "id": 1387, "question": "Are any machine translation sysems tried with these embeddings, what is the performance?", "title": "Machine Translation with Cross-lingual Word Embeddings" }, { "answers": [ "" ], "context": "In recent years, various models for learning cross-lingual representations have been proposed. Two main broad categories with some related papers are identified here:", "id": 1388, "question": "Are any experiments performed to try this approach to word embeddings?", "title": "Machine Translation with Cross-lingual Word Embeddings" }, { "answers": [ "two surveys by two groups - school students and meteorologists to draw on a map a polygon representing a given geographical descriptor" ], "context": "Language grounding, i.e., understanding how words and expressions are anchored in data, is one of the initial tasks that are essential for the conception of a data-to-text (D2T) system BIBREF0 , BIBREF1 . This can be achieved through different means, such as using heuristics or machine learning algorithms on an available parallel corpora of text and data BIBREF2 to obtain a mapping between the expressions of interest and the underlying data BIBREF3 , getting experts to provide these mappings, or running surveys on writers or readers that provide enough data for the application of mapping algorithms BIBREF4 .", "id": 1389, "question": "Which two datasets does the resource come from?", "title": "Meteorologists and Students: A resource for language grounding of geographical descriptors" }, { "answers": [ "" ], "context": "Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0.", "id": 1390, "question": "What model was used by the top team?", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats" }, { "answers": [ "" ], "context": "The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). 
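The linear mapping between embedding spaces described at the top of this passage (id 1387) is typically learned by least squares and applied via nearest-neighbor lookup. A minimal NumPy sketch, with matrix shapes as stated in the comments:

```python
import numpy as np

def learn_mapping(X, Z):
    """Least-squares linear map W with W @ x ~= z for each bilingual
    dictionary pair (x, z). X, Z: (n_pairs, dim) matrices of
    source-language and target-language embeddings."""
    B, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return B.T

def translate(word_vec, W, target_vocab, target_matrix):
    """Project a source word vector into the target space and return
    the nearest target word by cosine similarity."""
    z = W @ word_vec
    sims = target_matrix @ z / (
        np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(z) + 1e-9)
    return target_vocab[int(np.argmax(sims))]
```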
The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table .", "id": 1391, "question": "What was the baseline?", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats" }, { "answers": [ "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation" ], "context": "NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation.", "id": 1392, "question": "What is the size of the second dataset?", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats" }, { "answers": [ "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation" ], "context": "A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets.", "id": 1393, "question": "How large is the first dataset?", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats" }, { "answers": [ "IDEA" ], "context": "A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports.", "id": 1394, "question": "Who was the top-scoring team?", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats" }, { "answers": [ "" ], "context": "Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. 
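The round-trip ("duplex") translation augmentation described above (id 1392) is easy to express. In the sketch below, `translate` is a placeholder for whatever translation client is available (the challenge teams used the Google Translate API); it is not a real library call.

```python
def translate(text, src, dest):
    """Placeholder for a translation API call; wire in a real client."""
    raise NotImplementedError

def augment(utterance, pivots=("de", "fr", "it")):
    """Translate the utterance into each pivot language and back into
    English, keeping distinct variants; these share the emotion label
    of the original utterance."""
    variants = []
    for lang in pivots:
        back = translate(translate(utterance, "en", lang), lang, "en")
        if back != utterance and back not in variants:
            variants.append(back)
    return variants
```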
Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations.", "id": 1395, "question": "What supervised learning tasks are attempted with these representations?", "title": "Mixed Membership Word Embeddings for Computational Social Science" }, { "answers": [ "" ], "context": "In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings.", "id": 1396, "question": "What is MRR?", "title": "Mixed Membership Word Embeddings for Computational Social Science" }, { "answers": [ "" ], "context": "To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram.", "id": 1397, "question": "Which techniques for word embeddings and topic models are used?", "title": "Mixed Membership Word Embeddings for Computational Social Science" }, { "answers": [ "Training embeddings from small-corpora can increase the performance of some tasks" ], "context": "The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research.", "id": 1398, "question": "Why is big data not appropriate for this task?", "title": "Mixed Membership Word Embeddings for Computational Social Science" }, { "answers": [ "Visualization of State of the union addresses" ], "context": "I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. 
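The reinterpretation above (id 1397) of a skip-gram word's context distribution as a "topic" amounts to a softmax over the vocabulary of log-bilinear scores. A small sketch, assuming separate input and output vector tables as in word2vec:

```python
import numpy as np

def context_distribution(word_idx, in_vectors, out_vectors):
    """The skip-gram "topic" for a word: softmax over all output
    (context) vectors of their dot products with the word's input
    vector, i.e. p(c | w) under the log-bilinear parameterization."""
    scores = out_vectors @ in_vectors[word_idx]
    scores -= scores.max()                # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def top_context_words(word_idx, in_v, out_v, vocab, k=10):
    p = context_distribution(word_idx, in_v, out_v)
    return [vocab[i] for i in np.argsort(-p)[:k]]
```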
I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential State of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits.", "id": 1399, "question": "What is an example of a computational social science NLP task?", "title": "Mixed Membership Word Embeddings for Computational Social Science" }, { "answers": [ "" ], "context": "To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because the questions generated by bots are not natural (human-like) and therefore do not translate to improved human dialog. Improving the quality of questions is therefore imperative if dialog agents are to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification.", "id": 1400, "question": "Do they report results only on English datasets?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "Generating a natural and engaging question is an interesting and challenging task for a smart robot (like a chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating questions automatically is well studied in the NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community.", "id": 1401, "question": "What were the previous state of the art benchmarks?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars.", "id": 1402, "question": "How/where are the natural questions generated?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . 
We maximize the conditional probability of the generated question given the image and context as follows: DISPLAYFORM0 ", "id": 1403, "question": "What is the input to the differential network?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module.", "id": 1404, "question": "How do the authors define a differential network?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "The role of the decoder is to predict the probability of a question, given INLINEFORM0 . An RNN provides a convenient way to condition on the previous state value using a fixed-length hidden vector. The conditional probability of a question token at a particular time step INLINEFORM1 is modeled using an LSTM as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8 as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we are feeding the output of the mixture module to the LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we are using INLINEFORM12 and INLINEFORM13 as the special tokens START and STOP, respectively. The softmax probability for the predicted question token at different time steps is given by the following equations where LSTM refers to the standard LSTM cell equations: INLINEFORM14 ", "id": 1405, "question": "How do the authors define exemplars?", "title": "Multimodal Differential Network for Visual Question Generation" }, { "answers": [ "" ], "context": "The social web has become a common means for seeking romantic companionship, made evident by the wide assortment of online dating sites that are available on the Internet. As such, the notion of relationship recommendation systems is not only interesting but also highly applicable. This paper investigates the possibility and effectiveness of a deep learning based relationship recommendation system. An overarching research question is whether modern artificial intelligence (AI) techniques, given social profiles, can successfully approximate successful relationships and measure the relationship compatibility of two users.", "id": 1406, "question": "Is this a task other people have worked on?", "title": "CoupleNet: Paying Attention to Couples with Coupled Attention for Relationship Recommendation" }, { "answers": [ "" ], "context": "This section provides an overview of the main contributions of this work.", "id": 1407, "question": "Where did they get the data for this project?", "title": "CoupleNet: Paying Attention to Couples with Coupled Attention for Relationship Recommendation" }, { "answers": [ "Northeast U.S., South U.S., West U.S. and Midwest U.S." ], "context": "Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. 
According to the National Sexual Violence Resource Center (NSVRC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on students' well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body.", "id": 1408, "question": "Which major geographical regions are studied?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "0.9098 correlation" ], "context": "Previous work on sexual misconduct in academia and the workplace dates back several decades; researchers studied the existence of this social issue, as well as psychometric and demographic insights regarding it, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5.", "id": 1409, "question": "How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "Using Latent Dirichlet Allocation on TF-IDF transformed from the corpus" ], "context": "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter, to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017, covering a period of a month when the trend was on the rise and attracting mass concern. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate others' tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users.", "id": 1410, "question": "How are the topics embedded in the #MeToo tweets extracted?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "" ], "context": "We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer, which uses the SentiWordnet corpus BIBREF13, is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy").
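For illustration, a minimal sketch of this kind of token normalization, assuming a hypothetical correction dictionary (the paper's actual dictionary is not shown); characters repeated three or more times are squeezed before the lookup.

```python
import re

# Illustrative lookup table of the kind described; the authors' real
# dictionary of abbreviations and censored words is not public here.
SPELL_DICT = {"pls": "please", "s**t": "shit", "reallyy": "really"}

def normalize_token(token):
    """Collapse characters repeated 3+ times down to two ('reallyyy' ->
    'reallyy'), then map the result through the correction dictionary."""
    squeezed = re.sub(r"(.)\1{2,}", r"\1\1", token.lower())
    return SPELL_DICT.get(squeezed, squeezed)

print(normalize_token("reallyyy"))  # -> really
print(normalize_token("pls"))       # -> please
```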
The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigram and bigram probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser.", "id": 1411, "question": "How many tweets are explored in this paper?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "Northeast U.S., West U.S. and South U.S." ], "context": "The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use the rape-related case statistics as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939.", "id": 1412, "question": "Which geographical regions correlate to the trend?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "" ], "context": "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consist of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related case count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then used to fit a linear regression predicting the normalized #metoo user count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college.", "id": 1413, "question": "How many followers did they analyze?", "title": "#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media" }, { "answers": [ "evidence extraction and answer synthesis" ], "context": "Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, has attracted great attention from both the research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides large-scale, manually created datasets for training and testing machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessarily in the passages in the MS-MARCO dataset. 
Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages.", "id": 1414, "question": "What two components are included in their proposed framework?", "title": "S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Benchmark datasets play an important role in recent progress in reading comprehension and question answering research. BIBREF4 release MCTest, whose goal is to select the best answer from four options given the question and the passage. CNN/Daily-Mail BIBREF5 and CBT BIBREF6 are cloze-style datasets in which the goal is to predict the missing word (often a named entity) in a passage. Different from the above datasets, the SQuAD dataset BIBREF0 , whose answers can be much longer phrases, is more challenging. The answer in SQuAD is a segment of text, or span, from the corresponding reading passage. Similar to SQuAD, MS-MARCO BIBREF1 is a reading comprehension dataset which aims to answer a question given a set of passages. The answer in MS-MARCO is generated by humans after reading all related passages and is not necessarily a sub-span of the passages.", "id": 1415, "question": "Which framework do they propose in this paper?", "title": "S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Following the overview in Figure 1 , our approach consists of two parts: evidence extraction and answer synthesis. The two parts are trained in two stages. The evidence extraction part aims to extract evidence snippets related to the question and passage. The answer synthesis part aims to generate the answer based on the extracted evidence snippets. We propose a multi-task learning framework for the evidence extraction shown in Figure 15 , and use the sequence-to-sequence model with additional features of the start and end positions of the evidence snippet for the answer synthesis shown in Figure 3 .", "id": 1416, "question": "Why is MS-MARCO different from SQuAD?", "title": "S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Sentiment analysis and emotion recognition, as two closely related subfields of affective computing, play a key role in the advancement of artificial intelligence BIBREF0 . However, the complexity and ambiguity of natural language constitute a wide range of challenges for computational systems.", "id": 1417, "question": "Did they experiment with pre-training schemes?", "title": "IIIDYT at SemEval-2018 Task 3: Irony detection in English tweets" }, { "answers": [ "" ], "context": "For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 . The corpus comprises different types of irony:", "id": 1418, "question": "What were their results on the test set?", "title": "IIIDYT at SemEval-2018 Task 3: Irony detection in English tweets" }, { "answers": [ "" ], "context": "The goal of subtask A was to build a binary classification system that predicts if a tweet is ironic or non-ironic. In the following sections, we first describe the dataset provided for the task and our pre-processing pipeline. 
Later, we lay out the proposed model architecture, our experiments, and results.", "id": 1419, "question": "What is the size of the dataset?", "title": "IIIDYT at SemEval-2018 Task 3: Irony detection in English tweets" }, { "answers": [ "" ], "context": "Representation learning approaches usually require extensive amounts of data to derive proper results. Moreover, previous studies have shown that initializing representations using random values generally causes the performance to drop. For these reasons, we rely on pre-trained word embeddings as a means of providing the model with an adequate setting. We experiment with GloVe BIBREF14 for small sizes, namely 25, 50 and 100. This is based on previous work showing that representation learning models based on convolutional neural networks perform well compared to traditional machine learning methods with a significantly smaller feature vector size, while at the same time preventing over-fitting and accelerating computation (e.g., BIBREF2 ).", "id": 1420, "question": "What was the baseline model?", "title": "IIIDYT at SemEval-2018 Task 3: Irony detection in English tweets" }, { "answers": [ "" ], "context": "Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3.", "id": 1421, "question": "What models are evaluated with QAGS?", "title": "Asking and Answering Questions to Evaluate the Factual Consistency of Summaries" }, { "answers": [ "" ], "context": "Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion.", "id": 1422, "question": "Do they use crowdsourcing to collect human judgements?", "title": "Asking and Answering Questions to Evaluate the Factual Consistency of Summaries" }, { "answers": [ "" ], "context": "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 ; these often include acoustic frontends, duration models, acoustic prediction models, and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and result in un-natural synthesized speech.", "id": 1423, "question": "Which dataset(s) do they evaluate on?", "title": "Deep Text-to-Speech System with Seq2Seq Model" }, { "answers": [ "Replacing the attention mechanism with query-key attention, and adding a loss to make the attention mask as diagonal as possible" ], "context": "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's Tacotron BIBREF1 system. Their architecture is based on the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). 
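As a reference for the attention mechanism named here, below is a minimal NumPy sketch of additive (Bahdanau-style) attention in its commonly cited form; the weight matrices and dimensions are illustrative, not Tacotron's actual implementation.

```python
import numpy as np

def bahdanau_attention(query, keys, W_q, W_k, v):
    """Additive attention: score_j = v^T tanh(W_q q + W_k k_j); the
    softmax of the scores weights the keys into a context vector.
    Shapes: query (d_q,), keys (T, d_k), W_q (d_a, d_q), W_k (d_a, d_k), v (d_a,)."""
    scores = np.tanh(W_q @ query + keys @ W_k.T) @ v  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over time steps
    return weights @ keys, weights                    # context (d_k,), weights (T,)

rng = np.random.default_rng(0)
q, K = rng.normal(size=4), rng.normal(size=(6, 8))
ctx, w = bahdanau_attention(q, K, rng.normal(size=(5, 4)),
                            rng.normal(size=(5, 8)), rng.normal(size=5))
print(ctx.shape, w.sum())  # (8,) ~1.0
```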
This is the first work to propose training a Seq2Seq model to convert text to a mel spectrogram, which can then be converted to an audio waveform via iterative algorithms such as Griffin-Lim BIBREF8 .", "id": 1424, "question": "Which modifications do they make to well-established Seq2seq architectures?", "title": "Deep Text-to-Speech System with Seq2Seq Model" }, { "answers": [ "" ], "context": "The architecture of our model utilizes an RNN-based Seq2Seq model for generating a mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin-Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "id": 1425, "question": "How do they measure the size of models?", "title": "Deep Text-to-Speech System with Seq2Seq Model" }, { "answers": [ "" ], "context": "The encoder acts to encode the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of an INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). Two linear projection layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension; these are the key and value vectors. DISPLAYFORM0 ", "id": 1426, "question": "Do they reduce the number of parameters in their architecture compared to other direct text-to-speech models?", "title": "Deep Text-to-Speech System with Seq2Seq Model" }, { "answers": [ "" ], "context": "Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc.", "id": 1427, "question": "Do they use pretrained models?", "title": "Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach" }, { "answers": [ "" ], "context": "The $\textsc {0shot-tc}$ problem has so far been modeled in an overly restrictive way. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of the classes are seen and their labeled instances are available to train a model, which we define here as Definition-Restrictive:", "id": 1428, "question": "What are their baseline models?", "title": "Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach" }, { "answers": [ "how long it takes the system to lemmatize a set number of words" ], "context": "Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. 
The lemma is also called the dictionary form, or citation form, and it represents all inflected words sharing the same meaning.", "id": 1429, "question": "How was speed measured?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "97.32%" ], "context": "Arabic is the largest Semitic language, spoken by more than 400 million people. It is one of the six official languages of the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow some templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون (wsyftHwn) “and they will open” has the triliteral root فتح (ftH), which has the basic meaning of opening, the prefix وس (ws) “and will”, the suffix ون (wn) “they”, the stem يفتح (yftH) “open”, and the lemma فتح (ftH) “the concept of opening”.", "id": 1430, "question": "What were their accuracy results on the task?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "" ], "context": "To make the annotated data publicly available, we selected 70 news articles from the Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from 2013 to 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). The articles contain 18,300 words, and they are evenly distributed among these 7 genres, with 10 articles each.", "id": 1431, "question": "What is the state of the art?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "" ], "context": "We were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art segmentation system (MADAMIRA), which considers surrounding context and many linguistic features. This system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9 . This work can be considered an extension of word segmentation.", "id": 1432, "question": "How was the dataset annotated?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "" ], "context": "Data was formatted in a plain text format where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. 
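A minimal sketch of the word-level comparison this setup implies (not the authors' evaluation script); the Buckwalter-style lemma strings below are toy values.

```python
def lemma_accuracy(gold_lines, sys_lines):
    """Word-level accuracy: each line is one sentence of space-separated
    lemmas; count positions where system and gold lemmas agree."""
    correct = total = 0
    for gold, sys in zip(gold_lines, sys_lines):
        for g, s in zip(gold.split(), sys.split()):
            correct += (g == s)
            total += 1
    return correct / total

gold = ["ftH ktAb", "qr> jrydp"]   # toy gold lemmas, one sentence per line
pred = ["ftH ktb",  "qr> jrydp"]   # toy system output
print(f"{lemma_accuracy(gold, pred):.2f}")  # 0.75
```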
For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different writings of foreign named entities, for example هونغ كونغ، هونج كونج (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g., the lemmas في، فيما (fy, fymA) are both valid for the function word فيما (fymA) “while”).", "id": 1433, "question": "What is the size of the dataset?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "" ], "context": "Most of the lemmatization errors in our system are due to the fact that we use the most common diacritization of words without considering their contexts, which cannot resolve the ambiguity in cases like nouns and adjectives that share the same diacritization forms; for example, the word أكاديمية (AkAdymyp) can be either a noun whose lemma is أكاديمية (AkAdymyp) “academy”, or an adjective whose lemma is أكاديمي (AkAdymy) “academic”. For MADAMIRA, errors also arise from selecting the incorrect Part-of-Speech (POS) for ambiguous words, and from foreign named entities.", "id": 1434, "question": "Where did they collect their dataset from?", "title": "Build Fast and Accurate Lemmatization for Arabic" }, { "answers": [ "" ], "context": "Goal-oriented dialogue systems aim to automatically identify the intent of the user as expressed in natural language, extract associated arguments or slots, and take actions accordingly to satisfy the user’s requests BIBREF0. In such systems, the speakers' utterances are typically recognized using an ASR system. Then the intent of the speaker and related slots are identified from the recognized word sequence using an LU component. Finally, a dialogue manager (DM) interacts with the user (not necessarily in natural language) and helps the user achieve the task that the system is designed to support. As a result, the quality of ASR systems has a direct impact on downstream tasks such as LU and DM. This becomes more evident in cases where a generic ASR is used, instead of a domain-specific one BIBREF1.", "id": 1435, "question": "How much in-domain data is enough for joint models to outperform baselines?", "title": "Joint Contextual Modeling for ASR Correction and Language Understanding" }, { "answers": [ "" ], "context": "Word Confusion Networks: A compact and normalized class of word lattices, called word confusion networks (WCNs), was initially proposed for improving ASR performance BIBREF5. WCNs are much smaller than ASR lattices but have better or comparable word and oracle accuracy, and because of this they have been used for many tasks, including SLU BIBREF6. However, to the best of our knowledge they have not been used with Neural Semantic Parsers implemented by Recurrent Neural Networks (RNNs) or similar architectures. The closest work would be BIBREF7, who propose to traverse an input lattice in topological order and use the RNN hidden state of the lattice final state as the dense vector representing the entire lattice. However, word confusion networks provide a much better and more efficient solution thanks to token alignments. 
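To make the data structure concrete, here is a toy word confusion network represented as a sequence of aligned slots, each mapping competing hypotheses (including a deletion symbol) to posterior probabilities; all words and values are invented for illustration.

```python
# A word confusion network as a list of time-aligned slots; each slot maps
# competing hypotheses (including the empty word "*DEL*") to posteriors.
wcn = [
    {"flights": 0.7, "flight": 0.3},
    {"to": 0.9, "*DEL*": 0.1},
    {"boston": 0.6, "austin": 0.4},
]

def one_best(wcn):
    """Pick the highest-posterior word in each slot, dropping deletions."""
    words = (max(slot, key=slot.get) for slot in wcn)
    return [w for w in words if w != "*DEL*"]

print(one_best(wcn))  # ['flights', 'to', 'boston']
```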
We use this idea to first infer WCNs from ASR n-best lists and then directly use them for ASR correction and LU in a joint fashion.", "id": 1436, "question": "How many parameters does their proposed joint model have?", "title": "Joint Contextual Modeling for ASR Correction and Language Understanding" }, { "answers": [ "" ], "context": "Developing tools for processing many languages has long been an important goal in NLP BIBREF0 , BIBREF1 , but it was only when statistical methods became standard that massively multilingual NLP became economical. The mainstream approach for multilingual NLP is to design language-specific models. For each language of interest, the resources necessary for training the model are obtained (or created), and separate parameters are fit for each language. This approach is simple and grants the flexibility of customizing the model and features to the needs of each language, but it is suboptimal for theoretical and practical reasons. Theoretically, the study of linguistic typology tells us that many languages share morphological, phonological, and syntactic phenomena BIBREF3 ; therefore, the mainstream approach misses an opportunity to exploit relevant supervision from typologically related languages. Practically, it is inconvenient to deploy or distribute NLP tools that are customized for many different languages because, for each language of interest, we need to configure, train, tune, monitor, and occasionally update the model. Furthermore, code-switching or code-mixing (mixing more than one language in the same discourse), which is pervasive in some genres, in particular social media, presents a challenge for monolingually-trained NLP models BIBREF4 .", "id": 1437, "question": "How does the model work if no treebank is available?", "title": "Many Languages, One Parser" }, { "answers": [ "" ], "context": "Our goal is to train a dependency parser for a set of target languages ${L}^t$ , given universal dependency annotations in a set of source languages ${L}^s$ . Ideally, we would like to have training data in all target languages (i.e., $L^t \subseteq L^s$ ), but we are also interested in the case where the sets of source and target languages are disjoint (i.e., $L^t \cap L^s = \emptyset $ ). When all languages in $L^t$ have a large treebank, the mainstream approach has been to train one monolingual parser per target language and route sentences of a given language to the corresponding parser at test time. In contrast, our approach is to train one parsing model with the union of treebanks in $L^s$ , then use this single trained model to parse text in any language in $L^t$ , hence the name “Many Languages, One Parser” (MaLOPa). MaLOPa strikes a balance between (1) enabling cross-lingual model transfer via language-invariant input representations, i.e., coarse POS tags, multilingual word embeddings and multilingual word clusters, and (2) tweaking the behavior of the parser depending on the current input language via language-specific representations, i.e., fine-grained POS tags and language embeddings.", "id": 1438, "question": "How many languages have this parser been tried on?", "title": "Many Languages, One Parser" }, { "answers": [ "" ], "context": "Natural Language Generation (NLG) is an NLP task that consists in generating a sequence of natural language sentences from non-linguistic data. 
Traditional approaches to NLG consist in creating specific algorithms in the consensual NLG pipeline BIBREF0, but there has recently been a strong interest in End-to-End (E2E) NLG systems, which are able to jointly learn sentence planning and surface realization BIBREF1, BIBREF2, BIBREF3, BIBREF4. Probably the best-known effort in this trend is the E2E NLG challenge BIBREF5, whose task was to perform sentence planning and realization from dialogue act-based Meaning Representation (MR) on unaligned data. For instance, Figure FIGREF1 presents, on the upper part, a meaning representation and, on the lower part, one possible textual realization to convey this meaning. Although the challenge was a great success, the data used in the challenge contained a lot of structural redundancy, a limited number of concepts, and several reference texts per MR input (8.1 on average). This is an ideal case for machine learning, but is it the one that is encountered in all E2E NLG real-world applications?", "id": 1439, "question": "Do they use attention?", "title": "Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models" }, { "answers": [ "" ], "context": "E2E Natural Language Generation systems are typically based on the Recurrent Neural Network (RNN) architecture consisting of an encoder and a decoder, also known as seq2seq BIBREF8. The encoder takes a sequence of source words $\mathbf {x}~=~\lbrace {x_1},{x_2}, ..., {x_{T_x}}\rbrace $ and encodes it to a fixed-length vector. The decoder then decodes this vector into a sequence of target words $\mathbf {y}~=~\lbrace {y_1},{y_2}, ..., {y_{T_y}}\rbrace $. Seq2seq models are able to treat variable-sized source and target sequences, making them a great choice for NLG and NLU tasks.", "id": 1440, "question": "What non-annotated datasets are considered?", "title": "Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models" }, { "answers": [ "" ], "context": "Story generation is an important but challenging task because it requires dealing with logic and implicit knowledge BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Story ending generation aims at concluding a story and completing the plot given a story context. We argue that solving this task involves addressing the following issues: 1) Representing the context clues which contain key information for planning a reasonable ending; and 2) Using implicit knowledge (e.g., commonsense knowledge) to facilitate understanding of the story and better predict what will happen next.", "id": 1441, "question": "Did they compare to Transformer based large language models?", "title": "Story Ending Generation with Incremental Encoding and Commonsense Knowledge" }, { "answers": [ "" ], "context": "The corpus we used in this paper was first designed for the Story Cloze Test (SCT) BIBREF10 , which requires selecting the correct ending from two candidates given a story context. Feature-based BIBREF11 , BIBREF12 or neural BIBREF8 , BIBREF13 classification models have been proposed to measure the coherence between a candidate ending and a story context from various aspects such as event, sentiment, and topic. 
However, story ending generation BIBREF14 , BIBREF15 , BIBREF16 is more challenging in that the task requires modeling context clues and implicit knowledge to produce reasonable endings.", "id": 1442, "question": "Which baselines are they using?", "title": "Story Ending Generation with Incremental Encoding and Commonsense Knowledge" }, { "answers": [ "cloze-style reading comprehension and user query reading comprehension questions" ], "context": "Machine Reading Comprehension (MRC), which aims to teach the machine to comprehend human languages and answer questions based on reading materials, has become enormously popular in recent research. Among various reading comprehension tasks, cloze-style reading comprehension is relatively easy to follow due to its simplicity in definition, which requires the model to fill an exact word into the query to form a coherent sentence according to the document material. Several cloze-style reading comprehension datasets are publicly available, such as CNN/Daily Mail BIBREF0 , Children's Book Test BIBREF1 , People Daily and Children's Fairy Tale BIBREF2 .", "id": 1443, "question": "What two types does the Chinese reading comprehension dataset consist of?", "title": "Dataset for the First Evaluation on Chinese Machine Reading Comprehension" }, { "answers": [ "English" ], "context": "In this section, we will introduce several public cloze-style reading comprehension datasets.", "id": 1444, "question": "For which languages are most of the existing MRC datasets created?", "title": "Dataset for the First Evaluation on Chinese Machine Reading Comprehension" }, { "answers": [ "" ], "context": "One of the ultimate goals of Natural Language Processing (NLP) is machine reading BIBREF0 , the automatic, unsupervised understanding of text. One way of pursuing machine reading is by semantic parsing, which transforms text into its meaning representation. However, capturing the meaning is not the final goal; the meaning representation needs to be predefined and structured in a way that supports reasoning. Ontologies provide a common vocabulary for meaning representations and support reasoning, which is vital for understanding the text. To enable flexibility when encountering new concepts and relations in text, in machine reading we want to be able to learn and extend the ontology while reading. Traditional methods for ontology learning BIBREF1 , BIBREF2 are only concerned with discovering the salient concepts from text. Thus, they work in a macro-reading fashion BIBREF3 , where the goal is to extract facts from a large collection of texts, but not necessarily all of them, as opposed to a micro-reading fashion, where the goal is to extract every fact from the input text. Semantic parsers operate in a micro-reading fashion. Consequently, ontologies with only the salient concepts are not enough for semantic parsing. Furthermore, the traditional methods learn an ontology for a particular domain, where the text is used just as a tool. On the other hand, ontologies are used just as a tool to represent meaning in the semantic parsing setting. When developing a semantic parser it is not trivial to get the best meaning representation for the observed text, especially if the content is not known yet. Semantic parsing datasets have been created by either selecting texts that can be expressed with a given meaning representation, like the Free917 dataset BIBREF4 , or by manually deriving the meaning representation given the text, like the Atis dataset BIBREF5 . 
In both datasets, each unit of text has its corresponding meaning representation. While Free917 uses Freebase BIBREF6 , which is a very big multi-domain ontology, it is not possible to represent an arbitrary sentence with Freebase or any other existing ontology.", "id": 1445, "question": "How did they induce the CFG?", "title": "Joint learning of ontology and semantic parser from text" }, { "answers": [ "" ], "context": "In this section, we propose a semi-automatic bootstrapping procedure for grammar induction, which searches for the most frequent patterns and constructs new production rules from them. One of the main challenges is to make the induction in a way that minimizes human involvement and maximizes the quality of semantic trees.", "id": 1446, "question": "How big is their dataset?", "title": "Joint learning of ontology and semantic parser from text" }, { "answers": [ "" ], "context": "[Algorithm 1 (Standard Beam Search): for t = 0 to T and each beam element i = 1 to k, candidate expansions are scored with the local output scoring function; top-k-max selects the top k values of the input matrix and top-k-argmax the top k argmax index pairs; the selected tokens are embedded, and a nonlinear recurrent function returns the state at the next step; finally, follow-backpointer recovers the output sequence.] Sequence-to-sequence (seq2seq) models have been successfully used for many sequential decision tasks such as machine translation BIBREF0 , BIBREF1 , parsing BIBREF2 , BIBREF3 , summarization BIBREF4 , dialog generation BIBREF5 , and image captioning BIBREF6 . Beam search is a desirable choice of test-time decoding algorithm for such models because it potentially avoids search errors made by simpler greedy methods. However, the typical approach to training neural sequence models is to use a locally normalized maximum likelihood objective (cross-entropy training) BIBREF0 . This objective does not directly reason about the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding BIBREF7 , BIBREF8 , BIBREF9 . These negative results are not unexpected. The training procedure was not search-aware: it was not able to consider the effect that changing the model's scores might have on the ease of search while using a beam decoding, greedy decoding, or otherwise.", "id": 1447, "question": "By how much do they outperform basic greedy and cross-entropy beam decoding?", "title": "A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models" }, { "answers": [ "" ], "context": "We denote the seq2seq model parameterized by INLINEFORM0 as INLINEFORM1 . We denote the input sequence as INLINEFORM2 , the gold output sequence as INLINEFORM3 and the result of beam search over INLINEFORM4 as INLINEFORM5 . Ideally, we would like to directly minimize a final evaluation loss, INLINEFORM6 , evaluated on the result of running beam search with input INLINEFORM7 and model INLINEFORM8 . Throughout this paper we assume that the evaluation loss decomposes over time steps INLINEFORM9 as: INLINEFORM10 . 
We refer to this idealized training objective that directly evaluates prediction loss as the “direct loss” objective and define it as: DISPLAYFORM0 ", "id": 1448, "question": "Do they provide a framework for building a sub-differentiable for any final loss metric?", "title": "A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models" }, { "answers": [ "" ], "context": "[Algorithm 2 (continuous-top-k-argmax): for i = 1 to k, a peaked-softmax, which will be dominated by scores closest to the target value, produces a soft approximation of each top-k argmax selection; the square operation is element-wise.] Formally, beam search is a procedure with hyperparameter INLINEFORM7 that maintains a beam of INLINEFORM8 elements at each time step and expands each of the INLINEFORM9 elements to find the INLINEFORM10 -best candidates for the next time step. The procedure finds an approximate argmax of a scoring function defined on output sequences.", "id": 1449, "question": "Do they compare partially complete sequences (created during steps of beam search) to gold/target sequences?", "title": "A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models" }, { "answers": [ "" ], "context": "[Algorithm 3 (Continuous relaxation to beam search): for t = 0 to T and each beam element i = 1 to k, candidates are scored with a local output scoring function; continuous-top-k-argmax (Algorithm 2) is called to compute soft backpointers, which combine contributions from vocabulary items with a peaked distribution over the candidates; for j = 1 to k, the contributions from the soft backpointers of each beam element are aggregated, a nonlinear recurrent function returns the state at the next step, and the loss for the sequence with the highest model score on the beam is picked in a soft manner.]", "id": 1450, "question": "Which loss metrics do they try in their new training procedure evaluated on the output of beam search?", "title": "A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models" }, { "answers": [ "" ], "context": "Sentiment analysis aims to predict the sentiment polarity of user-generated data with emotional orientation, such as movie reviews. The exponential increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T).", "id": 1451, "question": "How are different domains weighted in WDIRL?", "title": "Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis" }, { "answers": [ "" ], "context": "For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also apply to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. 
And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d.$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$:", "id": 1452, "question": "How is DIRL evaluated?", "title": "Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis" }, { "answers": [ "12 binary-class classification and multi-class classification of reviews based on rating" ], "context": "Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25.", "id": 1453, "question": "Which sentiment analysis tasks are addressed?", "title": "Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis" }, { "answers": [ "" ], "context": "The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP.", "id": 1454, "question": "Which NLP area has the highest average citations for women authors?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "machine translation, statistical machine, sentiment analysis" ], "context": "Q. How big is the ACL Anthology (AA)? How is it changing with time?", "id": 1455, "question": "Which 3 NLP areas are cited the most?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "CL Journal and EMNLP conference" ], "context": "NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to no representation from certain nationalities, races, genders, languages, income levels, ages, and physical abilities. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity).", "id": 1456, "question": "Which journal and conference are cited the most in recent years?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "English, Chinese, French, Japanese and Arabic" ], "context": "The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP.", "id": 1457, "question": "Which 5 languages appear most frequently in AA paper titles?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "" ], "context": "While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. 
We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18.", "id": 1458, "question": "What aspect of NLP research is examined?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "" ], "context": "Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but they are also crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language limit the world for the speakers of that language.", "id": 1459, "question": "Are academically younger authors cited less than older ones?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "" ], "context": "Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research.", "id": 1460, "question": "How many papers are used in the experiment?", "title": "The State of NLP Literature: A Diachronic Analysis of the ACL Anthology" }, { "answers": [ "" ], "context": "Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem, which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's success stems from both the challenge it presents and the recent successes in approaching human level function. As most, if not all, of the problems humans solve every day can be posed as a question, creating a deep learning based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group tested the limits of applying attention in BERT BIBREF0 to improving the network's performance on the SQUAD2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends to these vectors in a global fashion. We propose BERTQA BIBREF2, which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. The SQUAD2.0 answers range in length from zero to multiple words, and this additional attention provides hierarchical information that will allow the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQUAD2.0 dataset with additional backtranslated examples. 
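A sketch of the round-trip (back-translation) augmentation idea mentioned here; translate() is a placeholder for a real MT service call, so this is an assumption-laden illustration rather than the authors' pipeline.

```python
def translate(text, src, tgt):
    """Placeholder for a real MT service call (e.g., a cloud translation
    API); returns the input unchanged so the sketch stays runnable."""
    return text

def back_translate(utterance, pivots=("de", "fr")):
    """Round-trip each utterance through pivot languages to generate
    paraphrases that (ideally) preserve the original answer spans/labels."""
    variants = []
    for lang in pivots:
        pivot_text = translate(utterance, src="en", tgt=lang)
        variants.append(translate(pivot_text, src=lang, tgt="en"))
    return variants

print(back_translate("Why did you change your mind?"))
```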
This augmented dataset will be publicly available on our github BIBREF4 upon the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442, respectively. The experiments took around 300 GPU hours in total.", "id": 1461, "question": "What ensemble methods are used for best model?", "title": "BERTQA -- Attention on Steroids" }, { "answers": [ "" ], "context": "The SQUAD2.0 creators proposed this dataset as a means for networks to actually understand the text they were being interrogated about rather than simply being extractive parsers. Many networks stepped up to the challenge, including BERT, BIDAF, and QANET. BERT is a fully feed forward network that is based on the transformer architecture BIBREF5. The base BERT model has 12 transformer encoder layers that terminate in an interchangeable final layer which can be finetuned to the specific task. We chose this network as our baseline because of its use of contextual embeddings and global attention and because of the speed advantage derived from an RNN-free architecture. We derived inspiration for our modifications from the BIDAF and QANET models. BIDAF is an LSTM based network that uses character, word, and contextual embeddings which are fed through Context-to-Query (C2Q) and Query-to-Context (Q2C) layers. The final logits are derived from separate Start and End output layers, as opposed to BERT which produces these logits together. Our C2Q/Q2C addition to BERT and the Dense Layer/LSTM based separate final Start and End logit prediction layer were inspired by this paper. We also referred to the QANET model, which is also a fully feed forward network that emphasizes the use of convolutions to capture the local structure of text. Based on this paper, we created a convolutional layer within the C2Q/Q2C architecture to add localized information to BERT's global attention and the C2Q/Q2C coattention.", "id": 1462, "question": "What hyperparameters have been tuned?", "title": "BERTQA -- Attention on Steroids" }, { "answers": [ "Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improves F1 from 74.34 to 74.95 " ], "context": "We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained, and ensembled our three best networks. Each part of the project is discussed further in the subsections below.", "id": 1463, "question": "How much F1 was improved after adding skip connections?", "title": "BERTQA -- Attention on Steroids" }, { "answers": [ "" ], "context": "Over the last few years, neural sequence to sequence models BIBREF0 , BIBREF1 , BIBREF2 have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts BIBREF3 , BIBREF4 , BIBREF5 . 
With more gains arising from continued research on new neural network architectures and accompanying training techniques BIBREF6 , BIBREF7 , BIBREF8 , NMT researchers, both in industry and academia, have doubled down on their ability to train high-capacity models on large corpora with gradient-based optimization.", "id": 1464, "question": "Where do they retrieve neighbor n-grams from in their approach?", "title": "Non-Parametric Adaptation for Neural Machine Translation" }, { "answers": [ "" ], "context": "Standard approaches for Neural Machine Translation rely on seq2seq architectures BIBREF0 , BIBREF1 , where, given a source sequence INLINEFORM0 and a target sequence INLINEFORM1 , the goal is to model the probability distribution INLINEFORM2 .", "id": 1465, "question": "Against which systems do they compare their results?", "title": "Non-Parametric Adaptation for Neural Machine Translation" }, { "answers": [ "" ], "context": "Existing approaches have proposed using off-the-shelf search engines for the retrieval stage. However, our objective differs from that of traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors that might improve translation performance, which need not correlate with maximizing sentence similarity.", "id": 1466, "question": "Does their combination of a non-parametric retrieval and neural network get trained end-to-end?", "title": "Non-Parametric Adaptation for Neural Machine Translation" }, { "answers": [ "" ], "context": "To incorporate the retrieved neighbors, INLINEFORM0 , within the NMT model, we first encode them using Transformer layers, as described in subsection UID12 . This encoded memory is then used within the decoder via an attention mechanism, as described in subsection UID15 .", "id": 1467, "question": "Which similarity measure do they use in their n-gram retrieval approach?", "title": "Non-Parametric Adaptation for Neural Machine Translation" }, { "answers": [ "" ], "context": "Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.", "id": 1468, "question": "Where is MVCNN pretrained?", "title": "Multichannel Variable-Size Convolution for Sentence Classification" }, { "answers": [ "0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj" ], "context": "Much prior work has exploited deep neural networks to model sentences.", "id": 1469, "question": "How much gain does the model achieve with pretraining MVCNN?", "title": "Multichannel Variable-Size Convolution for Sentence Classification" }, { "answers": [ "" ], "context": "We now describe the architecture of our model MVCNN, illustrated in Figure 1 .", "id": 1470, "question": "What are the effects of extracting features of multigranular phrases?", "title": "Multichannel Variable-Size Convolution for Sentence Classification" }, { "answers": [ "" ], "context": "This part introduces two training tricks that enhance the performance of MVCNN in practice.", "id": 1471, "question": "What are the effects of diverse versions of pretrained word embeddings?
", "title": "Multichannel Variable-Size Convolution for Sentence Classification" }, { "answers": [ "" ], "context": "We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.", "id": 1472, "question": "How does MVCNN compare to CNN?", "title": "Multichannel Variable-Size Convolution for Sentence Classification" }, { "answers": [ "82.0%" ], "context": "The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails, contradicts, or is neutral with respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we are focused on the following natural language generation task based on NLI. Given the premise, the goal is to generate a stream of hypotheses that comply with the label (entailment, contradiction or neutral). In addition to reading capabilities, this task also requires language generation capabilities.", "id": 1473, "question": "What is the highest accuracy score achieved?", "title": "Constructing a Natural Language Inference Dataset using Generative Neural Networks" }, { "answers": [ "" ], "context": "NLI has been the focal point of Recognizing Textual Entailment (RTE) Challenges, where the goal is to determine if the premise entails the hypothesis or not. The proposed approaches for RTE include a bag-of-words matching approach BIBREF14 , a predicate-argument structure matching approach BIBREF15 , and a logical inference approach BIBREF16 , BIBREF17 . Another rule-based inference approach was proposed by BIBREF18 . This approach allows generation of new hypotheses by transforming parse trees of the premise while maintaining entailment. BIBREF19 proposes an approach for constructing training datasets by extracting sentences from news articles that tend to be in an entailment relationship.", "id": 1474, "question": "What is the size range of the datasets?", "title": "Constructing a Natural Language Inference Dataset using Generative Neural Networks" }, { "answers": [ "" ], "context": "The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from Transformers (BERT) BIBREF1 under various data conditions.
Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.", "id": 1475, "question": "Does the paper report F1-scores for the age and language variety tasks?", "title": "BERT-Based Arabic Social Media Author Profiling" }, { "answers": [ "" ], "context": "For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on test. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under 25, between 25 and 34, above 35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.", "id": 1476, "question": "Are the models compared to some baseline models?", "title": "BERT-Based Arabic Social Media Author Profiling" }, { "answers": [ "" ], "context": "As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce an additional in-house dataset labeled with dialect and gender tags to the task, as we will explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models.", "id": 1477, "question": "What are the in-house data employed?", "title": "BERT-Based Arabic Social Media Author Profiling" }, { "answers": [ "Data released for the APDA shared task contains 3 datasets." ], "context": "Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\\mu =0$ and $\\sigma =1$, i.e., $W \\sim N(0,1)$. We use a maximum sequence length of 50 tokens and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $10^{-3}$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model.
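A sketch of the GRU baseline just described, plus the tweet-to-user majority vote used to port tweet-level predictions to the user level. The stated hyperparameters (one unidirectional GRU layer of 500 units, N(0,1) embedding initialization, dropout 0.5, Adam at 1e-3) follow the text; the embedding dimension is an assumption:

```python
import torch
import torch.nn as nn
from collections import Counter

class GRUBaseline(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=300, hidden=500, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # emb_dim is assumed
        nn.init.normal_(self.emb.weight, mean=0.0, std=1.0)   # W ~ N(0, 1)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)  # one unidirectional layer
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, <=50) token ids
        _, h = self.gru(self.emb(x))       # h: (1, batch, hidden)
        return self.out(self.drop(h[-1]))  # logits: (batch, n_classes)

def user_level(tweet_preds_by_user):
    """Majority vote over the 100 tweet-level predictions of each user."""
    return {u: Counter(p).most_common(1)[0][0]
            for u, p in tweet_preds_by_user.items()}

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```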
We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs.", "id": 1478, "question": "What are the three datasets used in the paper?", "title": "BERT-Based Arabic Social Media Author Profiling" }, { "answers": [ "" ], "context": "Practitioners in the development sector have long recognized the potential of qualitative data to inform programming and gain a better understanding of values, behaviors and attitudes of people and communities affected by their efforts. Some organizations mainly rely on interview or focus group data, some also consider policy documents and reports, and others have started tapping into social media data. Regardless of where the data comes from, analyzing it in a systematic way to inform quick decision-making poses challenges, in terms of the high costs, time, or expertise required.", "id": 1479, "question": "What are the potential risks of this approach?", "title": "Data Innovation for International Development: An overview of natural language processing for qualitative data analysis" }, { "answers": [ "" ], "context": "There are two broad approaches to NLP: supervised learning and unsupervised learning BIBREF0 . Supervised learning assumes that an outcome variable is known and an algorithm is used to predict the correct variable. Classifying email as spam based on how the user has classified previous mail is a classic example. In social science, we may want to predict the voting behavior of a legislator with the goal of inferring ideological positions from such behavior. In development, interest may center around characteristics that predict successful completion of a training program based on a beneficiary's previous experience or demographic characteristics.", "id": 1480, "question": "What elements of natural language processing are proposed to analyze qualitative data?", "title": "Data Innovation for International Development: An overview of natural language processing for qualitative data analysis" }, { "answers": [ "" ], "context": "The financial performance of a corporation is correlated with its social responsibility: whether its products are environmentally friendly, whether its manufacturing safety procedures protect against accidents, or whether it uses child labor in its factories in third-world countries. Consumers care about these factors when making purchasing decisions in supermarkets, and investors integrate environmental, social, and governance factors, known as ESG, into their investment decision-making. It has been shown that corporations' financial results have a positive correlation with their sustainability business models, and the ESG investment methodology can help reduce portfolio risk and generate competitive returns. However, one barrier to ESG evaluation is the lack of a relatively complete and centralized information source. Currently, ESG analysts leverage financial reports to collect the necessary data for proper evaluation, such as greenhouse gas emissions or discrimination lawsuits, but this data is inconsistent and latent.
In this study, we consider social media, as a crowdsourced data feed, to be a new data source for this task.", "id": 1481, "question": "How does the method measure the impact of the event on market prices?", "title": "Empirical Study on Detecting Controversy in Social Media" }, { "answers": [ "" ], "context": "There have been a few studies on assessing the sustainability of entities. The UN Commission on Sustainable Development (CSD) published a list of about 140 indicators on various dimensions of sustainability BIBREF0 . In BIBREF1 , Singh et al. reviewed various methodologies, indicators, and indices on sustainability assessment, which include environmental and social domains. All the data mentioned in their work, on which the assessments were conducted, are processed datasets, some of them collected from company annual reports and publications, newspaper clips, and management interviews. They stated that the large number of indicators or indices raises the need for data collection. Our work uses social media data as a new alternative data source to complement traditional data collection.", "id": 1482, "question": "How is sentiment polarity measured?", "title": "Empirical Study on Detecting Controversy in Social Media" }, { "answers": [ "" ], "context": "Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization BIBREF0, Natural Language Generation BIBREF1, and Text Classification BIBREF2. Such advances have enabled scientists to study common language practices. One such area, humor, has garnered focus in classification BIBREF3, BIBREF4, generation BIBREF5, BIBREF6, and in social media BIBREF7.", "id": 1483, "question": "Which part of the joke is more important in humor?", "title": "Humor Detection: A Transformer Gets the Last Laugh" }, { "answers": [ "It had the highest accuracy compared to all datasets (0.986), and it had the highest improvement over previous methods on the same dataset (8%)" ], "context": "In the related work of joke identification, we find a myriad of methods employed over the years: statistical and N-gram analysis BIBREF13, Regression Trees BIBREF14, Word2Vec combined with K-NN Human Centric Features BIBREF15, and Convolutional Neural Networks BIBREF4.", "id": 1484, "question": "What is the improvement in accuracy for Short Jokes in relation to other types of jokes?", "title": "Humor Detection: A Transformer Gets the Last Laugh" }, { "answers": [ "" ], "context": "We gathered jokes from a variety of sources, each covering a different type of humor. These datasets include jokes of multiple sentences (the Short Jokes dataset), jokes with only one sentence (the Puns dataset), and more mixed jokes (the Reddit dataset). We have made our code and datasets open source for others to use.", "id": 1485, "question": "What kind of humor have they evaluated?", "title": "Humor Detection: A Transformer Gets the Last Laugh" }, { "answers": [ "" ], "context": "Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively.
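A sketch of the hourly collection loop described just above. The endpoint, subreddit, and field mapping (title as body, selftext as punchline) are assumptions for illustration; the actual scraper may have used a different client or schema:

```python
# Hourly Reddit collection sketch. Assumptions: the public .json listing
# endpoint and the title/selftext split into body/punchline.
import time
import requests

seen = {}  # post id -> record, re-scored whenever the post reappears

def fetch_recent_jokes():
    resp = requests.get(
        "https://www.reddit.com/r/Jokes/new.json?limit=100",
        headers={"User-Agent": "joke-scraper/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    for child in resp.json()["data"]["children"]:
        post = child["data"]
        seen[post["id"]] = {
            "body": post["title"],          # set-up
            "punchline": post["selftext"],
            "score": post["score"],         # upvote score, refreshed per run
        }

while True:
    fetch_recent_jokes()  # a full refresher would also re-query older ids
    time.sleep(3600)
```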
Additionally, we created a dataset that combined the body and punchline.", "id": 1486, "question": "How do they evaluate whether a joke is humorous or not?", "title": "Humor Detection: A Transformer Gets the Last Laugh" }, { "answers": [ "" ], "context": "The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding have grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.", "id": 1487, "question": "Do they report results only on English data?", "title": "Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments" }, { "answers": [ "" ], "context": "We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation.", "id": 1488, "question": "Do the authors have a hypothesis as to why morphological agreement is hardly learned by any model?", "title": "Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments" }, { "answers": [ "" ], "context": "The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with a different annotation schema for several hundred sentences from CoLA not belonging to the development set.", "id": 1489, "question": "Which models are best for learning long-distance movement?", "title": "Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments" }, { "answers": [ "" ], "context": "Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.", "id": 1490, "question": "Where does the data in CoLA come from?", "title": "Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments" }, { "answers": [ "" ], "context": "We wish to emphasize that these features are overlapping and in many cases correlated; thus, not all results from using this analysis set will be independent.
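A sketch of the minor/major feature bookkeeping described above, together with the pairwise MCC computation that the next passage reports. The feature names, toy annotation matrix, and DataFrame layout are invented for illustration:

```python
from itertools import combinations
import pandas as pd
from sklearn.metrics import matthews_corrcoef

# Toy stand-in for the annotation: one boolean column per minor feature,
# one row per sentence (the real matrix is 1043 x 63).
minor = pd.DataFrame({
    "wh_question": [True, False, True, False],
    "island":      [True, False, False, False],
    "passive":     [False, True, False, True],
})
# Each minor feature belongs to exactly one major feature (names assumed).
minor_to_major = {"wh_question": "Movement", "island": "Movement",
                  "passive": "Argument Structure"}

# A sentence bears a major feature iff it bears one or more of its minors.
major = pd.DataFrame({
    m: minor[[f for f, g in minor_to_major.items() if g == m]].any(axis=1)
    for m in set(minor_to_major.values())
})

# Pairwise MCC over all feature pairs (63 minors give 63*62/2 = 1953 pairs).
pairwise_mcc = {(a, b): matthews_corrcoef(minor[a], minor[b])
                for a, b in combinations(minor.columns, 2)}
```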
We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's $r$ for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.", "id": 1491, "question": "How is CoLA grammatically annotated?", "title": "Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments" }, { "answers": [ "Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN" ], "context": "The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but can be answered relying on information contained in a single sentence BIBREF6 . The latest generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , has been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.", "id": 1492, "question": "What baseline did they compare Entity-GCN to?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "" ], "context": "In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.", "id": 1493, "question": "How many documents at a time can Entity-GCN handle?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "" ], "context": "The WikiHop dataset comprises tuples $\\langle q, S_q, C_q, a^\\star \\rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\\star \\in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\\langle s, r, o \\rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as the corpus and Wikidata BIBREF15 as the KB.
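The tuple structure just defined maps naturally onto a small record; the field values below are illustrative only, not actual dataset content:

```python
# Illustration of one WikiHop instance <q, S_q, C_q, a_star>.
from dataclasses import dataclass

@dataclass
class WikiHopInstance:
    query: tuple         # (subject s, relation r, ?) -- not natural language
    supports: list       # S_q: supporting documents, some mere distractors
    candidates: list     # C_q: candidate entities mentioned in S_q
    answer: str          # a_star, a member of C_q

instance = WikiHopInstance(
    query=("Hanging Gardens of Mumbai", "country", None),
    supports=["The Hanging Gardens, in Mumbai, ...",
              "Mumbai is a city in ... India ..."],
    candidates=["India", "Iran", "Pakistan"],
    answer="India",
)
```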
The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant to the query; some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\\langle s, r, ? \\rangle $ where the object entity is unknown and has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .", "id": 1494, "question": "Did they use a relation extraction method to construct the edges in the graph?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "Assign a value to the relation based on whether the mentions occur in the same document, whether the mentions are identical, or whether the mentions are in the same coreference chain." ], "context": "In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \\langle s, r, ? \\rangle $ , we identify mentions in $S_q$ of the entities in $C_q \\cup \\lbrace s\\rbrace $ and create one node per mention. This process is based on the following heuristic:", "id": 1495, "question": "How did they get relations between mentions?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "Exact matches to the entity string and predictions from a coreference resolution system" ], "context": "Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representations. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).", "id": 1496, "question": "How did they detect entity mentions?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "" ], "context": "Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\\mathbf {h}_i^{(0)} = \\mathbf {\\hat{x}}_i$ . Then, at each layer $0\\le \\ell \\le L$ , the update message $\\mathbf {u}_i^{(\\ell )}$ to the $i$-th node is a sum of a transformation $f_s$ of the current node representation $\\mathbf {h}^{(\\ell )}_i$ and transformations of its neighbours: ", "id": 1497, "question": "What is the metric used with WIKIHOP?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models" ], "context": "In this section, we compare our method against recent work as well as perform an ablation study using the WikiHop dataset BIBREF0 .
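A schematic sketch of a gated R-GCN propagation step consistent with the description above (a self transformation plus per-relation transformations of neighbours, mixed with the old state through a learned gate). The gating form and parameter sharing here are assumptions, since the excerpt stops before the paper's exact equations:

```python
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    """Schematic gated R-GCN step; not the paper's exact formulation."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.f_s = nn.Linear(dim, dim)                  # self transformation
        self.f_r = nn.ModuleList(nn.Linear(dim, dim)    # one per relation type
                                 for _ in range(num_relations))
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # h: (nodes, dim); adj: (num_relations, nodes, nodes), row-normalized.
        u = self.f_s(h)
        for r, a in enumerate(adj):
            u = u + a @ self.f_r[r](h)                  # neighbour messages
        g = torch.sigmoid(self.gate(torch.cat([u, h], dim=-1)))
        return g * torch.tanh(u) + (1 - g) * h          # gated update
```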
See Appendix \"Implementation and experiments details\" in the supplementary material for a description of the hyper-parameters of our model and training details.", "id": 1498, "question": "What performance does the Entity-GCN get on WIKIHOP?", "title": "Question Answering by Reasoning Across Documents with Graph Convolutional Networks" }, { "answers": [ "" ], "context": "Affective computing has raised a great deal of interest in recent years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and more successful in making decisions.", "id": 1499, "question": "Do they evaluate only on English datasets?", "title": "Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection" }, { "answers": [ "" ], "context": "The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.", "id": 1500, "question": "What baseline models are used?", "title": "Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection" }, { "answers": [ "" ], "context": "We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that has been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs by a set of classifiers to evaluate the differences in performance obtained by using different machine learning approaches and testing them on different sarcasm-detection datasets.", "id": 1501, "question": "What classical machine learning algorithms are used?", "title": "Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection" }, { "answers": [ "" ], "context": "The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named INLINEFORM0 .", "id": 1502, "question": "What are the different methods used for different corpora?", "title": "Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection" }, { "answers": [ "" ], "context": "The matrix INLINEFORM0 is used and further processed to induce proper Semantic Spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it from the Traditional LSA.
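A minimal sketch of inducing an LSA semantic space with a truncated SVD of the term-document matrix. This shows only the traditional formulation; the statistical T-SVD estimator of Pilato et al. applies a different normalization, which is not reproduced here:

```python
import numpy as np

def lsa_space(term_doc, k):
    """term_doc: (|vocabulary| x |documents|) raw count or tf-idf matrix;
    returns k-dimensional sub-symbolic codings of terms and documents."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    term_vectors = U[:, :k] * s[:k]   # one row per vocabulary term
    doc_vectors = Vt[:k].T * s[:k]    # one row per document
    return term_vectors, doc_vectors

# The document codings then feed the downstream sarcasm classifiers.
terms, docs = lsa_space(np.random.rand(1000, 200), k=50)
```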
It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “general” and “statistical”), the corpus used for building the semantic space plays a key role in performance. As a matter of fact, large and heterogeneous corpora may introduce more noise or overly specific information from a single domain, decreasing the accuracy of the induced models BIBREF36 .", "id": 1503, "question": "In which domains is sarcasm conveyed in different ways?", "title": "Effectiveness of Data-Driven Induction of Semantic Spaces and Traditional Classifiers for Sarcasm Detection" }, { "answers": [ "" ], "context": "Humans communicate using a highly complex structure of multimodal signals. We employ three modalities in a coordinated manner to convey our intentions: language modality (words, phrases and sentences), vision modality (gestures and expressions), and acoustic modality (paralinguistics and changes in vocal tones) BIBREF0 . Understanding this multimodal communication is natural for humans; we do it subconsciously in the cerebrum of our brains every day. However, giving Artificial Intelligence (AI) the capability to understand this form of communication the same way humans do, by incorporating all involved modalities, is a fundamental research challenge. Giving AI the capability to understand human communication narrows the gap in computers' understanding of humans and opens new horizons for the creation of many intelligent entities.", "id": 1504, "question": "What modalities are being used in different datasets?", "title": "Multi-attention Recurrent Network for Human Communication Comprehension" }, { "answers": [ "" ], "context": "Modeling multimodal human communication has been studied previously. Past approaches can be categorized as follows:", "id": 1505, "question": "What is the difference between Long-short Term Hybrid Memory and LSTMs?", "title": "Multi-attention Recurrent Network for Human Communication Comprehension" }, { "answers": [ "" ], "context": "A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good a metaphor is at conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic”.", "id": 1506, "question": "Do they report results only on English data?", "title": "The Effect of Context on Metaphor Paraphrase Aptness Judgments" }, { "answers": [ "" ], "context": " BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking.
This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor and the second is a literal paraphrase candidate.", "id": 1507, "question": "What provisional explanation do the authors give for the impact of document context?", "title": "The Effect of Context on Metaphor Paraphrase Aptness Judgments" }, { "answers": [ "The preceding and following sentences of each metaphor and paraphrase are added as document context" ], "context": "We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowdsourcing experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.", "id": 1508, "question": "What document context was added?", "title": "The Effect of Context on Metaphor Paraphrase Aptness Judgments" }, { "answers": [ "The best performance achieved is a 0.72 F1 score" ], "context": "We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:", "id": 1509, "question": "What were the results of the first experiment?", "title": "The Effect of Context on Metaphor Paraphrase Aptness Judgments" }, { "answers": [ "" ], "context": "Drug-drug interaction (DDI) is a situation in which one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effects if two or more medicines are taken and their DDIs have not been investigated in detail. DDI is a common cause of illness, even a cause of death BIBREF1 . Thus, DDI databases for clinical medication decisions have been proposed by some researchers. Databases such as SFINX BIBREF2 , KEGG BIBREF3 , and CredibleMeds BIBREF4 help physicians and pharmacists avoid most adverse drug reactions.", "id": 1510, "question": "How big is the evaluated dataset?", "title": "Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers" }, { "answers": [ "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to 0.7148 best state-of-the-art result." ], "context": "For the DDI extraction task, most work proposes NLP methods or machine learning approaches. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomena and a two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies a kernel method to the existing model and outperforms it.", "id": 1511, "question": "By how much does their model outperform existing methods?", "title": "Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers" }, { "answers": [ "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220." ], "context": "In this section, we present our bidirectional recurrent neural network with multiple attention layers. An overview of our architecture is shown in Figure FIGREF15 . For a given instance, which describes the details about two or more drugs, the model represents each word as a vector in the embedding layer. Then the bidirectional RNN layer generates a sentence matrix, in which each column vector is the semantic representation of the corresponding word.
The word-level attention layer transforms the sentence matrix into a vector representation. The sentence-level attention layer then generates the final representation for the instance by combining several relevant sentences, given that these sentences share the same drug pair. Finally, a softmax classifier assigns the drug pair in the given instance to a specific DDI category.", "id": 1512, "question": "What is the performance of their model?", "title": "Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers" }, { "answers": [ "" ], "context": "The DDI corpus contains thousands of XML files, each of which consists of several records. For a sentence containing $n$ drugs, there are $\\binom{n}{2}$ drug pairs. We replace the two drugs of interest with “drug1” and “drug2” while the other drugs are replaced by “drug0”, as BIBREF9 did. This step is called drug blinding. For example, the sentence in Figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug-blinded sentences are the instances that are fed to our model.", "id": 1513, "question": "What are the existing methods mentioned in the paper?", "title": "Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers" }, { "answers": [ "" ], "context": "Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted.
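A sketch of the drug-blinding step described above; whitespace tokenization and the toy sentence are simplifying assumptions:

```python
# Drug blinding: each pair of drug mentions yields one instance, with the
# pair of interest renamed drug1/drug2 and the remaining mentions drug0.
from itertools import combinations

def blind(tokens, drug_positions):
    """n drug mentions yield n*(n-1)/2 blinded instances."""
    for i, j in combinations(drug_positions, 2):
        out = list(tokens)
        for k in drug_positions:
            out[k] = "drug0"
        out[i], out[j] = "drug1", "drug2"
        yield " ".join(out)

sent = ("an increased risk of hepatitis has been reported to result "
        "from combined use of A and B and C").split()
for instance in blind(sent, drug_positions=[14, 16, 18]):
    print(instance)  # 3 instances, matching the example in the text
```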
It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.", "id": 1514, "question": "Does having constrained neural units imply word meanings are fixed across different contexts?", "title": "Compositional Neural Machine Translation by Removing the Lexicon from Syntax" }, { "answers": [ "" ], "context": "BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.", "id": 1515, "question": "Do they perform a quantitative analysis of their model displaying knowledge distortions?", "title": "Compositional Neural Machine Translation by Removing the Lexicon from Syntax" }, { "answers": [ "Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information." ], "context": "A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement.", "id": 1516, "question": "How do they damage different neural modules?", "title": "Compositional Neural Machine Translation by Removing the Lexicon from Syntax" }, { "answers": [ "" ], "context": "In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.", "id": 1517, "question": "Which weights from their model do they analyze?", "title": "Compositional Neural Machine Translation by Removing the Lexicon from Syntax" }, { "answers": [ "" ], "context": "With the penetration of the Internet among the masses, the amount of content posted on social media channels has surged. Specifically, in the Indian subcontinent, the number of Internet users has crossed 500 million and is rising rapidly due to inexpensive data. With this rise comes the problem of hate speech and of offensive and abusive posts on social media.
Although there are many previous works which deal with Hindi and English hate speech (the top two languages in India), there are very few on the code-switched version (Hinglish) of the two BIBREF0 . This is partially due to the following reasons: (i) Hinglish has no fixed grammar and vocabulary. It derives a part of its semantics from Devnagari and another part from the Roman script. (ii) Hinglish speech and written text consist of a concoction of words spoken in Hindi as well as English, but written in the Roman script. This makes the spellings variable and dependent on the writer of the text. Hence code-switched languages present tough challenges in terms of parsing and getting the meaning out of the text. For instance, the sentence, “Modiji foreign yatra par hai”, is in the Hinglish language. A somewhat correct translation of this would be, “Mr. Modi is on a foreign tour”. However, even this translation has some flaws because no direct translation is available for the word ji, which is used to show respect. A verbatim translation would lead to “Mr. Modi foreign tour on is”. Moreover, the word yatra here can have phonetic variations, which result in multiple spellings of the word, such as yatra, yaatra, yaatraa, etc. Also, the problem of hate speech has been rising in India, and according to the policies of the government and the various social networks, one is not allowed to misuse one's right to speech to abuse another community or religion. Due to the various difficulties associated with the Hinglish language, it is challenging to automatically detect and ban such speech.", "id": 1518, "question": "Do all the instances contain code-switching?", "title": "Mind Your Language: Abuse and Offense Detection for Code-Switched Languages" }, { "answers": [ "" ], "context": "Our methodology primarily consists of these steps: pre-processing of the dataset, training of word embeddings, training of the classifier model, and then applying it to the HEOT dataset.", "id": 1519, "question": "What embeddings do they use?", "title": "Mind Your Language: Abuse and Offense Detection for Code-Switched Languages" }, { "answers": [ "" ], "context": "In this work, we use the datasets released by BIBREF1 and the HEOT dataset provided by BIBREF0 . The datasets obtained pass through these processing steps: (i) removal of punctuation, stopwords, URLs, numbers, emoticons, etc. This was then followed by transliteration using the Xlit-Crowd conversion dictionary and translation of each word to English using a Hindi-to-English dictionary. To deal with the spelling variations, we manually added some common variations of popular Hinglish words. The final dictionary comprised 7,200 word pairs. Additionally, to deal with profane words, which are not present in Xlit-Crowd, we had to build a profanity dictionary (with 209 profane words) as well. Table TABREF3 gives some examples from the dictionary.", "id": 1520, "question": "Do they perform some annotation?", "title": "Mind Your Language: Abuse and Offense Detection for Code-Switched Languages" }, { "answers": [ "" ], "context": "We tried GloVe BIBREF2 and Twitter word2vec BIBREF3 code for training embeddings for the processed tweets. The embeddings were trained on both the datasets provided by BIBREF1 and HEOT. These embeddings help to learn distributed representations of tweets.
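A sketch of the preprocessing pipeline described above (cleaning, transliteration via an Xlit-Crowd-style dictionary, then word-by-word translation). The dictionary entries and the cleaning regexes are illustrative assumptions:

```python
import re

# Tiny illustrative dictionaries; the real ones hold 7,200 transliteration
# pairs plus a 209-word profanity list.
XLIT = {"yatra": "यात्रा", "yaatra": "यात्रा", "yaatraa": "यात्रा"}
HI_EN = {"यात्रा": "tour"}

def preprocess(tweet, stopwords=frozenset()):
    """Clean a Hinglish tweet, then transliterate and translate per word."""
    tweet = re.sub(r"https?://\S+", " ", tweet.lower())  # strip URLs
    tweet = re.sub(r"[^a-z\s]", " ", tweet)              # punctuation, numbers, emoticons
    tokens = [t for t in tweet.split() if t not in stopwords]
    translated = []
    for t in tokens:
        devanagari = XLIT.get(t, t)                      # Hinglish -> Hindi if known
        translated.append(HI_EN.get(devanagari, devanagari))  # Hindi -> English if known
    return translated

print(preprocess("Modiji foreign yatra par hai http://t.co/x"))
```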
After experimentation, we fixed the embedding size at 100.", "id": 1521, "question": "Do they use dropout?", "title": "Mind Your Language: Abuse and Offense Detection for Code-Switched Languages" }, { "answers": [ "" ], "context": "Both the HEOT and BIBREF1 datasets contain tweets which are annotated with three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table TABREF4 . We use an LSTM-based classifier model to classify these tweets into these three categories. An overview of the model is given in Figure FIGREF12 . The model consists of one LSTM layer followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical cross-entropy loss was used for the last layer due to the presence of multiple classes. We use the Adam optimizer along with L2 regularisation to prevent overfitting. As indicated in Figure FIGREF12 , the model was initially trained on the dataset provided by BIBREF1 , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were experimentally selected by trying out a large number of combinations through grid search.", "id": 1522, "question": "What definition of hate speech do they use?", "title": "Mind Your Language: Abuse and Offense Detection for Code-Switched Languages" }, { "answers": [ "" ], "context": "Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to the input text: the sentence, paragraph, and document levels. This study focuses on the sentence and paragraph levels.", "id": 1523, "question": "What are the other models they compare to?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "1" ], "context": "Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.", "id": 1524, "question": "What is the agreement value for each dataset?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "" ], "context": "Lexicon-based methods are actually implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key to lexicon-based methods is lexical resource construction, which maps words into categories (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.", "id": 1525, "question": "How many annotators participated?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "The Travel dataset contains 4100 raw samples and 11291 clauses, the Hotel dataset contains 3825 raw samples and 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses" ], "context": "Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 .
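A sketch of the LSTM classifier described earlier in this section (one LSTM layer with dropout 0.2, three dense layers, categorical cross-entropy, Adam with L2 regularisation). Layer sizes are assumptions; the paper selected its hyperparameters by grid search:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Embedding(input_dim=10_000, output_dim=100),  # 100-d embeddings
    layers.LSTM(64, dropout=0.2),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(3, activation="softmax"),  # offensive / abusive / benign
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Fine-tuning on HEOT after training on the BIBREF1 data amounts to simply calling `model.fit` a second time on the new dataset, which reuses the learned weights.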
Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural networks (RNNs) BIBREF18 , BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNNs in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A bi-directional LSTM BIBREF22 with an attention mechanism has been demonstrated to be effective in sentiment analysis.", "id": 1526, "question": "How long are the datasets?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "User reviews written in Chinese collected online for hotel, mobile phone, and travel domains" ], "context": "This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.", "id": 1527, "question": "What are the sources of the data?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "They use a two-stage labeling strategy where in the first stage single annotators label a large number of short texts with relatively pure sentiment orientations, and in the second stage multiple annotators label a few text samples with mixed sentiment orientations" ], "context": "As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts' sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.", "id": 1528, "question": "What is the new labeling strategy?", "title": "$\\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis" }, { "answers": [ "" ], "context": "Unsupervised pre-training has sparked intense research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU.", "id": 1529, "question": "Which future directions in NLG are discussed?", "title": "Unsupervised Pre-training for Natural Language Generation: A Literature Review" }, { "answers": [ "" ], "context": "Learning fine-grained language representations is a perennial topic in natural language understanding. In retrospect, compelling evidence suggests that good representations can be learned through unsupervised pre-training.", "id": 1530, "question": "What experimental phenomena are presented?", "title": "Unsupervised Pre-training for Natural Language Generation: A Literature Review" }, { "answers": [ "" ], "context": "NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations.
A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation.", "id": 1531, "question": "How do strategy-based methods handle obstacles in NLG?", "title": "Unsupervised Pre-training for Natural Language Generation: A Literature Review" }, { "answers": [ "" ], "context": "Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches to this end can be categorized into three variants: denoising autoencoders (DAEs), conditional masked language models (CMLMs), and sequence-to-sequence language models (Seq2Seq LMs).", "id": 1532, "question": "How do architecture-based methods handle obstacles in NLG?", "title": "Unsupervised Pre-training for Natural Language Generation: A Literature Review" }, { "answers": [ "The changes are evaluated based on the accuracy of intent and entity recognition on the SNIPS dataset" ], "context": "There is no shortage of services that are marketed as natural language understanding (nlu) solutions for use in chatbots, digital personal assistants, or spoken dialogue systems (sds). Recently, Braun2017 systematically evaluated several such services, including Microsoft LUIS, IBM Watson Conversation, API.ai, wit.ai, Amazon Lex, and RASA BIBREF0 . More recently, Liu2019b evaluated LUIS, Watson, RASA, and DialogFlow using some established benchmarks. Some nlu services work better than others in certain tasks and domains, with a perhaps surprising pattern: RASA, the only fully open-source nlu service among those evaluated, consistently performs on par with the commercial services.", "id": 1533, "question": "How are their changes evaluated?", "title": "Incrementalizing RASA's Open-Source Natural Language Understanding Pipeline" }, { "answers": [ "" ], "context": "Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .", "id": 1534, "question": "What baseline is used for the verb classification experiments?", "title": "Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation" }, { "answers": [ "" ], "context": "Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let INLINEFORM0 be the vocabulary consisting of the source language and target language vocabularies INLINEFORM1 and INLINEFORM2 , respectively. Let INLINEFORM3 be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)).
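A schematic sketch of injecting Attract constraints in mini-batches, as the surrounding description outlines. The excerpt does not reproduce the Paragram cost function, so the simple pull-together objective below is an illustrative stand-in only, not the paper's actual formulation:

```python
import numpy as np

def attract_step(vectors, attract_pairs, lr=0.01, batch_size=50):
    """Pull each Attract pair's word vectors closer, in mini-batches.
    Stand-in objective: a gradient step on ||v1 - v2||^2 per pair."""
    for start in range(0, len(attract_pairs), batch_size):
        for w1, w2 in attract_pairs[start:start + batch_size]:
            diff = vectors[w1] - vectors[w2]
            vectors[w1] = vectors[w1] - lr * diff
            vectors[w2] = vectors[w2] + lr * diff

vectors = {w: np.random.rand(300) for w in
           ["en_transport", "en_transfer", "en_peace", "fi_rauha"]}
attract_step(vectors, [("en_transport", "en_transfer"),   # VerbNet class pair
                       ("en_peace", "fi_rauha")])          # cross-lingual synonyms
```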
Given the initial distributional space and the collection of such Attract pairs $A$ , the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches of size $k$ . The method's cost function can be expressed as:", "id": 1535, "question": "What clustering algorithm is used on top of the VerbNet-specialized representations?", "title": "Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation" }, { "answers": [ "" ], "context": "Given the initial distributional or specialised collection of target language vectors, we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters $K$ using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.", "id": 1536, "question": "How many words are translated between the cross-lingual translation pairs?", "title": "Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation" }, { "answers": [ "Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI)." ], "context": "Cross-Lingual Transfer Model F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.", "id": 1537, "question": "What are the six target languages?", "title": "Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation" }, { "answers": [ "" ], "context": "Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups.
Another type of harmful content on the Internet consists of the so-called click-baits, which are distinguished by their sensational, exaggerated, or deliberately false headlines that grab attention and deceive the user into clicking an article with questionable content.", "id": 1538, "question": "what classifiers were used in this paper?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo:2011:ICT:1963405.1963500 studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini:2011 analysed the interaction of content and social network structure, and Morris:2012:TBU:2145204.2145274 studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.", "id": 1539, "question": "what are their evaluation metrics?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news comes from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available on GitLab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.", "id": 1540, "question": "what types of features were used?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.", "id": 1541, "question": "what lexical features did they experiment with?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.", "id": 1542, "question": "what is the size of the dataset?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!"
}, { "answers": [ "" ], "context": "Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors that create genuine content. This could be either deliberately, e.g., if the author wants to adapt the text to a specific target group or wants to provoke some particular emotional reaction in the reader, or unintentionally, e.g., because the authors of fake news have different writing style and personality compared to journalists in mainstream media. Disregarding the actual reason, we use features from author profiling and style detection BIBREF28 .", "id": 1543, "question": "what datasets were used?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We describe our features in more detail below.", "id": 1544, "question": "what are the three reasons everybody hates them?", "title": "We Built a Fake News&Click-bait Filter: What Happened Next Will Blow Your Mind!" }, { "answers": [ "" ], "context": "The wide use and success of monolingual word embeddings in NLP tasks BIBREF0 , BIBREF1 has inspired further research focus on the induction of cross-lingual word embeddings (CLWEs). CLWE methods learn a shared cross-lingual word vector space where words with similar meanings obtain similar vectors regardless of their actual language. CLWEs benefit cross-lingual NLP, enabling multilingual modeling of meaning and supporting cross-lingual transfer for downstream tasks and resource-lean languages. CLWEs provide invaluable cross-lingual knowledge for, inter alia, bilingual lexicon induction BIBREF2 , BIBREF3 , information retrieval BIBREF4 , BIBREF5 , machine translation BIBREF6 , BIBREF7 , document classification BIBREF8 , cross-lingual plagiarism detection BIBREF9 , domain adaptation BIBREF10 , cross-lingual POS tagging BIBREF11 , BIBREF12 , and cross-lingual dependency parsing BIBREF13 , BIBREF14 .", "id": 1545, "question": "How are seed dictionaries obtained by fully unsupervised methods?", "title": "Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?" }, { "answers": [ "" ], "context": "We now dissect a general framework for unsupervised CLWE learning, and show that the “bag of tricks of the trade” used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison.", "id": 1546, "question": "How does BLI measure alignment quality?", "title": "Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?" 
}, { "answers": [ "" ], "context": "In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces $\\mathbf {X}$ and $\\mathbf {Z}$ , using a word translation dictionary $D_0$ to guide the alignment process. Let $\\mathbf {X}_D \\subset \\mathbf {X}$ and $\\mathbf {Z}_D \\subset \\mathbf {Z}$ be the row-aligned subsets of monolingual spaces containing vectors of aligned words from $D_0$ . Alignment matrices $\\mathbf {X}_D$ and $\\mathbf {Z}_D$ are then used to learn orthogonal transformations $\\mathbf {W}_x$ and $\\mathbf {W}_z$ that define the joint bilingual space $\\mathbf {Z}$0 . While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary $\\mathbf {Z}$1 , their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2).", "id": 1547, "question": "What methods were used for unsupervised CLWE?", "title": "Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?" }, { "answers": [ "440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples." ], "context": " This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "id": 1548, "question": "What is the size of the released dataset?", "title": "Open Information Extraction on Scientific Text: An Evaluation" }, { "answers": [ "" ], "context": "OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note, that throughout this paper we use the word `triple” to refer interchangeably to binary relations.) Existing evaluation approaches for OIE systems have primarily taken a ground truth-based approach. Human annotators analyze sentences and determine correct relations to be extracted. Systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.", "id": 1549, "question": "Were the OpenIE systems more accurate on some scientific disciplines than others?", "title": "Open Information Extraction on Scientific Text: An Evaluation" }, { "answers": [ "" ], "context": "We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.", "id": 1550, "question": "What is the most common error type?", "title": "Open Information Extraction on Scientific Text: An Evaluation" }, { "answers": [ "OpenIE4 and MiniIE" ], "context": "We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. 
This choice allows for a rough comparison between our results and theirs.", "id": 1551, "question": "Which OpenIE systems were used?", "title": "Open Information Extraction on Scientific Text: An Evaluation" }, { "answers": [ "" ], "context": "We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is, there were some sentences in which no triples were extracted. We discuss these sentences later. In total, 2247 triples were extracted.", "id": 1552, "question": "What is the role of crowd-sourcing?", "title": "Open Information Extraction on Scientific Text: An Evaluation" }, { "answers": [ "" ], "context": "Keywords are terms (i.e. expressions) that best describe the subject of a document BIBREF0 . A good keyword effectively summarizes the content of the document and allows it to be efficiently retrieved when needed. Traditionally, keyword assignment was a manual task, but with the emergence of large amounts of textual data, automatic keyword extraction methods have become indispensable. Despite a considerable effort from the research community, state-of-the-art keyword extraction algorithms leave much to be desired and their performance is still lower than on many other core NLP tasks BIBREF1 . The first keyword extraction methods mostly followed a supervised approach BIBREF2 , BIBREF3 , BIBREF4 : they first extract keyword features and then train a classifier on a gold standard dataset. For example, KEA BIBREF4 , a state-of-the-art supervised keyword extraction algorithm, is based on the Naive Bayes machine learning algorithm. While these methods offer quite good performance, they rely on an annotated gold standard dataset and require a (relatively) long training process. In contrast, unsupervised approaches need no training and can be applied directly without relying on a gold standard document collection. They can be further divided into statistical and graph-based methods. The former, such as YAKE BIBREF5 , BIBREF6 , KP-MINER BIBREF7 and RAKE BIBREF8 , use statistical characteristics of the texts to capture keywords, while the latter, such as Topic Rank BIBREF9 , TextRank BIBREF10 , Topical PageRank BIBREF11 and Single Rank BIBREF12 , build graphs to rank words based on their position in the graph. Among statistical approaches, the state-of-the-art keyword extraction algorithm is YAKE BIBREF5 , BIBREF6 , which is also one of the best performing keyword extraction algorithms overall; it defines a set of five features capturing keyword characteristics which are heuristically combined to assign a single score to every keyword. On the other hand, among graph-based approaches, Topic Rank BIBREF9 can be considered state-of-the-art; candidate keywords are clustered into topics and used as vertices in the final graph, used for keyword extraction. Next, a graph-based ranking model is applied to assign a significance score to each topic and keywords are generated by selecting a candidate from each of the top-ranked topics.
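To make the graph-based family concrete, here is a minimal generic sketch (not TextRank or Topic Rank specifically; the whitespace tokenizer, window size, and top_k are simplifying assumptions) that builds a word co-occurrence graph and ranks words with PageRank:

import networkx as nx

def keyword_scores(text, window=2, top_k=5):
    # Build a co-occurrence graph: words within `window` tokens share an edge.
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    graph = nx.Graph()
    for i, token in enumerate(tokens):
        for other in tokens[i + 1 : i + window + 1]:
            if other != token:
                graph.add_edge(token, other)
    # Rank vertices by PageRank; high-centrality words are keyword candidates.
    ranks = nx.pagerank(graph)
    return sorted(ranks, key=ranks.get, reverse=True)[:top_k]

print(keyword_scores("graph based keyword extraction ranks words in a word graph"))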
Network-based methodology has also been successfully applied to the task of topic extraction BIBREF13 .", "id": 1553, "question": "How are meta vertices computed?", "title": "RaKUn: Rank-based Keyword extraction via Unsupervised learning and Meta vertex aggregation" }, { "answers": [ "" ], "context": "We first discuss how the texts are transformed to graphs, on which RaKUn operates. Next, we formally state the problem of keyword extraction and discuss its relation to graph centrality metrics.", "id": 1554, "question": "How are graphs derived from a given text?", "title": "RaKUn: Rank-based Keyword extraction via Unsupervised learning and Meta vertex aggregation" }, { "answers": [ "" ], "context": "In this work we consider directed graphs. Let $G = (V, E)$ represent a graph comprised of a set of vertices $V$ and a set of edges $E \subseteq V \times V$ , which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let $D = (t_1, t_2, \dots , t_n)$ represent a document comprised of tokens $t_i$ . The order in which tokens in text appear is known, thus $D$ is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences. When two words co-occur, they are used as an edge. However, such approaches do not take into account the sequence nature of the words, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and for each element $t_i$ , its successor $t_{i+1}$ , together with the given element, forms a directed edge $(t_i, t_{i+1}) \in E$ . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis as proposed next.", "id": 1555, "question": "In what sense is the proposed method interpretable?", "title": "RaKUn: Rank-based Keyword extraction via Unsupervised learning and Meta vertex aggregation" }, { "answers": [ "They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs." ], "context": "Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .", "id": 1556, "question": "how are the bidirectional lms obtained?", "title": "Semi-supervised sequence tagging with bidirectional language models" }, { "answers": [ "micro-averaged F1" ], "context": "The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 .
After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).", "id": 1557, "question": "what metrics are used in evaluation?", "title": "Semi-supervised sequence tagging with bidirectional language models" }, { "answers": [ "91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task" ], "context": "Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).", "id": 1558, "question": "what results do they achieve?", "title": "Semi-supervised sequence tagging with bidirectional language models" }, { "answers": [ "Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016) " ], "context": "A language model computes the probability of a token sequence $(t_1, t_2, \ldots , t_N)$ : $p(t_1, t_2, \ldots , t_N) = \prod _{k=1}^{N} p(t_k \mid t_1, \ldots , t_{k-1})$ ", "id": 1559, "question": "what previous systems were compared to?", "title": "Semi-supervised sequence tagging with bidirectional language models" }, { "answers": [ "" ], "context": "Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings $h^{LM}_k$ with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace the first-layer output $h_{k,1}$ in ( EQREF6 ) with the concatenation $[h_{k,1}; h^{LM}_k]$ ", "id": 1560, "question": "what are the evaluation datasets?", "title": "Semi-supervised sequence tagging with bidirectional language models" }, { "answers": [ "" ], "context": "The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information.
On the other hand, text-based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text-based representations of biomolecules an area of intense research interest.", "id": 1561, "question": "Are datasets publicly available?", "title": "Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery" }, { "answers": [ "Both supervised and unsupervised, depending on the task that needs to be solved." ], "context": "BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.", "id": 1562, "question": "Are these models usually semi-supervised, supervised, or unsupervised?", "title": "Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery" }, { "answers": [ "" ], "context": "The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic” approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsch BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).", "id": 1563, "question": "Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery?", "title": "Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery" }, { "answers": [ "" ], "context": "Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, United States etc. while the edges represent relationships between them.", "id": 1564, "question": "Do the authors analyze what kinds of cases their new embeddings fail in where the original, less-interpretable embeddings didn't?", "title": "Inducing Interpretability in Knowledge Graph Embeddings" }, { "answers": [ "Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method." ], "context": "Several methods have been proposed for learning KG embeddings.
They differ on the modeling of entities and relations, usage of text data and interpretability of the learned embeddings. We summarize some of these methods in the following sections.", "id": 1565, "question": "When they say \"comparable performance\", how much of a performance drop do these new embeddings result in?", "title": "Inducing Interpretability in Knowledge Graph Embeddings" }, { "answers": [ "" ], "context": "A very effective and powerful set of models are based on translation vectors. These models represent entities as vectors in the $d$ -dimensional space $\mathbb {R}^d$ and relations as translation vectors from head entity to tail entity, in either the same or a projected space. TransE BIBREF2 is one of the initial works, which was later improved by many works [ BIBREF3 , BIBREF4 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ]. Also, there are methods which are able to incorporate text data while learning KG embeddings. BIBREF0 is one such method, which assumes a combined universal schema of relations from KG as well as text. BIBREF1 further improves the performance by sharing parameters among similar textual relations.", "id": 1566, "question": "How do they evaluate interpretability?", "title": "Inducing Interpretability in Knowledge Graph Embeddings" }, { "answers": [ "GloVE; SGNS" ], "context": "Commonsense reasoning is fundamental for natural language agents to generalize inference beyond their training corpora. Although the natural language inference (NLI) task BIBREF0 , BIBREF1 has proved to be a good pre-training objective for sentence representations BIBREF2 , commonsense coverage is limited and most models are still end-to-end, relying heavily on word representations to provide background world knowledge.", "id": 1567, "question": "What types of word representations are they evaluating?", "title": "CA-EHN: Commonsense Word Analogy from E-HowNet" }, { "answers": [ "GRU" ], "context": "Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes (e.g. smalltalk BIBREF2 , BIBREF3 ) or a specific task such as finding restaurants or booking flights BIBREF4 , BIBREF5 . Here, we focus on task-oriented dialog systems, which assist users in reaching a certain goal.", "id": 1568, "question": "What type of recurrent layers does the model use?", "title": "Encoding Word Confusion Networks with Recurrent Neural Networks for Dialog State Tracking" }, { "answers": [ "It is a network used to encode speech lattices to maintain a rich hypothesis space." ], "context": "Our model depicted in Figure FIGREF3 is based on an incremental DST system BIBREF12 . It consists of an embedding layer for the words in the system and user utterances, followed by a fully connected layer composed of Rectified Linear Units (ReLUs) BIBREF17 , which yields the input to a recurrent layer to encode the system and user outputs in each turn with a softmax classifier on top.
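In outline, the architecture just described might be sketched as follows (a minimal PyTorch-style toy under stated assumptions; the GRU choice and dimensions are placeholders, not the paper's exact configuration):

import torch
import torch.nn as nn

class TurnEncoder(nn.Module):
    # Embedding -> ReLU fully connected layer -> recurrent layer -> softmax scores.
    def __init__(self, vocab_size, n_labels, emb=100, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.fc = nn.Sequential(nn.Linear(emb, hid), nn.ReLU())
        self.rnn = nn.GRU(hid, hid, batch_first=True)
        self.clf = nn.Linear(hid, n_labels)

    def forward(self, token_ids):            # (batch, seq_len)
        h = self.fc(self.emb(token_ids))     # (batch, seq_len, hid)
        _, last = self.rnn(h)                # last hidden state per sequence
        return torch.softmax(self.clf(last[-1]), dim=-1)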
$m$ denotes a weighted sum of the system dialog act $a$ and the user utterance $u$ , where $W_a$ and $W_u$ are learned parameters: $m = W_a a + W_u u$ ", "id": 1569, "question": "What is a word confusion network?", "title": "Encoding Word Confusion Networks with Recurrent Neural Networks for Dialog State Tracking" }, { "answers": [ "" ], "context": "Since the early 1970s, healthcare information technology has moved toward comprehensive electronic medical records (EMR) in which almost every aspect of the patient's healthcare has been digitized and retained indefinitely BIBREF0, which has vastly improved the efficiency with which patient information can be retained, communicated, and analyzed. At the same time, the healthcare industry has moved from a fee-for-service model to a value-based model, facilitated in part by the existence of such a record and in part by public policy, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 BIBREF1, which provided financial incentives for the \"meaningful use\" of electronic medical records.", "id": 1570, "question": "What type of simulations of real-time data feeds are used for validation?", "title": "Semantic Enrichment of Streaming Healthcare Data" }, { "answers": [ "" ], "context": "HL7v2 is a healthcare messaging standard developed by the standards organization Health Level Seven International. It first emerged in 1988 and today is the most widely used such standard, having been adopted by over ninety-five percent of health systems in the United States and thirty-five countries worldwide BIBREF4. As such, it is something of a universal medium in the field of healthcare interoperability, yet it is terse and, without specialized training and access to the standard reference, cryptic.", "id": 1571, "question": "How are FHIR and RDF combined?", "title": "Semantic Enrichment of Streaming Healthcare Data" }, { "answers": [ "" ], "context": "FHIR BIBREF5 is a new open standard for healthcare data developed by the same company that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and a system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve.", "id": 1572, "question": "What are the differences between FHIR and RDF?", "title": "Semantic Enrichment of Streaming Healthcare Data" }, { "answers": [ "" ], "context": "The term Semantic Web BIBREF8 denotes an interconnected machine-readable network of information. In some ways it is analogous to the World-Wide Web, but with some crucial differences. The most important similarity is in the vision for the two technologies: Like the World-Wide Web, the Semantic Web was envisioned as a way for users from different institutions, countries, disciplines, etc. to exchange information openly and in doing so to add to the sum of human knowledge.
The difference, however, is in the different emphases put on human readability versus machine readability: Whereas the World-Wide Web was intended to be visually rendered by one of any number of web browsers before being read by humans and therefore prioritizes fault tolerance and general compatibility over precision, the semantic web prioritizes precision and logical rigor in order for the information contained in it to be machine readable and used for logical inference.", "id": 1573, "question": "What do FHIR and RDF stand for?", "title": "Semantic Enrichment of Streaming Healthcare Data" }, { "answers": [ "" ], "context": "Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.", "id": 1574, "question": "What is the motivation behind the work? Why question generation is an important task?", "title": "Question Generation by Transformers" }, { "answers": [ "" ], "context": "A relatively simple method for question generation is the fill-in-the-blank approach, which is also known as cloze tasks. Such a method typically involves the sentence first being tokenized and tagged for part-of-speech with the named entity or noun part of the sentence masked out. These generated questions are an exact match to the one in the reading passage except for the missing word or phrase. Although fill-in-the-blank questions are often used for reading comprehension, answering such questions correctly may not necessarily indicate comprehension if it is too easy to match the question to the relevant sentence in the passage. To improve fill in the blank type questions, a prior study used a supervised machine learning model to generate fill-in-the-blank type questions. The model paraphrases the sentence from the passage with the missing word by anonymizing entity markers BIBREF0.", "id": 1575, "question": "Why did they choose WER as evaluation metric?", "title": "Question Generation by Transformers" }, { "answers": [ "For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy" ], "context": "Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. 
These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.", "id": 1576, "question": "What evaluation metrics were used in the experiment?", "title": "TutorialVQA: Question Answering Dataset for Tutorial Videos" }, { "answers": [ "tutorial videos for a photo-editing software" ], "context": "Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Such datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8 are factoid-based, meaning the answers typically involve a single entity. Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step or even fragmented into multiple segments throughout the video.", "id": 1577, "question": "What kind of instructional videos are in the dataset?", "title": "TutorialVQA: Question Answering Dataset for Tutorial Videos" }, { "answers": [ "a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm" ], "context": "In this section, we introduce the TutorialVQA dataset and describe the data collection process.", "id": 1578, "question": "What baseline algorithms were presented?", "title": "TutorialVQA: Question Answering Dataset for Tutorial Videos" }, { "answers": [ "" ], "context": "Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each of which is a complete unit as an answer span). Each sentence is associated with the starting and ending time-stamps, which can be used to access the relevant visual information.", "id": 1579, "question": "What is the source of the triples?", "title": "TutorialVQA: Question Answering Dataset for Tutorial Videos" }, { "answers": [ "" ], "context": "Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: \"The dessert at this restaurant is delicious but the service is poor,\" the fully designed model for ABSA needs to extract the aspects \"dessert\" and \"service\" and correctly reason about their polarity.
In this review, the consumers' opinions on \"dessert\" and \"service\" are not consistent, with positive and negative sentiment polarity respectively.", "id": 1580, "question": "How much better is the performance of the proposed model compared to the state of the art in these various experiments?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and focus on one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.", "id": 1581, "question": "What was state of the art on SemEval-2014 task4 Restaurant and Laptop dataset?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine-learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.", "id": 1582, "question": "What was previous state-of-the-art on four Chinese reviews datasets?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has comprehensively turned to deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques.", "id": 1583, "question": "In what four Chinese review datasets does LCF-ATEPC achieves state of the art?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving aside the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.", "id": 1584, "question": "Why authors think that researches do not pay attention to the research of the Chinese-oriented ABSA task?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and we prepare the input based on IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, and the labels indicate the beginning, inside and outside of the aspect terms, respectively.
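As a toy sketch of this labeling scheme (a hypothetical helper, not part of the proposed model; token-index spans are assumed non-overlapping), aspect spans can be mapped to IOB tags as follows; the worked example that follows shows the same output:

def iob_tags(tokens, aspect_spans):
    # Map each token position to B_asp / I_asp / O, given aspect term
    # spans as (start, end) token indices.
    tags = ["O"] * len(tokens)
    for start, end in aspect_spans:
        tags[start] = "B_asp"
        for i in range(start + 1, end):
            tags[i] = "I_asp"
    return tags

tokens = "The price is reasonable although the service is poor .".split()
print(iob_tags(tokens, [(1, 2), (6, 7)]))
# ['O', 'B_asp', 'O', 'O', 'O', 'O', 'B_asp', 'O', 'O', 'O']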
For the ATE task, the input of the example review “The price is reasonable although the service is poor.” will be prepared as $S=\lbrace w_1, w_2, \cdots , w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example will be labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $.", "id": 1585, "question": "What is specific to the Chinese-oriented ABSA task, and how is it different from other languages?", "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction" }, { "answers": [ "" ], "context": "Traditionally, a word is represented as a sparse vector indicating the word itself (one-hot vector) or the context of the word (distributional vector). However, both the one-hot notation and distributional notation suffer from data sparseness since dimensions of the word vector do not interact with each other. Distributed word representation addresses the data sparseness problem by constructing a dense vector of a fixed length, wherein contexts are shared (or distributed) across dimensions. Distributed word representation is known to improve the performance of many NLP applications such as machine translation BIBREF0 and sentiment analysis BIBREF1 to name a few. The task of learning a distributed representation is called representation learning.", "id": 1586, "question": "what is the size of this dataset?", "title": "Construction of a Japanese Word Similarity Dataset" }, { "answers": [ "" ], "context": "In general, distributed word representations are evaluated using a word similarity task. For instance, WordSim353 2002:PSC:503104.503110, MC BIBREF2 , RG BIBREF3 , and SCWS Huang:2012:IWR:2390524.2390645 have been used to evaluate word similarities in English. Moreover, baker-reichart-korhonen:2014:EMNLP2014 built a verb similarity dataset (VSD) based on WordSim353 because there was no dataset of verbs in the word-similarity task. Recently, SimVerb-3500 was introduced to evaluate human understanding of verb meaning Gerz:2016:EMNLP. It provides human ratings for the similarity of 3,500 verb pairs so that it enables robust evaluation of distributed representation for verbs. However, most of these datasets include English words only. There has been no Japanese dataset for the word-similarity task.", "id": 1587, "question": "did they use a crowdsourcing platform for annotations?", "title": "Construction of a Japanese Word Similarity Dataset" }, { "answers": [ "" ], "context": "What makes a pair of words similar? Most of the previous datasets do not concretely define the similarity of word pairs. The difference in the similarity of word pairs originates from each annotator's mind, resulting in different rating scales for a word. Thus, we propose to use an example-based approach (Table TABREF9 ) to control the variance of the similarity ratings. We removed the context of a word when we extracted it. Consequently, we consider that an ambiguous word has a high variance of similarity, but we can get a low variance of similarity when the word is monosemous.", "id": 1588, "question": "where does the data come from?", "title": "Construction of a Japanese Word Similarity Dataset" }, { "answers": [ "" ], "context": "Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define state-of-the-art models.
In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the n-grams in the candidate summary that match n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have serious limitations and have already been criticized by the academic community. In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should have and propose concrete ways to use recent advances in NLP to design a data-driven metric addressing the weaknesses found in BLEU and ROUGE.", "id": 1589, "question": "What are the criteria for a good metric?", "title": "Towards Neural Language Evaluators" }, { "answers": [ "" ], "context": "BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depends on many factors including their simplicity and the intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement BIBREF0. This has been criticised extensively in the literature and the shortcomings of these methods have been widely studied. Reiter BIBREF2 , in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison-Burch et al BIBREF3 examine BLEU in the context of machine translation and find that BLEU correlates with neither human judgment on adequacy (whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor fluency (the quality of language in a sentence). Sulem et al BIBREF4 examine BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity and report that BLEU has very low or in some cases negative correlation with human judgment.", "id": 1590, "question": "What are the three limitations?", "title": "Towards Neural Language Evaluators" }, { "answers": [ "" ], "context": "In a task-oriented dialogue system, the dialogue policy determines the next action to perform and the next utterance to say based on the current dialogue state. A dialogue state defined by frame-and-slot semantics is a set of (key, value) pairs specified by the domain ontology BIBREF0. A key is a (domain, slot) pair and a value is a slot value provided by the user. Figure FIGREF1 shows a dialogue and state in three domain contexts. Dialogue state tracking (DST) in multiple domains is a challenging problem. First of all, in production environments, the domain ontology is being continuously updated such that the model must generalize to new values, new slots, or even new domains during inference. Second, the number of slots and values in the training data is usually quite large. For example, the MultiWOZ $2.0/2.1$ datasets BIBREF1, BIBREF2 have 30 (domain, slot) pairs and more than $4,500$ values BIBREF3. As the model must understand slot and value paraphrases, it is infeasible to train each slot or value independently.
Third, multi-turn inferences are often required as shown in the underlined areas of Figure FIGREF1.", "id": 1591, "question": "What is current state-of-the-art model?", "title": "Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering" }, { "answers": [ "" ], "context": "Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1.", "id": 1592, "question": "Which language(s) are found in the WSD datasets?", "title": "Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations" }, { "answers": [ "" ], "context": "Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased.", "id": 1593, "question": "What datasets are used for testing?", "title": "Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations" }, { "answers": [ "Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48" ], "context": "When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.", "id": 1594, "question": "How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?", "title": "Natural- to formal-language generation using Tensor Product Representations" }, { "answers": [ "Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48" ], "context": "The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). 
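As a toy illustration of TPR binding and unbinding (a generic numpy sketch of the mechanism, not the TP-N2F implementation; the names, dimension, and orthonormal-role assumption are illustrative):

import numpy as np

rng = np.random.default_rng(0)
d = 8
# Random filler vectors and orthonormal role vectors (via QR decomposition).
fillers = {"john": rng.normal(size=d), "mary": rng.normal(size=d)}
roles_mat, _ = np.linalg.qr(rng.normal(size=(d, 2)))
roles = {"agent": roles_mat[:, 0], "patient": roles_mat[:, 1]}

# Bind each filler to its role with an outer product; superpose the bindings.
T = np.outer(fillers["john"], roles["agent"]) + np.outer(fillers["mary"], roles["patient"])

# Unbind: for orthonormal roles, the dual vector equals the role vector itself.
recovered = T @ roles["agent"]
assert np.allclose(recovered, fillers["john"])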
For now, we assume the fillers to be atomic symbols.", "id": 1595, "question": "How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?", "title": "Natural- to formal-language generation using Tensor Product Representations" }, { "answers": [ "Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48" ], "context": "We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. It depicts the following high-level description.", "id": 1596, "question": "What is the performance proposed model achieved on AlgoList benchmark?", "title": "Natural- to formal-language generation using Tensor Product Representations" }, { "answers": [ "Operation accuracy: 71.89\nExecution accuracy: 55.95" ], "context": "Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, the system generates a Greeting-type response to a former Greeting-type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are then selected and clustered to expand the knowledge base. DA recognition is challenging because the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from the Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic contexts.", "id": 1597, "question": "How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?", "title": "Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition" }, { "answers": [ "BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT" ], "context": "DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat each utterance independently without any context history BIBREF5, BIBREF15.
Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.", "id": 1598, "question": "What previous methods is the proposed method compared against?", "title": "Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition" }, { "answers": [ "" ], "context": "Self-attention BIBREF11 achieves great success for its efficient parallel computation and long-range dependency modeling.", "id": 1599, "question": "What is dialogue act recognition?", "title": "Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition" }, { "answers": [ "" ], "context": "Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. We are given the dataset $X = (D_1,D_2,...,D_L)$ with corresponding DA labels $(Y_1,Y_2,...,Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...,u_{N_l})$ with $ Y_l = (y_1,y_2,...,y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...,w_{M})$.", "id": 1600, "question": "Which natural language(s) are studied?", "title": "Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition" }, { "answers": [ "" ], "context": "Many text generation tasks, e.g., data-to-text, summarization and image captioning, can be naturally divided into two steps: content selection and surface realization. The generations are supposed to have two levels of diversity: (1) content-level diversity reflecting multiple possibilities of content selection (what to say) and (2) surface-level diversity reflecting the linguistic variations of verbalizing the selected contents (how to say) BIBREF0 , BIBREF1 . Recent neural network models handle these tasks with the encoder-decoder (Enc-Dec) framework BIBREF2 , BIBREF3 , which simultaneously performs selecting and verbalizing in a black-box way. Therefore, both levels of diversity are entangled within the generation. This entanglement, however, sacrifices the controllability and interpretability, making it difficult to specify the content to be conveyed in the generated text BIBREF4 , BIBREF5 .", "id": 1601, "question": "Does the performance necessarily drop when more control is desired?", "title": "Select and Attend: Towards Controllable Content Selection in Text Generation" }, { "answers": [ "" ], "context": "Let $(X, Y)$ denote a source-target pair. $X$ is a sequence of $x_1, x_2, \ldots , x_n$ and can be either some structured data or unstructured text/image depending on the task. $Y$ corresponds to $y_1, y_2, \ldots , y_m$ which is a text description of $X$ . The goal of text generation is to learn a distribution $p(Y|X)$ to automatically generate proper text.", "id": 1602, "question": "How does the model perform in comparison to end-to-end headline generation models?", "title": "Select and Attend: Towards Controllable Content Selection in Text Generation" }, { "answers": [ "" ], "context": "Our goal is to decouple the content selection from the decoder by introducing an extra content selector. We hope the content-level diversity can be fully captured by the content selector for a more interpretable and controllable generation process. Following BIBREF6 , BIBREF16 , we define content selection as a sequence labeling task. Let $m = (m_1, m_2, \ldots , m_n)$ denote a sequence of binary selection masks. $m_i = 1$ if $x_i$ is selected and 0 otherwise.
The INLINEFORM3 are assumed to be independent of each other and are sampled from a Bernoulli distribution INLINEFORM4 . INLINEFORM6 is the Bernoulli parameter, which we estimate using a two-layer feedforward network on top of the source encoder. Text is generated by first sampling INLINEFORM7 from INLINEFORM8 to decide which content to cover, and then decoding with the conditional distribution INLINEFORM9 . The text is expected to faithfully convey all selected contents and drop unselected ones. Fig. FIGREF8 depicts this generation process. Note that the selection is based on the token-level context-aware embeddings INLINEFORM10 and will maintain information from the surrounding contexts. It encourages the decoder to stay faithful to the original information instead of simply fabricating random sentences by connecting the selected tokens.", "id": 1603, "question": "How is the model trained to do only content selection?", "title": "Select and Attend: Towards Controllable Content Selection in Text Generation" }, { "answers": [ "The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data" ], "context": "Seeking information to assess whether some products or services suit one's needs is a vital activity for consumer decision making. In online businesses, one major hindrance is that customers have limited access to answers to their specific questions or concerns about products and user experiences. Given the ever-changing environment of products and services, it is very hard, if not impossible, to pre-compile an up-to-date knowledge base to answer user questions as in KB-QA BIBREF0, BIBREF1, BIBREF2, BIBREF3. As a compromise, community question-answering (CQA) BIBREF4 is leveraged to enable existing customers or sellers to answer customer questions. However, one obvious drawback of this approach is that many questions are not answered, and even if they are answered, the answers and follow-up questions are delayed, which is not suitable for interactive QA. Although existing studies have used information retrieval (IR) techniques BIBREF4, BIBREF5 to identify a whole review as an answer to a question, it is time-consuming to read a whole review, and the approach has difficulty answering questions in multiple turns.", "id": 1604, "question": "What is the baseline model used?", "title": "Review Conversational Reading Comprehension" }, { "answers": [ "" ], "context": "Reading and writing comments, method names and variable names is a crucial part of software engineering, and as such, programs contain a human language, the language of identifiers and comments, in addition to the source-code language (e.g., Java or Python). This has meant that non-English speakers are often second-class citizens when learning to program BIBREF0. In this paper we present a tool for translating a program from one human language to another to assist in code education, which could reduce the barrier to computer science education for non-English speakers.", "id": 1605, "question": "Is this auto translation tool based on neural networks?", "title": "Human Languages in Source Code: Auto-Translation for Localized Instruction" }, { "answers": [ "" ], "context": "To the best of our knowledge, automatic translation of code between human languages has not appeared in the literature, leading us to hypothesize that it is either difficult or has remained ignored.
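Returning to the content selector described above: the sketch below is a minimal illustration, not the authors' released code, of how a two-layer feedforward network on top of token-level encoder states can parameterize the Bernoulli selection masks. All module names and sizes are our assumptions.

```python
# Hypothetical content selector: token-level encoder states -> Bernoulli
# parameters -> sampled binary selection mask.
import torch
import torch.nn as nn

class ContentSelector(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Two-layer feedforward network estimating the Bernoulli parameter.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, encoder_states: torch.Tensor):
        # encoder_states: (batch, src_len, hidden), context-aware embeddings.
        beta = torch.sigmoid(self.scorer(encoder_states)).squeeze(-1)
        mask = torch.bernoulli(beta)  # (batch, src_len), 1.0 = token selected
        return beta, mask

selector = ContentSelector(hidden_size=256)
beta, mask = selector(torch.randn(2, 10, 256))
```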
Nonetheless, we summarize related work that motivates our contribution.", "id": 1606, "question": "What are results of public code repository study?", "title": "Human Languages in Source Code: Auto-Translation for Localized Instruction" }, { "answers": [ "" ], "context": "Virtual assistants help users accomplish tasks including, but not limited to, finding flights and booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri and Microsoft Cortana need to support a large and constantly increasing number of services, over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven, deep-learning-based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3.", "id": 1607, "question": "Where is the dataset from?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "" ], "context": "Dialogue systems have constituted an active area of research for the past few decades. The advent of commercial personal assistants has provided further impetus to dialogue systems research. As virtual assistants incorporate diverse domains, zero-shot modeling BIBREF4, BIBREF5, BIBREF6, as well as domain adaptation and transfer learning techniques BIBREF7, BIBREF8, BIBREF9, have been explored to support new domains in a data-efficient manner.", "id": 1608, "question": "What data augmentation techniques are used?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "" ], "context": "The primary task of this challenge is to develop multi-domain models for DST suitable for the scale and complexity of large-scale virtual assistants. Supporting a wide variety of APIs or services with possibly overlapping functionality is an important requirement of such assistants. A common approach to doing this involves defining a large master schema that lists all intents and slots supported by the assistant. Each service either adopts this master schema for the representation of the underlying data, or provides logic to translate between its own schema and the master schema.", "id": 1609, "question": "Do all teams use neural networks for their models?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "" ], "context": "Under the Schema-Guided approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF2 shows an example). The dialogue annotations are guided by the schema of the underlying service or API, as shown in Figure FIGREF3. In this example, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.", "id": 1610, "question": "How are the models evaluated?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "" ], "context": "As shown in Table TABREF9, our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale.
The especially large number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.", "id": 1611, "question": "What is the baseline model?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather" ], "context": "The dataset consists of conversations between a virtual assistant and a user. Each conversation can span multiple services across various domains. The dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slot values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.", "id": 1612, "question": "What domains are present in the data?", "title": "Schema-Guided Dialogue State Tracking Task at DSTC8" }, { "answers": [ "Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428" ], "context": "Sentiment analysis is a crucial task in the opinion mining field, where the goal is to extract opinions, emotions, or attitudes toward different entities (persons, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and those for other languages. Unsurprisingly, some researchers have tested the straightforward approach of first translating the messages to English and then using a high-performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1) instead of creating a sentiment classifier optimized for a given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2, BIBREF3, BIBREF4, BIBREF5).", "id": 1613, "question": "How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?", "title": "A Simple Approach to Multilingual Polarity Classification in Twitter" }, { "answers": [ "Arabic, German, Portuguese, Russian, Swedish" ], "context": "We propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems, due to its simplicity and availability as open-source software. As we mentioned, this baseline algorithm for multilingual Sentiment Analysis (B4MSA) was designed with the purpose of being multilingual and easy to implement.
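As an illustration of the turn-level annotation scheme described above (annotations grouped into per-service frames carrying an active intent, a dialogue state, and slot spans), here is a hypothetical sketch of one annotated user turn. The field names and values are our assumptions, not the dataset's exact file format.

```python
# One hypothetical annotated user turn under a schema-guided scheme.
user_turn = {
    "speaker": "USER",
    "utterance": "Find me a flight from Seattle to Boston with at most one stop.",
    "frames": [
        {
            "service": "Flights_1",
            "active_intent": "SearchFlight",
            "state": {
                "origin_city": "Seattle",
                "destination_city": "Boston",
                "number_stops": "1",
            },
            # (slot, start, end) character offsets into the utterance.
            "slot_spans": [
                ("origin_city", 22, 29),
                ("destination_city", 33, 39),
            ],
        }
    ],
}
```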
B4MSA is not a naïve baseline, as we prove experimentally by evaluating it in several international competitions.", "id": 1614, "question": "In which languages did the approach outperform the reported results?", "title": "A Simple Approach to Multilingual Polarity Classification in Twitter" }, { "answers": [ "" ], "context": "We defined cross-language features as a set of features that can be applied to most similar languages, not only those in related language families such as Germanic languages (English, German, etc.) and Romance languages (Spanish, Italian, etc.), but also languages sharing similar surface features such as punctuation, diacritics, symbol duplication, case sensitivity, etc. Later, the combination of these features is explored to find the best configuration for a given classifier.", "id": 1615, "question": "What eight languages are reported on?", "title": "A Simple Approach to Multilingual Polarity Classification in Twitter" }, { "answers": [ "" ], "context": "The following features are language dependent because they use specific information from the language concerned. Stopwords, stemming and negations are traditionally used in Sentiment Analysis. The users of this approach could add other features such as part of speech, affective lexicons, etc. to improve the performance BIBREF13.", "id": 1616, "question": "What are the components of the multilingual framework?", "title": "A Simple Approach to Multilingual Polarity Classification in Twitter" }, { "answers": [ "" ], "context": "Bayesian inference of phylogeny has a great impact on evolutionary biology. It is believed that all species are related through a history of common descent BIBREF0; that is to say, the reason we have such varied wildlife, including human beings, is evolution. We can depict the process of evolution, and address questions such as the phylogeny of life, with a phylogenetic tree (see Figure FIGREF1).", "id": 1617, "question": "Is the proposed method compared to previous methods?", "title": "Markov Chain Monte-Carlo Phylogenetic Inference Construction in Computational Historical Linguistics" }, { "answers": [ "" ], "context": "A great number of algorithms and mechanisms for automatic cognate detection applicable to historical linguistics have been proposed and tested by many linguists and computer scientists BIBREF2, BIBREF4, BIBREF5, BIBREF6, BIBREF7. In detail, many of these works are very similar to each other, consisting of two main stages. In the first stage, they extract the words with the same meaning from the wordlists of different languages, from either the same or different language families, compare them, and use a distance calculation matrix to compute how similar they are. In the second stage, a flat clustering algorithm or a network partitioning algorithm is used to partition all words into cognate sets, taking the information in the matrix of word pairs as its basis BIBREF4, BIBREF5. However, the methods those researchers use to compare the word pairs are quite different, in that people may use different methods to pre-process their language datasets, or even use different algorithms to perform the comparison and clustering tasks. For example, intuitively, one might start the automated word comparison by computing the distance between the words, as with word embeddings in NLP, e.g., GloVe BIBREF8, which compute the semantic similarities between two words.
In computational historical linguistics, phonetic segments are used instead to calculate how close two words are, because the semantics of a single word does not change as easily as its phonetics. The problem is that, since the program involves data pre-processing, the whole dataset has to be traversed twice, and the computation becomes a serious bottleneck when the dataset grows to over 100 languages. Consequently, people began to look for faster methods.", "id": 1618, "question": "What metrics are used to evaluate results?", "title": "Markov Chain Monte-Carlo Phylogenetic Inference Construction in Computational Historical Linguistics" }, { "answers": [ "The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system. " ], "context": "Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues.", "id": 1619, "question": "Which is the baseline model?", "title": "Phonetic Temporal Neural Model for Language Identification" }, { "answers": [ "" ], "context": "There are more than 5000 languages in the world, and each language has distinct properties at different levels, from acoustics to semantics BIBREF0, BIBREF1, BIBREF2. A number of studies have investigated how humans use these properties as cues to distinguish between languages BIBREF3. For example, Muthusamy BIBREF4 found that familiarity with a language is an important factor affecting LID accuracy, and that longer speech samples are easier to identify. Moreover, people can easily tell what cues they use for identification, including phonemic inventory, word usage, and prosody. More thorough investigations were conducted by others by modifying speech samples to promote one or several factors. For example, Mori et al. BIBREF5 found that people are able to identify Japanese and English fairly reliably even when phone information is reduced. They argued that other non-linguistic cues such as intensity and pitch were used to decide the language. Navratil BIBREF6 evaluated the importance of various types of knowledge, including lexical, phonotactic and prosodic, by asking humans to identify five languages: Chinese, English, French, German and Japanese. Subjects were presented with unaltered speech samples, samples with randomly altered syllables, and samples with the vocal-tract information removed to leave only the F0 and amplitude. Navratil found that the speech samples with random syllables are more difficult to identify compared to the original samples (73.9% vs 96%), and that removing vocal-tract information leads to a significant performance reduction (73.9% vs 49.4%). This means that in this 5-language LID task, the lexical and phonotactic information is important for human decision making.", "id": 1620, "question": "How big is the Babel database?", "title": "Phonetic Temporal Neural Model for Language Identification" }, { "answers": [ "Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance" ], "context": "Based on the different types of cues, multiple LID approaches have been proposed. Early work generally focused on feature-level cues.
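As a toy illustration of the word-comparison stage described in the cognate-detection discussion above, the sketch below scores a pair of words as sequences of phonetic segments using a normalized edit distance. This is a simplification (practical systems use weighted sound-class alignments), and the segmentations are invented for the example.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance over segment lists.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def segment_distance(w1, w2):
    # Normalize by the longer word so scores are comparable across lengths.
    return edit_distance(w1, w2) / max(len(w1), len(w2))

# German "Hand" vs. English "hand" as rough phonetic segments: prints 0.5.
print(segment_distance(["h", "a", "n", "t"], ["h", "ae", "n", "d"]))
```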
Feature-based methods use strong statistical models built on raw acoustic features to make the LID decision. For instance, Cimarusti used LPC features BIBREF7, and Foil et al. BIBREF8 investigated formant features. Dynamic features that involve temporal information were also demonstrated to be effective BIBREF9. The statistical models used include Gaussian mixture models (GMMs) BIBREF10, BIBREF11, hidden Markov models (HMMs) BIBREF12, BIBREF13, neural networks (NNs) BIBREF14, BIBREF15, and support vector machines (SVMs) BIBREF16. More recently, a low-rank GMM model known as the i-vector model was proposed and achieved significant success BIBREF17, BIBREF18. This model constrains the mean vectors of the GMM components in a low-dimensional space to improve the statistical strength for model training, and uses a task-oriented discriminative model (e.g., linear discriminant analysis, LDA) to improve the decision quality at run-time, leading to improved LID performance. Due to the short-time property of the features, most feature-based methods model the distributional characteristics rather than the temporal characteristics of speech signals.", "id": 1621, "question": "What is the main contribution of the paper? ", "title": "Phonetic Temporal Neural Model for Language Identification" }, { "answers": [ "" ], "context": "Recurrent Neural Networks (RNNs) are powerful machine learning models that can capture and exploit sequential data. They have become standard in important natural language processing tasks such as machine translation BIBREF0, BIBREF1 and speech recognition BIBREF2. Despite the ubiquity of various RNN architectures in natural language processing, a fundamental question remains unanswered: What classes of languages can, empirically or theoretically, be learned by neural networks? This question has drawn much attention in the study of formal languages, with previous results on both the theoretical BIBREF3, BIBREF4 and empirical capabilities of RNNs, showing that different RNN architectures can learn certain regular BIBREF5, BIBREF6, context-free BIBREF7, BIBREF8, and context-sensitive languages BIBREF9.", "id": 1622, "question": "What training settings did they try?", "title": "On Evaluating the Generalization of LSTM Models in Formal Languages" }, { "answers": [ "These are well-known formal languages, some of which were used in the literature to evaluate the learning capabilities of RNNs." ], "context": "It has been shown that RNNs with a finite number of states can process regular languages by acting like a finite-state automaton using different units in their hidden layers BIBREF5, BIBREF6. RNNs, however, are not limited to recognizing only regular languages. BIBREF3 and BIBREF4 showed that first-order RNNs (with rational state weights and infinite numeric precision) can simulate a pushdown automaton with two stacks, thereby demonstrating that RNNs are Turing-complete. In theory, RNNs with infinite numeric precision are capable of expressing recursively enumerable languages. Yet, in practice, modern machine architectures do not contain computational structures that support infinite numeric precision.
Thus, the computational power of RNNs with finite precision may not necessarily be the same as that of RNNs with infinite precision.", "id": 1623, "question": "How do they get the formal languages?", "title": "On Evaluating the Generalization of LSTM Models in Formal Languages" }, { "answers": [ "" ], "context": "Following the traditional approach adopted by BIBREF7, BIBREF12, BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$, we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$-dimensional binary vectors. The output values are $(d+1)$-dimensional though, since they may further contain the termination symbol $\dashv $, in addition to the symbols in $\mathcal {V}^{(i)}$. The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence; therefore, we instead use a $k$-hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. During testing, we use an output threshold criterion of $0.5$ for the sigmoid output layer to indicate which characters were predicted by the model. We then turn this prediction task into a classification task by accepting a sample if our model predicts all of its output values correctly and rejecting it otherwise.", "id": 1624, "question": "Are the unobserved samples from the same distribution as the training data?", "title": "On Evaluating the Generalization of LSTM Models in Formal Languages" }, { "answers": [ "" ], "context": "Sentiment Analysis (SA) is an active field of research in Natural Language Processing that deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity.", "id": 1625, "question": "By how much does their model outperform the baseline in the cross-domain evaluation?", "title": "Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification" }, { "answers": [ "" ], "context": "We separate our discussion of related work into two areas: first, neural methods applied to ATSC that have improved performance solely through model architecture improvements; second, methods that additionally aim to transfer knowledge from semantically related tasks or domains.", "id": 1626, "question": "What are the performance results?", "title": "Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification" }, { "answers": [ "graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences" ], "context": "Recent years have seen an increased usage of spoken language technology in applications ranging from speech transcription BIBREF0 to personal assistants BIBREF1. The quality of these applications heavily depends on the accuracy of the underlying automatic speech recognition (ASR) system yielding 1-best hypotheses and on how well ASR errors are mitigated. The standard approach to ASR error mitigation is confidence scores BIBREF2, BIBREF3.
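The character-prediction setup described above (one-hot inputs of size d, k-hot outputs of size d+1 including a termination symbol, MSE loss, and a 0.5 output threshold) can be sketched as follows; the shapes and names are ours, not the authors'.

```python
import torch
import torch.nn as nn

d = 4                                   # vocabulary size
rnn = nn.LSTM(input_size=d, hidden_size=32, batch_first=True)
readout = nn.Linear(32, d + 1)          # d symbols + termination symbol

x = torch.eye(d)[torch.tensor([[0, 1, 2]])]   # one-hot input sequence
target = torch.zeros(1, 3, d + 1)             # k-hot sets of allowed symbols
target[0, 0, 1] = 1.0
target[0, 1, 2] = 1.0
target[0, 2, d] = 1.0                         # termination symbol at the end

hidden, _ = rnn(x)
pred = torch.sigmoid(readout(hidden))
loss = nn.functional.mse_loss(pred, target)   # MSE training objective
# Threshold at 0.5; accept the sample only if every output is correct.
accepted = bool(((pred > 0.5).float() == target).all())
```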
A low confidence score can signal to downstream applications that the ASR prediction is highly uncertain, and measures can then be taken to mitigate the risk of making a wrong decision. However, confidence scores can also be used in upstream applications such as speaker adaptation BIBREF4 and semi-supervised training BIBREF5, BIBREF6 to reflect uncertainty among multiple possible alternative hypotheses. Downstream applications, such as machine translation and information retrieval, could similarly benefit from using multiple hypotheses.", "id": 1627, "question": "What is a confusion network or lattice?", "title": "Bi-Directional Lattice Recurrent Neural Networks for Confidence Estimation" }, { "answers": [ "" ], "context": "Tonal languages use pitch to distinguish different words; for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of the over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks.", "id": 1628, "question": "What dataset is used for training?", "title": "Representation Learning for Discovering Phonemic Tone Contours" }, { "answers": [ "NMI between cluster assignments and ground truth tones for all syllables is:\nMandarin: 0.641\nCantonese: 0.464" ], "context": "Mandarin Chinese (1.1 billion speakers) and Cantonese (74 million speakers) are two tonal languages in the Sinitic family BIBREF0. Mandarin has four lexical tones: high (55), rising (25), low-dipping (214), and falling (51). The third tone sometimes undergoes sandhi, addressed in section SECREF3. We exclude a fifth, neutral tone, which can only occur in word-final positions and has no fixed pitch.", "id": 1629, "question": "How close do clusters match to ground truth tone categories?", "title": "Representation Learning for Discovering Phonemic Tone Contours" }, { "answers": [ "Precision, Recall, F1" ], "context": "A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.), and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be used to refer to 18 different entities, including the continent, the European Union, various Greek mythological entities, a rock band, some music albums, a magazine, a short story, etc. Furthermore, it is possible to refer to a named entity by means of anaphoric pronouns and co-referent expressions such as `he', `her', `their', `I', `the 35 year old', etc. Therefore, in order to provide an adequate and comprehensive account of named entities in text, it is necessary to recognize the mention of a named entity and to classify it by a pre-defined type (e.g., person, location, organization).
Named Entity Recognition and Classification (NERC) is usually a required step to perform Named Entity Disambiguation (NED), namely to link `Europe' to the right Wikipedia article, and to resolve every form of mentioning or co-referring to the same entity.", "id": 1630, "question": "what are the evaluation metrics?", "title": "Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features" }, { "answers": [ "CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0" ], "context": "The main contributions of this paper are the following: First, we show how to easily develop robust NERC systems across datasets and languages with minimal human intervention, even for languages with declension and/or complex morphology. Second, we empirically show how to effectively use various types of simple word representation features, thereby providing a clear methodology for choosing and combining them. Third, we demonstrate that our system still obtains very competitive results even when the supervised data is reduced by half (even less in some cases), alleviating the dependency on costly hand-annotated data. These three main contributions are based on:", "id": 1631, "question": "which datasets were used in evaluation?", "title": "Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features" }, { "answers": [ "Perceptron model using the local features." ], "context": "The Named Entity Recognition and Classification (NERC) task was first defined for the Sixth Message Understanding Conference (MUC 6) BIBREF39. The MUC 6 tasks focused on Information Extraction (IE) from unstructured text, and NERC was deemed to be an important IE sub-task with the aim of recognizing and classifying nominal mentions of persons, organizations and locations, and also numeric expressions of dates, money, percentage and time. In the following years, research on NERC increased as it was considered to be a crucial source of information for other Natural Language Processing tasks such as Question Answering (QA) and Textual Entailment (RTE) BIBREF39. Furthermore, while MUC 6 was solely devoted to English as the target language, the CoNLL shared tasks (2002 and 2003) boosted research on language-independent NERC for 3 additional target languages: Dutch, German and Spanish BIBREF40, BIBREF41.", "id": 1632, "question": "what are the baselines?", "title": "Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features" }, { "answers": [ "" ], "context": "Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.", "id": 1633, "question": "What multilingual word representations are used?", "title": "Irony Detection in a Multilingual Context" }, { "answers": [ "" ], "context": "Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13, which we extended to different political issues and events related to the Middle East and Maghreb that took place during the years 2011 to 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (#سخرية, #مسخرة, #تهكم, #استهزاء). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs.
$15,509$ non-ironic ($NI$) tweets written using standard (formal) Arabic and different Arabic language varieties: Egyptian, Gulf, Levantine, and Maghrebi dialects.", "id": 1634, "question": "Do the authors identify any cultural differences in irony use?", "title": "Irony Detection in a Multilingual Context" }, { "answers": [ "" ], "context": "It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The result can show which kinds of features work better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, it can show us to what extent ID is language dependent by comparing monolingual results to multilingual results. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).", "id": 1635, "question": "What neural architectures are used?", "title": "Irony Detection in a Multilingual Context" }, { "answers": [ "" ], "context": "We use the previous CNN architecture with bilingual embeddings and the RF model with surface features (e.g., use of personal pronouns, presence of interjections, emoticons or specific punctuation) to verify which pairs of the three languages: (a) have similar ironic pragmatic devices, and (b) use similar text-based patterns in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping, such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al.'s approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non-European languages. In each experiment, we took 20% of the training data to validate the model before the testing process. Table TABREF11 presents the results.", "id": 1636, "question": "What text-based features are used?", "title": "Irony Detection in a Multilingual Context" }, { "answers": [ "AraVec for Arabic, FastText for French, and Word2vec Google News for English." ], "context": "This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting, given a cross-lingual word representation or basic surface features. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on cross-lingual word representations shows that irony has a certain similarity between the languages we targeted despite the cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40.
The manual analysis of the common misclassified tweets across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! (Let's start again, get off get off Mubarak!!), where the writer mocks the Egyptian revolution, as the current president \"Sisi\" is viewed as one of Mubarak's fellows. (2) Second, the presence of out-of-vocabulary (OOV) terms because of the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the set of unseen OOV words is large during the training process. We found tweets in all three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g., phat instead of fat). (3) Another important issue is the difficulty of dealing with the Arabic language. Arabic tweets are often characterized by non-diacritised texts, large variations of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egyptian, Gulf, Levantine, and Maghrebi), the presence of transliterated words (e.g., the word table becomes طابلة (tabla)), and finally linguistic code-switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found that some tweets contain only words from one of the varieties, and most of these words do not exist in the Arabic embeddings model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم (day), مبارك (Mubarak), and هو (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open to multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.", "id": 1637, "question": "What monolingual word representations are used?", "title": "Irony Detection in a Multilingual Context" }, { "answers": [ "" ], "context": "With the expansion of microblogging platforms such as Twitter, the Internet is increasingly being used to spread health information, rather than merely serving as a source of data BIBREF0, BIBREF1. Twitter allows users to share short status messages, typically called tweets, restricted to 140 characters. Most of the time, these tweets express opinions about topics. Thus, the analysis of tweets is a significant task in many applications, here health-related ones.", "id": 1638, "question": "Does the proposed method outperform a baseline?", "title": "Deep Health Care Text Classification" }, { "answers": [ "" ], "context": "This section discusses the concepts of tweet representation and deep learning algorithms, particularly recurrent neural networks (RNNs) and long short-term memory (LSTM), in a mathematical way.", "id": 1639, "question": "What type of RNN is used?", "title": "Deep Health Care Text Classification" }, { "answers": [ "" ], "context": "Summarization is a promising technique for reducing information overload.
It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and concatenating them to form a summary, whereas abstractive methods can involve a number of high-level text operations such as word reordering, paraphrasing, and generalization BIBREF1 . To date, summarization has been successfully exploited for a number of text domains, including news articles BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , product reviews BIBREF6 , online forum threads BIBREF7 , meeting transcripts BIBREF8 , scientific articles BIBREF9 , BIBREF10 , student course responses BIBREF11 , BIBREF12 , and many others.", "id": 1640, "question": "What do they constrain using integer linear programming?", "title": "A Novel ILP Framework for Summarizing Content with High Lexical Variety" }, { "answers": [ "One model per topic." ], "context": "Extractive summarization has undergone great development over the past decades. It focuses on extracting relevant sentences from a single document or a cluster of documents related to a particular topic. Various techniques have been explored, including maximal marginal relevance BIBREF22 , submodularity BIBREF23 , integer linear programming BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF4 , minimizing reconstruction error BIBREF28 , graph-based models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , determinantal point processes BIBREF33 , neural networks and reinforcement learning BIBREF34 , BIBREF35 among others. Nonetheless, most studies are bound to a single dataset and few approaches have been evaluated in a cross-domain setting. In this paper, we propose an enhanced ILP framework and evaluate it on a broad range of datasets. We present an in-depth analysis of the dataset characteristics derived from both source documents and reference summaries to understand how domain-specific factors may affect the applicability of the proposed approach.", "id": 1641, "question": "Do they build one model per topic or on all topics?", "title": "A Novel ILP Framework for Summarizing Content with High Lexical Variety" }, { "answers": [ "They evaluate quantitatively." ], "context": "Let INLINEFORM0 be a set of documents that consist of INLINEFORM1 sentences in total. Let INLINEFORM2 , INLINEFORM3 indicate if a sentence INLINEFORM4 is selected ( INLINEFORM5 ) or not ( INLINEFORM6 ) in the summary. Similarly, let INLINEFORM7 be the number of unique concepts in INLINEFORM8 . INLINEFORM9 , INLINEFORM10 indicate the appearance of concepts in the summary. Each concept INLINEFORM11 is assigned a weight of INLINEFORM12 , often measured by the number of sentences or documents that contain the concept. The ILP-based summarization approach BIBREF20 searches for an optimal assignment to the sentence and concept variables so that the selected summary sentences maximize coverage of important concepts. The relationship between concepts and sentences is captured by a co-occurrence matrix INLINEFORM13 , where INLINEFORM14 indicates the INLINEFORM15 -th concept appears in the INLINEFORM16 -th sentence, and INLINEFORM17 otherwise. In the literature, bigrams are frequently used as a surrogate for concepts BIBREF24 , BIBREF21 . 
We follow the convention and use `concept' and `bigram' interchangeably in this paper.", "id": 1642, "question": "Do they quantitatively or qualitatively evaluate the output of their low-rank approximation to verify the grouping of lexical items?", "title": "A Novel ILP Framework for Summarizing Content with High Lexical Variety" }, { "answers": [ "" ], "context": "Because of the lexical diversity problem, we suspect the co-occurrence matrix INLINEFORM0 may not establish a faithful correspondence between sentences and concepts. A concept may be conveyed using multiple bigram expressions; however, the current co-occurrence matrix only captures a binary relationship between sentences and bigrams. For example, we ought to give partial credit to “bicycle parts” given that a similar expression “bike elements” appears in the sentence. Domain-specific synonyms may be captured as well. For example, the sentence “I tried to follow along but I couldn't grasp the concepts” is expected to partially contain the concept “understand the”, although the latter did not appear in the sentence.", "id": 1643, "question": "Do they evaluate their framework on content of low lexical variety?", "title": "A Novel ILP Framework for Summarizing Content with High Lexical Variety" }, { "answers": [ "" ], "context": "Street gangs are defined as “a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise” BIBREF0. They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1. Today, over 1.4 million people, belonging to more than 33,000 gangs, are active in the United States BIBREF2, of which 88% identify themselves as being members of a street gang. They are also active users of social media BIBREF2; according to the 2007 National Assessment Center's survey of gang members, 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF3. More recent studies report that approximately 45% of gang members participate in online offending activities such as threatening or harassing individuals, posting violent videos, or attacking someone on the street for something they said online BIBREF4, BIBREF5. They confirm that gang members use social media to express themselves in ways similar to their offline behavior on the streets BIBREF6.", "id": 1644, "question": "Do the authors report on English datasets only?", "title": "Word Embeddings to Enhance Twitter Gang Member Profile Identification" }, { "answers": [ "" ], "context": "Researchers have begun investigating gang members' use of social media and have noticed the importance of identifying gang members' Twitter profiles a priori BIBREF6, BIBREF7. Before analyzing any textual content retrieved from their social media posts, knowing that a post has originated from a gang member could help systems to better understand the message conveyed by that post. Wijeratne et al. developed a framework to analyze what gang members post on social media BIBREF7. Their framework could only extract social media posts from self-identified gang members by searching for pre-identified gang names in a user's Twitter profile description. Patton et al. developed a method to collect tweets from a group of gang members operating in Detroit, MI BIBREF11.
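Returning to the concept-based ILP formulation above: the sketch below is a compact instance in the spirit of that formulation, with toy weights and a toy sentence-concept co-occurrence matrix. It assumes the open-source PuLP library and is not the paper's implementation.

```python
# Select sentences to maximize weighted concept coverage under a length budget.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

w = [3, 2, 1]                  # concept weights (e.g., document frequency)
A = [[1, 0, 1],                # A[i][j] = 1 iff sentence i contains concept j
     [1, 1, 0],
     [0, 1, 1]]
length = [8, 6, 7]             # sentence lengths in words
budget = 14                    # summary length limit

n_sents, n_concepts = len(A), len(w)
s = LpVariable.dicts("s", range(n_sents), cat="Binary")
c = LpVariable.dicts("c", range(n_concepts), cat="Binary")

prob = LpProblem("summarization", LpMaximize)
prob += lpSum(w[j] * c[j] for j in range(n_concepts))          # objective
prob += lpSum(length[i] * s[i] for i in range(n_sents)) <= budget
for j in range(n_concepts):
    # A concept is covered iff at least one selected sentence contains it.
    prob += c[j] <= lpSum(A[i][j] * s[i] for i in range(n_sents))
    for i in range(n_sents):
        prob += A[i][j] * s[i] <= c[j]

prob.solve()
print([i for i in range(n_sents) if s[i].value() == 1])  # selected sentences
```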
However, their approach required the gang members' Twitter profile names to be known beforehand, and data collection was localized to a single city in the country. These studies investigated a small set of manually curated gang member profiles, often from a small geographic area, which may bias their findings.", "id": 1645, "question": "Which supervised learning algorithms are used in the experiments?", "title": "Word Embeddings to Enhance Twitter Gang Member Profile Identification" }, { "answers": [ "" ], "context": "A word embedding model is a neural network that learns rich representations of words in a text corpus. It takes data from a large, INLINEFORM0 -dimensional `word space' (where INLINEFORM1 is the number of unique words in a corpus) and learns a transformation of the data into a lower INLINEFORM2 -dimensional space of real numbers. This transformation is developed such that similarities between the INLINEFORM3 -dimensional vector representations of two words reflect semantic relationships among the words themselves. These semantics are not captured by typical bag-of-words or INLINEFORM4 -gram models for classification tasks on text data BIBREF14, BIBREF10.", "id": 1646, "question": "How is YouTube content translated into a vector format?", "title": "Word Embeddings to Enhance Twitter Gang Member Profile Identification" }, { "answers": [ "" ], "context": "Gang member tweets and profile descriptions tend to have few textual indicators that demonstrate their gang affiliations, or their tweets/profile text may carry acronyms which can only be deciphered by others involved in gang culture BIBREF9. These gang-related terms are often local to gangs operating in neighborhoods and change rapidly when they form new gangs. Consequently, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, we use heterogeneous sets of features derived not only from profile and tweet text but also from emoji usage, profile images, and links to YouTube videos reflecting their music preferences and affinity. In this section, we briefly discuss the feature types and their broad differences in gang and non-gang member profiles. An in-depth explanation of this feature selection can be found in BIBREF9.", "id": 1647, "question": "How is the ground truth of gang membership established in this dataset?", "title": "Word Embeddings to Enhance Twitter Gang Member Profile Identification" }, { "answers": [ "" ], "context": "While NER tasks across domains share similar problems of ambiguous abbreviations, homonyms, and other entity variations, the domain of biomedical text poses some unique challenges. While, in principle, there is a known set of biomedical entities (e.g., all known proteins), there is a surprising amount of variation for any given entity. For example, PPCA, C4 PEPC, C4 PEPCase, and Photosynthetic PEPCase all refer to the same entity. Additionally, certain entities such as proteins and genes can naturally span less than a “word\" (e.g., HA and APG12 are separate proteins in pHA-APG12). Most state-of-the-art NER methods tag entities at the “word\" level, and rely on pre- or post-processing rules to extract subword entities.
Our goal is to develop a subword approach that does not rely on ad hoc processing steps.", "id": 1648, "question": "Do they evaluate ablated versions of their CNN+RNN model?", "title": "A Byte-sized Approach to Named Entity Recognition" }, { "answers": [ "" ], "context": "Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task, as in previous years, is to perform speaker detection with a focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation of languages addressed in this evaluation, the recording environment, and the variability of test segment duration influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation.", "id": 1649, "question": "Do they single out a validation set from the fixed SRE training set?", "title": "The Intelligent Voice 2016 Speaker Recognition System" }, { "answers": [ "EER 16.04, Cmindet 0.6012, Cdet 0.6107" ], "context": "The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the Linguistic Data Consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phases I, II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1. We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections.", "id": 1650, "question": "How well does their system perform on the development set of SRE?", "title": "The Intelligent Voice 2016 Speaker Recognition System" }, { "answers": [ "" ], "context": "In this section we provide a description of the main steps in the front-end processing of our speaker recognition system, including speech activity detection and acoustic and i-vector feature extraction.", "id": 1651, "question": "Which are the novel languages on which SRE placed emphasis?", "title": "The Intelligent Voice 2016 Speaker Recognition System" }, { "answers": [ "" ], "context": "Word embedding is a very active area of research. It consists of using a text corpus to characterize and embed words into rich high-dimensional vector spaces. By mining a text corpus, it is possible to embed words in a continuous space where semantically similar words are embedded close together. By encoding words into vectors, it is possible to represent semantic properties of these words in a way that is more expressive and useful for tasks of natural language processing. Word embeddings have been effectively used for sentiment analysis, machine translation, and other language-related tasks BIBREF0, BIBREF1.", "id": 1652, "question": "Does this approach perform better than context-based word embeddings?", "title": "Synonym Discovery with Etymology-based Word Embeddings" }, { "answers": [ "" ], "context": "There exists limited research on etymological networks in the English language.
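As a minimal sketch of the embedding idea above, where nearest neighbours in a learned vector space serve as candidate synonyms, the following assumes gensim >= 4; the toy corpus is illustrative only.

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["a", "fast", "brown", "fox", "leaps", "over", "a", "sleepy", "dog"],
    ["the", "quick", "red", "fox", "jumps", "over", "the", "sleepy", "cat"],
]
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

# Semantically similar words end up close together in the learned space;
# their nearest neighbours can be read off as candidate synonyms.
print(model.wv.most_similar("quick", topn=3))
```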
In particular, BIBREF7 and BIBREF8 use an etymological network-based approach to study movie scripts and reviews in English.", "id": 1653, "question": "Have the authors tried this approach on other languages?", "title": "Synonym Discovery with Etymology-based Word Embeddings" }, { "answers": [ "" ], "context": "Word embeddings are most effective when they learn from both unstructured text and a graph of general knowledge BIBREF0. ConceptNet 5 BIBREF1 is an open-data knowledge graph that is well suited for this purpose. It is accompanied by a pre-built word embedding model known as ConceptNet Numberbatch, which combines skip-gram embeddings learned from unstructured text with the relational knowledge in ConceptNet.", "id": 1654, "question": "What features did they train on?", "title": "Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge" }, { "answers": [ "" ], "context": "A requirement of scalable and practical question answering (QA) systems is the ability to reason over multiple documents and combine their information to answer questions. Although existing datasets enabled the development of effective end-to-end neural question answering systems, they tend to focus on reasoning over localized sections of a single document BIBREF0, BIBREF1, BIBREF2, BIBREF3. For example, BIBREF4 find that 90% of the questions in the Stanford Question Answering Dataset are answerable given 1 sentence in a document. In this work, we instead focus on multi-evidence QA, in which answering the question requires aggregating evidence from multiple documents BIBREF5, BIBREF6.", "id": 1655, "question": "How big is the test set?", "title": "Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering" }, { "answers": [ "" ], "context": "The coarse-grain module and the fine-grain module correspond to two complementary strategies. The coarse-grain module summarizes support documents without knowing the candidates: it builds codependent representations of support documents and the query using coattention, then produces a coarse-grain summary using self-attention. In contrast, the fine-grain module retrieves specific contexts in which each candidate occurs: it identifies coreferent mentions of the candidate, then uses coattention to build codependent representations between these mentions and the query. While low-level encodings of the inputs are shared between modules, we show that this division of labour allows the attention hierarchies in each module to focus on different parts of the input. This enables the model to more effectively represent a large number of potentially long support documents.", "id": 1656, "question": "What is coattention?", "title": "Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering" }, { "answers": [ "" ], "context": "Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine whether we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1.
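Since coattention recurs in the model description above, here is a generic single-layer coattention sketch; it is our simplification, not the paper's exact architecture. An affinity matrix between document and query encodings yields attention in both directions, producing codependent representations of each input conditioned on the other.

```python
import torch
import torch.nn.functional as F

m, n, h = 12, 5, 64
D = torch.randn(m, h)            # document token encodings
Q = torch.randn(n, h)            # query token encodings

affinity = D @ Q.T               # (m, n) token-pair similarities
doc_to_query = F.softmax(affinity, dim=1)    # each doc token attends over query
query_to_doc = F.softmax(affinity, dim=0).T  # each query token attends over doc

D_summary = doc_to_query @ Q     # (m, h) query-aware document representation
Q_summary = query_to_doc @ D     # (n, h) document-aware query representation
```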
Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?”", "id": 1657, "question": "What off-the-shelf QA model was used to answer sub-questions?", "title": "Unsupervised Question Decomposition for Question Answering" }, { "answers": [ "" ], "context": "We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” $a_1, \dots , a_N$ may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [s_N, a_N])$.", "id": 1658, "question": "How large is the improvement over the baseline?", "title": "Unsupervised Question Decomposition for Question Answering" }, { "answers": [ "" ], "context": "To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6).", "id": 1659, "question": "What is the strong baseline that this work outperforms?", "title": "Unsupervised Question Decomposition for Question Answering" }, { "answers": [ "" ], "context": "Simultaneous translation is a translation task where the translation process starts before the end of an input. It helps real-time spoken language communications such as human conversations and public talks. A usual machine translation system works at the sentence level and starts its translation process after it reads the end of a sentence.
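A pipeline sketch of the sub-question approach described above, with hypothetical stand-ins for the decomposer, the off-the-shelf single-hop QA model, and the downstream multi-hop model; the separator tokens are our assumption.

```python
from typing import Callable, List

def answer_multi_hop(question: str,
                     context: str,
                     decompose: Callable[[str], List[str]],
                     single_hop_qa: Callable[[str, str], str],
                     multi_hop_qa: Callable[[str, str], str]) -> str:
    # Break the hard question into sub-questions, answer each with the
    # single-hop model, and append [sub-question, sub-answer] pairs to the
    # context so the multi-hop model can condition on them.
    augmented = context
    for sq in decompose(question):
        sub_answer = single_hop_qa(sq, context)
        augmented += f" [SQ] {sq} [SA] {sub_answer}"
    return multi_hop_qa(question, augmented)
```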
Such a sentence-level system would not be appropriate for spoken languages for roughly two reasons: (1) sentence boundaries are not clear, and (2) a large latency occurs for a long input.", "id": 1660, "question": "Which dataset do they use?", "title": "Simultaneous Neural Machine Translation using Connectionist Temporal Classification" }, { "answers": [ "" ], "context": "First, we review a general NMT model following the formulation by BIBREF1 and the “Wait-k\" model BIBREF2 that is the baseline model for simultaneous NMT.", "id": 1661, "question": "Do they trim the search space of possible output sequences?", "title": "Simultaneous Neural Machine Translation using Connectionist Temporal Classification" }, { "answers": [ "" ], "context": "The encoder takes a source sentence $X$ as input and returns forward hidden vectors $\overrightarrow{\textbf {h}_i}(1 \le i \le I)$ of the forward RNNs:", "id": 1662, "question": "Which model architecture do they use to build a model?", "title": "Simultaneous Neural Machine Translation using Connectionist Temporal Classification" }, { "answers": [ "" ], "context": "The decoder takes source hidden vectors as inputs and returns target language words one-by-one with the attention mechanism. The decoder RNNs recurrently generate target words using their hidden state and an output context. The conditional generation probability of the target word $\textbf {y}_i$ is defined as follows:", "id": 1663, "question": "Do they compare simultaneous translation performance to regular machine translation?", "title": "Simultaneous Neural Machine Translation using Connectionist Temporal Classification" }, { "answers": [ "" ], "context": "In this work, we propose a method to decide the output timing adaptively. The proposed method introduces a special token <wait> into the target-side vocabulary, which is output when the model should wait for further input instead of emitting a translation.", "id": 1664, "question": "Which metrics do they use to evaluate simultaneous translation?", "title": "Simultaneous Neural Machine Translation using Connectionist Temporal Classification" }, { "answers": [ "" ], "context": "Datasets: Over the past few years several large-scale datasets for Visual Question Answering have been released. These include datasets such as COCO-QA BIBREF3, DAQUAR BIBREF4, and VQA BIBREF5, BIBREF6, which contain questions asked over natural images. On the other hand, datasets such as CLEVR BIBREF7 and NVLR BIBREF8 contain complex reasoning-based questions on synthetic images having 2D and 3D geometric objects. There are some datasets BIBREF9, BIBREF10 which contain questions asked over diagrams found in textbooks, but these datasets are smaller and contain multiple-choice questions. FigureSeer BIBREF11 is another dataset which contains images extracted from research papers, but this is also a relatively small (60,000 images) dataset. Further, FigureSeer focuses on answering questions based on line plots as opposed to other types of plots such as bar charts, scatter plots, etc., as seen in FigureQA BIBREF0 and DVQA BIBREF1.", "id": 1665, "question": "How big are FigureQA and DVQA datasets?", "title": "Data Interpretation over Plots" }, { "answers": [ "" ], "context": "In this section, we describe the PlotQA dataset and the process to build it.
Specifically, we discuss the four main stages, viz., (i) curating data such as year-wise rainfall statistics, country-wise mortality rates, etc., (ii) creating different types of plots with a variation in the number of elements, legend positions, fonts, etc., (iii) crowd-sourcing to generate questions, and (iv) extracting templates from the crowd-sourced questions and instantiating these templates using appropriate phrasing suggested by human annotators.", "id": 1666, "question": "What models other than SAN-VOES are trained on new PlotQA dataset?", "title": "Data Interpretation over Plots" }, { "answers": [ "" ], "context": "Analysing sentiment from text is a well-known NLP problem. Several state-of-the-art tools exist that can achieve this with reasonable accuracy. However, most of the existing tools perform well on well-formatted text. In the case of tweets, the user-generated content is short, noisy, and in many cases ( INLINEFORM0 ) doesn't follow proper grammatical structure. Additionally, numerous internet slang terms, abbreviations, URLs, emoticons, and unconventional styles of capitalization are found in the tweets. As a result, the accuracy of state-of-the-art NLP tools decreases sharply. In this project, we develop new features to incorporate the styles salient in short, informal user-generated content like tweets. We achieve an F1-accuracy of INLINEFORM1 for predicting the sentiment of tweets in our dataset. We also propose a method to discover new sentiment terms from the tweets.", "id": 1667, "question": "Do the authors report only on English language data?", "title": "Sentiment Analysis for Twitter : Going Beyond Tweet Text" }, { "answers": [ "" ], "context": "Tweets are short messages, restricted to 140 characters in length. Due to the nature of this microblogging service (quick and short messages), people use acronyms, make spelling mistakes, and use emoticons and other characters that express special meanings. The following is a brief terminology associated with tweets:", "id": 1668, "question": "What dataset of tweets is used?", "title": "Sentiment Analysis for Twitter : Going Beyond Tweet Text" }, { "answers": [ "" ], "context": "Since tweets are informal in nature, some pre-processing is required. Consider the following tweet.", "id": 1669, "question": "What external sources of information are used?", "title": "Sentiment Analysis for Twitter : Going Beyond Tweet Text" }, { "answers": [ "" ], "context": "Normalization is done as follows:", "id": 1670, "question": "What linguistic features are used?", "title": "Sentiment Analysis for Twitter : Going Beyond Tweet Text" }, { "answers": [ "" ], "context": "Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tend to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4.
The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.", "id": 1671, "question": "What are the key issues around whether the gold standard data produced in such an annotation is reliable? ", "title": "Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?" }, { "answers": [ "" ], "context": "All approaches to producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on arXiv.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.", "id": 1672, "question": "How were the machine learning papers from ArXiv sampled?", "title": "Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?" }, { "answers": [ "" ], "context": "In the introduction, we noted that training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes.
A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.", "id": 1673, "question": "What are the core best practices of structured content analysis?", "title": "Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?" }, { "answers": [ "" ], "context": "Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.", "id": 1674, "question": "In what sense is data annotation similar to structured content analysis? ", "title": "Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?" }, { "answers": [ "" ], "context": "On December 31, 2019, Chinese public health authorities reported several cases of a respiratory syndrome in the city of Wuhan, China, caused by what was then an unknown disease, which subsequently became known as COVID-19. This highly contagious disease continued to spread worldwide, leading the World Health Organization (WHO) to declare a global health emergency on January 30, 2020. On March 11, 2020, the disease was declared a pandemic by the WHO, and many countries around the world, including Saudi Arabia, the United States, the United Kingdom, Italy, Canada, and Germany, have continued reporting more cases of the disease BIBREF0. At the time of writing this paper, this pandemic is affecting more than 208 countries around the globe, with more than one and a half million confirmed cases BIBREF1.", "id": 1675, "question": "What additional information is found in the dataset?", "title": "Large Arabic Twitter Dataset on COVID-19" }, { "answers": [ "" ], "context": "We collected COVID-19-related Arabic tweets from January 1, 2020 until April 15, 2020, using the Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag.
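A hedged sketch of this kind of keyword-tracked collection, assuming Tweepy 3.x; the credentials are placeholders and the keyword list is abbreviated, not the paper's full list.

```python
# Illustrative keyword-tracked streaming with Tweepy 3.x (placeholder
# credentials; abbreviated keyword list).
import json
import tweepy

KEYWORDS = ["كورونا", "فيروس", "COVID-19"]  # abbreviated illustrative list

class CovidListener(tweepy.StreamListener):
    def on_status(self, status):
        # Persist the full tweet object (id, user, hashtags, geolocation, ...).
        with open("tweets.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(status._json, ensure_ascii=False) + "\n")

auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
tweepy.Stream(auth, CovidListener()).filter(track=KEYWORDS)
```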
Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic.", "id": 1676, "question": "Is the dataset focused on a region?", "title": "Large Arabic Twitter Dataset on COVID-19" }, { "answers": [ "" ], "context": "The dataset is accessible on GitHub at this address: https://github.com/SarahAlqurashi/COVID-19-Arabic-Tweets-Dataset", "id": 1677, "question": "Over what period of time were the tweets collected?", "title": "Large Arabic Twitter Dataset on COVID-19" }, { "answers": [ "" ], "context": "We are continuously updating the dataset to maintain more aspects of COVID-19 Arabic conversations and discussions happening on Twitter. We also plan to study how different groups respond to the pandemic and analyze information sharing behavior among the users.", "id": 1678, "question": "Are the tweets location-specific?", "title": "Large Arabic Twitter Dataset on COVID-19" }, { "answers": [ "" ], "context": "The authors wish to express their thanks to Batool Mohammed Hmawi for her help in data collection.", "id": 1679, "question": "How big is the dataset?", "title": "Large Arabic Twitter Dataset on COVID-19" }, { "answers": [ "" ], "context": "Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as soon as they occur and are first reported by a Twitter user. The event detection process requires knowing the keywords associated with each event and assessing the minimal count of each word needed to decide confidently that an event has occurred. In this research, we propose a novel method of spike matching to identify keywords, and use probabilistic classification to assess the probability of having an event given the volume of each word.", "id": 1680, "question": "Do the authors suggest any future extensions to this work?", "title": "Event detection in Twitter: A keyword volume approach" }, { "answers": [ "Logistic regression" ], "context": "Analyzing social networks for event detection is approached from multiple perspectives depending on the research objective. This can be predicting election results, the winner of a contest, or people's reaction to a government decision through protest. The main perspectives to analyze the social networks are (1) content analysis, where the textual content of each post is analyzed using natural language processing to identify the topic or the sentiment of the authors. (2) Network structure analysis, where the relations between users are described in a tree structure for the follower-followee patterns, or in a graph structure for friendship and interaction patterns. These patterns can be used to know the political preference of people prior to elections. (3) Behavioural analysis of each user, including sentiment, response, likes, retweets, and location, to identify responses toward specific events. This might be useful to identify users with terrorist intentions. In this section, we will focus on textual content-based models, where text analysis and understanding can be achieved using keywords, topic modelling or sentiment analysis.", "id": 1681, "question": "Which of the classifiers showed the best performance?", "title": "Event detection in Twitter: A keyword volume approach" }, { "answers": [ "" ], "context": "Keyword-based approaches focus on sequence analysis of the time series for each keyword.
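One simple way to instantiate spike matching over such time series (an illustrative variant, not necessarily the exact procedure used here) is to flag days where a keyword's count exceeds a dispersion threshold and to score the overlap between spike days and known event days:

```python
# Illustrative spike matching over daily keyword counts: a "spike" is a day
# whose count exceeds mean + 2*std (an assumed threshold), and a keyword is
# scored by the Jaccard overlap between its spike days and known event days.
import numpy as np

def spike_days(daily_counts: np.ndarray) -> set:
    threshold = daily_counts.mean() + 2 * daily_counts.std()
    return {day for day, count in enumerate(daily_counts) if count > threshold}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def score_keyword(daily_counts: np.ndarray, event_days: set) -> float:
    return jaccard(spike_days(daily_counts), event_days)
```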
They also consider different forms for each keyword, including n-gram, skip-gram, and word-pairs BIBREF6 . The keyword-based approaches use the concept of the distributional semantics to group semantically-related words as synonyms to be used as a single feature BIBREF7 . In this approach, keywords are usually associated with events by correlation, entropy or distance metrics. Also, Hossny et al. proposed using SVD with K-Means to strengthen keyword signals, by grouping words having similar temporal patterns, then mapping them into one central word that has minimum distance to the other members of the cluster BIBREF8 .", "id": 1682, "question": "Were any other word similar metrics, besides Jaccard metric, tested?", "title": "Event detection in Twitter: A keyword volume approach" }, { "answers": [ "By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events." ], "context": "Topic modelling approaches focus on clustering related words according to their meaning, and indexing them using some similarity metric such as cosine similarity or Euclidean distance. The most recognized techniques are (1) Latent Semantic Indexing (LSI), where the observation matrix is decomposed using singular value decomposition and the data are clustered using K-Means BIBREF7 ,(2) Latent Dirichlet Allocation (LDA), where the words are clustered using Gaussian mixture models (GMM) according to the likelihood of term co-occurrence within the same context BIBREF14 , (3) Word2Vec, which uses a very large corpus to compute continuous vector representations, where we can apply standard vector operations to map one vector to another BIBREF15 .", "id": 1683, "question": "How are the keywords associated with events such as protests selected?", "title": "Event detection in Twitter: A keyword volume approach" }, { "answers": [ "5575 speeches" ], "context": "As the world moves towards increasing forms of digitization, the creation of text corpora has become an important activity for NLP and other fields of research. Parliamentary data is a rich corpus of discourse on a wide array of topics. The Lok Sabha website provides access to all kinds of reports, debates, bills related to the proceedings of the house. Similarly, the Rajya Sabha website also contains debates, bills, reports introduced in the house. The Lok Sabha website also contains information about members of the parliament who are elected by the people and debate in the house. Since the data is unstructured , it cannot be computationally analyzed. There is a need to shape the data into a structured format for analysis. This data is important as it can be used to visualize person, party and agenda level semantics in the house.", "id": 1684, "question": "How many speeches are in the dataset?", "title": "Analysis of Speeches in Indian Parliamentary Debates" }, { "answers": [ "" ], "context": "Many linguists around the globe are concentrating on creation of parliamentary datasets. BIBREF1 gives an overview of the parliamentary records and corpora from countries with a focus on their availability through Clarin infrastructure. A dataset of Japanese Local Assembly minutes was created and analyzed for statistical data such as number of speakers, characters and words BIBREF2 . 
BIBREF3 created a highly multilingual parallel corpus of the European Parliament and demonstrated that it is useful for statistical machine translation. Parliamentary debates are full of arguments. Ruling party members refute the claims made by opposition party members and vice versa. Members provide strong arguments for supporting their claims or refuting others' claims. Analyzing argumentation from a computational linguistics point of view has led very recently to a new field called argumentation mining BIBREF4 . One can perform argument mining on these debates and analyze the results. BIBREF5 worked on detecting perspectives in UK political debates using a Bayesian modelling approach. BIBREF6 worked on claim detection from UK political debates using both linguistic features from text and features from speech.", "id": 1685, "question": "What classification models were used?", "title": "Analysis of Speeches in Indian Parliamentary Debates" }, { "answers": [ "" ], "context": "Our dataset consists of synopses of debates in the lower house of the Indian Parliament (Lok Sabha). The dataset consists of :", "id": 1686, "question": "Do any speeches not fall in these categories?", "title": "Analysis of Speeches in Indian Parliamentary Debates" }, { "answers": [ "They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens." ], "context": "The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potential of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text?", "id": 1687, "question": "What is different in BERT-gen from standard BERT?", "title": "BERT Can See Out of the Box: On the Cross-modal Transferability of Text Representations" }, { "answers": [ "The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards." ], "context": "Learning unsupervised textual representations that can be applied to downstream tasks is a widely investigated topic in the literature. Text representations have been learned at different granularities: words with Word2vec BIBREF20, sentences with SkipThought BIBREF21, paragraphs with ParagraphVector BIBREF22 and contextualized word vectors with ELMo BIBREF23. Other methods leverage a transfer-learning approach by fine-tuning all parameters of a pre-trained model on a target task, a paradigm which has become mainstream since the introduction of BERT BIBREF0. BERT alleviates the problem of the uni-directionality of most language models (i.e. where the training objective aims at predicting the next word) by proposing a new objective called Masked Language Model (MLM). Under MLM, some randomly selected words are masked, and the training objective is to predict them.
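As a toy illustration of the MLM objective just described (masking 15% of tokens, BERT's rate; BERT's full scheme also leaves some selections unchanged or swaps in random tokens, which this sketch omits):

```python
# Toy MLM masking: select ~15% of tokens and record them as prediction targets.
import random

def mask_tokens(tokens, mask_token="[MASK]", rate=0.15):
    masked, targets = list(tokens), {}
    for i in range(len(tokens)):
        if random.random() < rate:
            targets[i] = tokens[i]   # label the model must recover
            masked[i] = mask_token
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
```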
", "id": 1688, "question": "How are multimodal representations combined?", "title": "BERT Can See Out of the Box: On the Cross-modal Transferability of Text Representations" }, { "answers": [ "Answer with content missing: (whole introduction) However, recent\nstudies observe the limits of ROUGE and find in\nsome cases, it fails to reach consensus with human.\njudgment (Paulus et al., 2017; Schluter, 2017)." ], "context": "In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider whether the sentences cover the semantics of the facet, regardless of their length. The reason is that we believe extractive summarization should focus on information coverage; once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$.", "id": 1689, "question": "What is the problem with existing metrics that they are trying to address?", "title": "Facet-Aware Evaluation for Extractive Text Summarization" }, { "answers": [ "" ], "context": "By utilizing the FAMs, we revisit extractive methods to see how well they perform on facet coverage. Specifically, we compare Lead-3, Refresh BIBREF3, FastRL(E) (E for extractive only) BIBREF0, UnifiedSum(E) BIBREF1, NeuSum BIBREF4, and BanditSum BIBREF5 using both ROUGE and FAMs. As these methods are facet-agnostic (i.e., their outputs are not organized by facets but are flat extract sets), we consider one facet covered as long as one of its support groups is extracted, and measure the Facet-Aware Recall ($\textbf {FAR} = \frac{\textrm {\#covered}}{R}$). For a fair comparison, each method extracts three sentences, since extracting all would result in a perfect FAR.", "id": 1690, "question": "How do they evaluate their proposed metric?", "title": "Facet-Aware Evaluation for Extractive Text Summarization" }, { "answers": [ "" ], "context": "Although the FAMs only need to be annotated once, we investigate whether such human efforts can be further reduced by evaluating approximate approaches that generate extractive labels. Approximate approaches typically transform one abstractive summary to extractive labels heuristically using ROUGE. Previously one could only estimate the quality of these labels indirectly, by evaluating the extractive models trained using such labels, i.e., comparing the extracted and reference summaries (also approximately, via ROUGE). Now that the FAMs serve as ground-truth extractive labels, we can evaluate exactly how accurate each approach is.
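A hedged sketch of such a heuristic: greedily pick the document sentences that most improve similarity to the abstractive summary. A unigram-F1 proxy stands in for ROUGE here, since the approaches being compared differ in the exact variant they use.

```python
# Greedy construction of pseudo-extractive labels from an abstractive summary.
def unigram_f1(candidate: str, reference: str) -> float:
    c, r = set(candidate.split()), set(reference.split())
    overlap = len(c & r)
    if not overlap:
        return 0.0
    prec, rec = overlap / len(c), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def greedy_labels(doc_sentences, summary, max_sents=3):
    selected, current = [], 0.0
    while len(selected) < max_sents:
        gains = [(unigram_f1(" ".join(doc_sentences[j] for j in selected + [i]),
                             summary) - current, i)
                 for i in range(len(doc_sentences)) if i not in selected]
        if not gains:
            break
        gain, best = max(gains)
        if gain <= 0:        # stop when no remaining sentence helps
            break
        selected.append(best)
        current += gain
    return sorted(selected)
```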
Since the approximate approaches do not have the notion of support group, we flatten all the support sentences in one FAM to a label set.", "id": 1691, "question": "What is a facet?", "title": "Facet-Aware Evaluation for Extractive Text Summarization" }, { "answers": [ "They perform t-SNE clustering to analyze discourse embeddings" ], "context": "Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level $n$-grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word $n$-grams and POS-tags do not improve, and can sometimes even hurt, performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.", "id": 1692, "question": "How are discourse embeddings analyzed?", "title": "Leveraging Discourse Information Effectively for Authorship Attribution" }, { "answers": [ "" ], "context": "Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).", "id": 1693, "question": "What was the previous state-of-the-art?", "title": "Leveraging Discourse Information Effectively for Authorship Attribution" }, { "answers": [ "They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer." ], "context": "Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.", "id": 1694, "question": "How are discourse features incorporated into the model?", "title": "Leveraging Discourse Information Effectively for Authorship Attribution" }, { "answers": [ "Entity grid with grammatical relations and RST discourse relations." ], "context": "We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 ).", "id": 1695, "question": "What discourse features are used?", "title": "Leveraging Discourse Information Effectively for Authorship Attribution" }, { "answers": [ "" ], "context": "Recent advancements in deep learning have intensified the long-standing interest in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementary properties of these systems.
Symbolic reasoning models offer interpretability, efficient generalisation from a small number of examples, and the ability to leverage knowledge provided by an expert. However, these systems are unable to handle ambiguous and noisy high-dimensional data such as sensory inputs BIBREF5 . On the other hand, representation learning models exhibit robustness to noise and ambiguity, can learn task-specific representations, and achieve state-of-the-art results on a wide variety of tasks BIBREF6 . However, being universal function approximators, these models require vast amounts of training data and are treated as non-interpretable black boxes.", "id": 1696, "question": "How are proof scores calculated?", "title": "Towards Neural Theorem Proving at Scale" }, { "answers": [ "A sequence of logical statements represented in a computational graph" ], "context": "In NTP, the neural network structure is built recursively, and its construction is defined in terms of modules similarly to dynamic neural module networks BIBREF19 . Each module, given a goal, a KB, and a current proof state as inputs, produces a list of new proof states, where the proof states are neural networks representing partial proof success scores.", "id": 1697, "question": "What are proof paths?", "title": "Towards Neural Theorem Proving at Scale" }, { "answers": [ "" ], "context": "There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statistical models.", "id": 1698, "question": "What is the size of the model?", "title": "Neural Word Segmentation with Rich Pretraining" }, { "answers": [ "Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily" ], "context": "Work on statistical word segmentation dates back to the 1990s BIBREF21 . State-of-the-art approaches include character sequence labeling models BIBREF22 using CRFs BIBREF23 , BIBREF24 and max-margin structured models leveraging word features BIBREF25 , BIBREF26 , BIBREF27 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF28 . Our work belongs to recent neural word segmentation.", "id": 1699, "question": "What external sources are used?", "title": "Neural Word Segmentation with Rich Pretraining" }, { "answers": [ "" ], "context": "Our segmentor works incrementally from left to right, as in the example shown in Table TABREF1 . At each step, the state consists of a sequence of fully recognized words, denoted as $W$ , a current partially recognized word $w$ , and a sequence of next incoming characters, denoted as $C$ , as shown in Figure FIGREF4 . Given an input sentence, $W$ and $w$ are initialized to the empty sequence and the null word, respectively, and $C$ contains all the input characters. At each step, a decision is made on the next character in $C$ , either appending it as a part of $w$ , or separating it as the beginning of a new word. The incremental process repeats until $C$ is empty and $w$ is null again. Formally, the process can be regarded as a state-transition process, where a state is a tuple $(W, w, C)$ , and the transition actions include Sep (separate) and App (append), as shown by the deduction system in Figure FIGREF7 .
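A minimal sketch of this transition process follows, with `choose_action` standing in for the neural scorer that picks between Sep and App (a toy interface, not the paper's model):

```python
# Sketch of the (W, w, C) transition process: W holds finished words, w the
# partial word, C the remaining characters.

def segment(sentence: str, choose_action) -> list:
    W, w, C = [], "", list(sentence)
    while C:
        c = C.pop(0)
        if w and choose_action(W, w, c) == "Sep":
            W.append(w)     # Sep: w is complete, c starts a new word
            w = c
        else:
            w += c          # App: c continues the current word
    if w:
        W.append(w)         # flush the final word
    return W
```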
", "id": 1700, "question": "What submodules does the model consist of?", "title": "Neural Word Segmentation with Rich Pretraining" }, { "answers": [ "" ], "context": "Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models, and it is hard to measure progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word-overlap-based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open-domain text generation tasks, including story generation and dialogue response generation, because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold-standard evaluation; however, it does not scale well, as it is generally expensive and time-consuming to conduct.", "id": 1701, "question": "How they add human prefference annotation to fine-tuning process?", "title": "Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models" }, { "answers": [ "" ], "context": "Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.", "id": 1702, "question": "What previous automated evalution approaches authors mention?", "title": "Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models" }, { "answers": [ "Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553" ], "context": "We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.", "id": 1703, "question": "How much better peformance is achieved in human evaluation when model is trained considering proposed metric?", "title": "Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models" }, { "answers": [ "" ], "context": "The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples.
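One way such a pair classifier can be instantiated (an illustrative architecture, not necessarily the paper's exact one) is to encode both samples and classify the concatenated representations into win/loss/tie:

```python
# Illustrative pairwise quality comparator; the encoder and the 3-way head
# are assumptions for the sketch.
import torch
import torch.nn as nn

class ComparativeEvaluator(nn.Module):
    def __init__(self, encoder, hidden=768):
        super().__init__()
        self.encoder = encoder                 # any text encoder -> (batch, hidden)
        self.head = nn.Linear(2 * hidden, 3)   # sample A better / B better / tie

    def forward(self, sample_a, sample_b):
        ha, hb = self.encoder(sample_a), self.encoder(sample_b)
        return self.head(torch.cat([ha, hb], dim=-1))
```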
The motivation for evaluating one sample by comparing it with another sample is drawn from the insight gained when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotators to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model in a pairwise manner and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.", "id": 1704, "question": "Do the authors suggest that proposed metric replace human evaluation on this task?", "title": "Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models" }, { "answers": [ "" ], "context": "Extractive reading comprehension BIBREF0 , BIBREF1 has attracted great attention from both research and industry in recent years. End-to-end neural models BIBREF2 , BIBREF3 , BIBREF4 have achieved remarkable performance on the task if answers are assumed to be in the given paragraph. Nonetheless, the current systems are still not good at recognizing when no answer is present in the context BIBREF5 . For unanswerable questions, the systems are supposed to abstain from answering rather than making unreliable guesses, which is an embodiment of language understanding ability.", "id": 1705, "question": "What is the training objective of their pair-to-sequence model?", "title": "Learning to Ask Unanswerable Questions for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Machine Reading Comprehension (MRC) Various large-scale datasets BIBREF0 , BIBREF1 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 have spurred rapid progress on machine reading comprehension in recent years. SQuAD BIBREF1 is an extractive benchmark whose questions and answer spans are annotated by humans. Neural reading comprehension systems BIBREF14 , BIBREF2 , BIBREF3 , BIBREF15 , BIBREF8 , BIBREF16 , BIBREF4 , BIBREF17 have outperformed humans on this task in terms of automatic metrics. The SQuAD 2.0 dataset BIBREF5 extends SQuAD with more than $50,000$ crowdsourced unanswerable questions. So far, neural reading comprehension models still fall behind humans on SQuAD 2.0. Abstaining from answering when no answer can be inferred from the given document does require more understanding than merely extracting an answer.", "id": 1706, "question": "How do they ensure the generated questions are unanswerable?", "title": "Learning to Ask Unanswerable Questions for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Given an answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ , we aim to generate an unanswerable question $\tilde{q}$ that fulfills certain requirements. First, it cannot be answered by paragraph $p$ . Second, it must be relevant to both the answerable question $q$ and the paragraph $p$ , which keeps the model from producing irrelevant questions.
Third, it should ask for something of the same type as answer $a$ .", "id": 1707, "question": "Does their approach require a dataset of unanswerable questions mapped to similar answerable questions?", "title": "Learning to Ask Unanswerable Questions for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Recent studies in the information extraction domain (and in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entity recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is predicting a word given its context, and in SG the task is predicting the context given a word.", "id": 1708, "question": "What conclusions are drawn from these experiments?", "title": "Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF" }, { "answers": [ "" ], "context": "At the time we were testing word embeddings for different applications, there were two main sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts; the process of creating the word embeddings is described in the article BIBREF10 , and the corpora used were the National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is the original FastText word embedding set, created for 157 languages (including Polish). Its authors used Wikipedia and Common Crawl as the linguistic data sources. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.", "id": 1709, "question": "What experiments are presented?", "title": "Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF" }, { "answers": [ "" ], "context": "The KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by the G4.19 Group. Due to the licences of the documents in this corpus, this resource is not publicly available.
Table TABREF8 contains the KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.", "id": 1710, "question": "What is specific about the specific embeddings?", "title": "Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF" }, { "answers": [ "" ], "context": "KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain for automatic acquisition and extraction of website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of a website, including all subsites (up to a user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires a list of root websites as input. The tools composing CorpoGrabber are adapted to Polish, but most subtasks are language-independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack; extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, and advertisements from HTML pages) BIBREF20 ; deduplication of plain text BIBREF20 ; removal of bad-quality documents utilising the Morphological Analysis Converter and Aggregator (MACA) BIBREF21 ; and document tagging using the Wrocław CRF Tagger (WCRFT) BIBREF22 . The last two steps are available only for Polish.", "id": 1711, "question": "What embedding algorithm is used to build the embeddings?", "title": "Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF" }, { "answers": [ "" ], "context": "We created new Polish word embedding models using the KGR10 corpus. We built 16 word embedding models using the implementation of the CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution, based on embeddings of the character n-grams composing each word, makes it possible to obtain FastText vector representations even for words that were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that using word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish would also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 .
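For illustration, subword-aware embeddings of this kind can be trained with the FastText library as follows; the corpus path and hyperparameters are placeholders, not the 16 exact KGR10 configurations:

```python
# Illustrative FastText training with subword n-grams (placeholder settings).
import fasttext

model = fasttext.train_unsupervised(
    "kgr10_plain_text.txt",   # hypothetical path to the preprocessed corpus
    model="skipgram",         # or "cbow"
    dim=100,
    minn=3, maxn=6,           # character n-gram range for subword vectors
)
vector = model.get_word_vector("przykład")  # defined even for unseen words
```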
We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with embedding ID prefix EC in Table TABREF13 .", "id": 1712, "question": "How was the KGR10 corpus created?", "title": "Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF" }, { "answers": [ "" ], "context": "Automatic speech recognition (ASR) technology has made dramatic progress and has currently been brought to practical levels of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages, which do not have large corpora like English and Japanese do. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.", "id": 1713, "question": "How big are improvements with multilingual ASR training vs single language training?", "title": "Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language" }, { "answers": [ "Transcribed data is available for duration of 38h 54m 38s for 8 speakers." ], "context": "This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language.", "id": 1714, "question": "How much transcribed data is available for for Ainu language?", "title": "Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language" }, { "answers": [ "" ], "context": "The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7 and used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not yet transcribed or fully studied.", "id": 1715, "question": "What is the difference between speaker-open and speaker-closed setting?", "title": "Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language" }, { "answers": [ "HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie" ], "context": "Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output.
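A hedged sketch of the retrieval-then-comprehension flow this implies, with paragraph-level retrieval narrowing the source before sentence-level evidence selection; all three components are hypothetical callables, not the authors' released modules:

```python
# Hedged sketch of hierarchical retrieve-then-read for machine reading at scale.
def machine_reading_at_scale(query, corpus,
                             retrieve_paragraphs, retrieve_sentences, read):
    paragraphs = retrieve_paragraphs(query, corpus, top_k=10)  # coarse filtering
    evidence = retrieve_sentences(query, paragraphs, top_k=5)  # fine-grained evidence
    return read(query, evidence)                               # answer / verdict
```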
Initiated by chen2017drqa, the task was termed as Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.", "id": 1716, "question": "What baseline approaches do they compare against?", "title": "Revealing the Importance of Semantic Retrieval for Machine Reading at Scale" }, { "answers": [ "" ], "context": "Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.", "id": 1717, "question": "How do they train the retrieval modules?", "title": "Revealing the Importance of Semantic Retrieval for Machine Reading at Scale" }, { "answers": [ "" ], "context": "In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.", "id": 1718, "question": "How do they model the neural retrieval modules?", "title": "Revealing the Importance of Semantic Retrieval for Machine Reading at Scale" }, { "answers": [ "" ], "context": "Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.", "id": 1719, "question": "Retrieval at what level performs better, sentence level or paragraph level?", "title": "Revealing the Importance of Semantic Retrieval for Machine Reading at Scale" }, { "answers": [ "" ], "context": "Understanding texts and being able to answer a question posed by a human is a long-standing goal in the artificial intelligence field. Given the rapid advancement of neural network-based models and the availability of large-scale datasets, such as SQuAD BIBREF0 and TriviaQA BIBREF1, researchers have begun to concentrate on building automatic question-answering (QA) systems. One example of such a system is called the machine-reading question-answering (MRQA) model, which provides answers to questions from given passages BIBREF2, BIBREF3, BIBREF4.", "id": 1720, "question": "How much better performance of proposed model compared to answer-selection models?", "title": "Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks" }, { "answers": [ "" ], "context": "Previous researchers have also investigated neural network-based models for MRQA. One line of inquiry employs an attention mechanism between tokens in the question and passage to compute the answer span from the given text BIBREF12, BIBREF3. 
As the task scope was extended from specific- to open-domain QA, several models have been proposed to select a relevant paragraph from the text to predict the answer span BIBREF13, BIBREF14. However, none of these methods have addressed reasoning over multiple sentences.", "id": 1721, "question": "How are some nodes initially connected based on text structure?", "title": "Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks" }, { "answers": [ "2" ], "context": "When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering.", "id": 1722, "question": "how many domains did they experiment with?", "title": "Katecheo: A Portable and Modular System for Multi-Topic Question Answering" }, { "answers": [ "" ], "context": "Katecheo is partially inspired by the work of BIBREF1 on DrQA. That previously developed method has two primary phases of question answering: document retrieval and reading comprehension. Together these functionalities enable open domain question answering. However, many dialog systems are not completely open domain. For example, developers might want to create a chatbot that has targeted conversations about restaurant reservations and movie times. It would be advantageous for such a chatbot to answer questions about food and entertainment, but the developers might not want to allow the conversation to stray into other topics.", "id": 1723, "question": "what pretrained models were used?", "title": "Katecheo: A Portable and Modular System for Multi-Topic Question Answering" }, { "answers": [ "" ], "context": "", "id": 1724, "question": "What domains are contained in the polarity classification dataset?", "title": "Transductive Learning with String Kernels for Cross-Domain Text Classification" }, { "answers": [ "8000" ], "context": "", "id": 1725, "question": "How long is the dataset?", "title": "Transductive Learning with String Kernels for Cross-Domain Text Classification" }, { "answers": [ "" ], "context": "Transfer learning (or domain adaptation) aims at building effective classifiers for a target domain when the only available labeled training data belongs to a different (source) domain. Domain adaptation techniques can be roughly divided into graph-based methods BIBREF1 , BIBREF29 , BIBREF9 , BIBREF30 , probabilistic models BIBREF3 , BIBREF4 , knowledge-based models BIBREF14 , BIBREF31 , BIBREF11 and joint optimization frameworks BIBREF12 . The transfer learning methods from the literature show promising results in a variety of real-world applications, such as image classification BIBREF12 , text classification BIBREF13 , BIBREF16 , BIBREF3 , polarity classification BIBREF1 , BIBREF29 , BIBREF4 , BIBREF6 , BIBREF30 and others BIBREF32 .", "id": 1726, "question": "What machine learning algorithms are used?", "title": "Transductive Learning with String Kernels for Cross-Domain Text Classification" }, { "answers": [ "String kernel is a technique that uses character n-grams to measure the similarity of strings" ], "context": "In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . 
String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.", "id": 1727, "question": "What is a string kernel?", "title": "Transductive Learning with String Kernels for Cross-Domain Text Classification" }, { "answers": [ "" ], "context": "Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as coordination, synchrony, convergence etc. While there are various aspects and levels of entrainment BIBREF0 , there is also a general agreement that entrainment is a sign of positive behavior towards the other speaker BIBREF1 , BIBREF2 , BIBREF3 . High degree of vocal entrainment has been associated with various interpersonal behavioral attributes, such as high empathy BIBREF4 , more agreement and less blame towards the partner and positive outcomes in couple therapy BIBREF5 , and high emotional bond BIBREF6 . A good understanding of entrainment provides insights to various interpersonal behaviors and facilitates the recognition and estimation of these behaviors in the realm of Behavioral Signal Processing BIBREF7 , BIBREF8 . 
Moreover, it also contributes to the modeling and development of `human-like' spoken dialog systems or conversational agents.", "id": 1728, "question": "Which dataset do they use to learn embeddings?", "title": "Towards an Unsupervised Entrainment Distance in Conversational Speech using Deep Neural Networks" }, { "answers": [ "They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating" ], "context": "We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher.", "id": 1729, "question": "How do they correlate NED with emotional bond levels?", "title": "Towards an Unsupervised Entrainment Distance in Conversational Speech using Deep Neural Networks" }, { "answers": [ "52.0%" ], "context": "Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather.", "id": 1730, "question": "What was their F1 score on the Bengali NER corpus?", "title": "Named Entity Recognition with Partially Annotated Training Data" }, { "answers": [ "" ], "context": "The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data.", "id": 1731, "question": "Which languages are evaluated?", "title": "Named Entity Recognition with Partially Annotated Training Data" }, { "answers": [ "" ], "context": "Automatic Speech Recognition (ASR) has traditionally used Hidden Markov Models (HMM), describing temporal variability, combined with Gaussian Mixture Models (GMM), computing emission probabilities from HMM states, to model and map acoustic features to phones. In recent years, the introduction of deep neural networks replacing GMM for acoustic modeling showed huge improvements compared to previous state-of-the-art systems BIBREF0, BIBREF1. However, building and training such systems can be complex and a lot of preprocessing steps are involved. Traditional ASR systems are also factorized in several modules, the acoustic model representing only one of them along with lexicon and language models.", "id": 1732, "question": "Which model have the smallest Character Error Rate and which have the smallest Word Error Rate?", "title": "End-to-End Speech Recognition: A review for the French Language" }, { "answers": [ "" ], "context": "The CTC BIBREF5 can be seen as a direct translation of conventional HMM-DNN ASR systems into lexicon-free systems. 
Thus, the CTC follows the general ASR formulation, training the model to maximize $P(Y|X)$, the probability distribution over all possible label sequences: $\hat{Y} = \arg \max _{Y \in \mathcal {A}^{*}} p(Y|X)$. Here, $X$ denotes the observations, $Y$ is a sequence of acoustic units of length $L$ such that $Y = \lbrace y_{l} \in \, \mathcal {A} | l = 1, ..., L\rbrace $, where $\mathcal {A}$ is an alphabet containing all distinct units. As in traditional HMM-DNN systems, the CTC model makes conditional independence assumptions between output predictions at different time steps given aligned inputs, and it uses the probabilistic chain rule to factorize the posterior distribution $p(Y|X)$ into three distributions (i.e. framewise posterior distribution, transition probability and prior distribution of units). However, unlike HMM-based models, the framewise posterior distribution is defined here as a framewise acoustic unit sequence $B$ with an additional blank label ${<}blank{>}$ such that $B = \lbrace b_{t} \in \, \mathcal {A}\, \cup \, {<}blank{>} | t = 1, ..., T\rbrace $: $p(Y|X) = \underbrace{\sum _{b=1}^{B} \prod _{t=1}^{T} p(b_{t} | b_{t-1}, Y)\, p(b_{t}|X)}_{p_{ctc}(Y|X)}\, p(Y)$", "id": 1733, "question": "What will be in focus for future work?", "title": "End-to-End Speech Recognition: A review for the French Language" }, { "answers": [ "" ], "context": "As opposed to CTC, the attention-based approach BIBREF7, BIBREF8 does not assume conditional independence between predictions at different time steps and does not marginalize over all alignments. Thus the posterior distribution $p(Y|X)$ is directly computed by picking a soft alignment between each output step and every input step as follows: $p_{att}(Y|X) = \prod _{l=1}^{U} p(y_{l} | y_{1}, ..., y_{l-1}, X)$ Here $p(y_{l}|y_{1},...,y_{l-1}, X)$ – our attention-based objective function – is obtained according to a probability distribution, typically a softmax, applied to the linear projection of the output of a recurrent neural network (or long short-term memory network), called the decoder, as follows: $p(y_{l}|y_{1},...,y_{l-1}, X) = \mathrm {softmax}(\mathrm {lin}(\mathrm {RNN}(\cdot )))$ The decoder output is conditioned on the previous output $y_{l-1}$, a hidden vector $d_{l-1}$ and a context vector $c_{l}$. Here $d_{l-1}$ denotes the high-level representation (i.e. hidden states) of the decoder at step $l-1$, encoding the target input, and $c_{l}$ designates the context – or symbol-wise vector in our case – for decoding step $l$, which is computed as the sum of the complete high-level representation $h$ of another recurrent neural network, encoding the source input $X$, weighted by the attention weights $\alpha $: $c_{l} = \sum _{s=1}^{S} \alpha _{l,s}\, h_{s}$, with $\alpha _{l,s} = \frac{\exp (e_{l,s})}{\sum _{s'=1}^{S} \exp (e_{l,s'})}$, where $e_{l,s}$, also referred to as energy, measures how well the inputs around position $s$ and the output at position $l$ match, given the decoder states at decoding step $l-1$ and $h$ the encoder states for input $X$. In the following, we report the standard content-based mechanism and its location-aware variant which takes into account the alignment produced at the previous step using convolutional features: $e_{l,s} = \lbrace $ content-based:", "id": 1734, "question": "Which acoustic units are more suited to model the French language?", "title": "End-to-End Speech Recognition: A review for the French Language" }, { "answers": [ "" ], "context": "The RNN transducer architecture was first introduced by Graves et al.
BIBREF9 to address the main limitation of the proposed CTC network: it cannot model interdependencies as it assumes conditional independence between predictions at different time steps.", "id": 1735, "question": "What are the existing end-to-end ASR approaches for the French language?", "title": "End-to-End Speech Recognition: A review for the French Language" }, { "answers": [ "" ], "context": "Neural Machine Translation (NMT) has achieved great success in the last few years BIBREF0, BIBREF1, BIBREF2. The popular Transformer BIBREF2 model, which outperforms previous RNN/CNN based translation models BIBREF0, BIBREF1, is based on multi-layer self-attention networks and can be parallelized effectively.", "id": 1736, "question": "How much is decoding speed increased by increasing encoder and decreasing decoder depth?", "title": "Analyzing Word Translation of Transformer Layers" }, { "answers": [ "" ], "context": "Telemedicine refers to the practice of delivering patient care remotely, where doctors provide medical consultations to patients using HIPAA-compliant video-conferencing tools. As an important complement to traditional face-to-face medicine practiced physically in hospitals and clinics, telemedicine has a number of advantages. First, it increases access to care. For people living in medically under-served communities (e.g., rural areas) that face a shortage of clinicians, telemedicine enables them to receive faster and cheaper care compared with traveling over a long distance to visit a clinician. Second, it reduces healthcare costs. In a study by Jefferson Health, it is shown that diverting patients from emergency departments with telemedicine can save more than $1,500 per visit. Third, telemedicine can improve quality of care. The study in BIBREF0 shows that telemedicine patients score lower for depression, anxiety, and stress, and have 38% fewer hospital admissions. Other advantages include improving patient engagement and satisfaction, improving provider satisfaction, etc. Please refer to BIBREF1 for a more comprehensive review.", "id": 1737, "question": "Did they experiment on this dataset?", "title": "MedDialog: A Large-scale Medical Dialogue Dataset" }, { "answers": [ "" ], "context": "The MedDialog dataset contains 1,145,231 consultations between patients and doctors. The total number of utterances is 3,959,333: 2,179,008 from doctors and 1,780,325 from patients. Each consultation consists of three parts: (1) description of the patient's medical condition and history; (2) conversation between patient and doctor; (3) (optional) diagnosis and treatment suggestions given by the doctor. In the description of the patient's medical condition and history, the following fields are included: present disease, detailed description of present disease, what help is needed from the doctor, how long the disease has lasted, medications, allergies, and past disease. Figure FIGREF3 shows an exemplar consultation. In the conversation, there are cases where multiple consecutive utterances are from the same person (either doctor or patient) and these utterances were posted at different time points. If we combine consecutive utterances from the same person into a single one, there are 3,209,660 utterances: 1,981,844 from doctors and 1,227,816 from patients.
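As an aside on the utterance statistics above: the combined counts can plausibly be reproduced by collapsing runs of consecutive turns from the same speaker, as in the minimal sketch below. The (speaker, text) pair representation is an assumption of this illustration, not the dataset's actual schema.

def merge_consecutive(utterances):
    # utterances: list of (speaker, text) pairs in dialogue order
    merged = []
    for speaker, text in utterances:
        if merged and merged[-1][0] == speaker:
            # same speaker as the previous turn: append to the open turn
            merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    return merged

dialogue = [("doctor", "Hello."), ("doctor", "What brings you in?"), ("patient", "A cough.")]
assert merge_consecutive(dialogue) == [
    ("doctor", "Hello. What brings you in?"),
    ("patient", "A cough."),
]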
The data is crawled from haodf.com, which is an online platform of healthcare services, including medical consultation, scheduling appointments with doctors, etc.", "id": 1738, "question": "What language are the conversations in?", "title": "MedDialog: A Large-scale Medical Dialogue Dataset" }, { "answers": [ "" ], "context": "Large number of conversations and utterances. To our best knowledge, MedDialog is the largest medical dialogue dataset. It has about 1.1 million conversations and 4 million utterances.", "id": 1739, "question": "How did they annotate the dataset?", "title": "MedDialog: A Large-scale Medical Dialogue Dataset" }, { "answers": [ "" ], "context": "The language is Chinese, which is not easy for non-Chinese-speaking researchers to work on.", "id": 1740, "question": "What annotations are in the dataset?", "title": "MedDialog: A Large-scale Medical Dialogue Dataset" }, { "answers": [ "300,000 sentences with 1.5 million single-quiz questions" ], "context": "With the advent of Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging this kind of data.", "id": 1741, "question": "What is the size of the dataset?", "title": "Learning to Automatically Generate Fill-In-The-Blank Quizzes" }, { "answers": [ "" ], "context": "The problem of fill-in-the-blank question generation has been studied in the past by several authors. Perhaps the earliest approach is by BIBREF1, who proposed a cloze question generation system which focuses on distractor generation using search engines to automatically measure English proficiency. In the same research line, we also find the work of BIBREF2, BIBREF3 and BIBREF4. In this context, the work of BIBREF10 probably represents the first effort in applying machine learning techniques to multiple-choice cloze question generation. The authors propose an approach that uses conditional random fields BIBREF11 based on hand-crafted features such as word POS tags.", "id": 1742, "question": "What language platform does the data come from?", "title": "Learning to Automatically Generate Fill-In-The-Blank Quizzes" }, { "answers": [ "" ], "context": "We formalize the problem of automatic fill-in-the-blank quiz generation using two different perspectives. These are designed to match specific machine learning schemes that are well-defined in the literature. In both cases, we consider a training corpus of INLINEFORM0 pairs INLINEFORM1 where INLINEFORM2 is a sequence of INLINEFORM3 tokens and INLINEFORM4 is an index that indicates the position that should be blanked inside INLINEFORM5.", "id": 1743, "question": "Which two schemes are used?", "title": "Learning to Automatically Generate Fill-In-The-Blank Quizzes" }, { "answers": [ "Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)" ], "context": "Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to traditional statistical machine translation (SMT) models BIBREF0, BIBREF1, it has become very popular in recent years BIBREF2, BIBREF3, BIBREF4.
With the great success of NMT, new challenges arise which have already been addressed with reasonable success in traditional SMT. One of the challenges is domain adaptation. In a typical domain adaptation setup such as ours, we have a large amount of out-of-domain bilingual training data for which we already have a trained neural network model (baseline). Given only an additional small amount of in-domain data, the challenge is to improve the translation performance on the new domain without deteriorating the performance on the general domain significantly. One approach one might take is to combine the in-domain data with the out-of-domain data and train the NMT model from scratch. However, there are two main problems with that approach. First, training a neural machine translation system on large data sets can take several weeks, and training a new model based on the combined training data is time-consuming. Second, since the in-domain data is relatively small, the out-of-domain data will tend to dominate the training data and hence the learned model will not perform as well on the in-domain test data. In this paper, we reuse the already trained out-of-domain system and continue training only on the small portion of in-domain data, similar to BIBREF5. While doing this, we adapt the parameters of the neural network model to the new domain. Instead of relying completely on the adapted (further-trained) model and overfitting on the in-domain data, we decode using an ensemble of the baseline model and the adapted model, which tends to perform well on the in-domain data without deteriorating the performance on the baseline general domain.", "id": 1744, "question": "How many examples do they have in the target domain?", "title": "Fast Domain Adaptation for Neural Machine Translation" }, { "answers": [ "" ], "context": "This chapter describes a series of tools for developing and testing type-logical grammars. The Grail family of theorem provers has been designed to work with a variety of modern type-logical frameworks, including multimodal type-logical grammars BIBREF0, NL $_{cl}$ BIBREF1, the Displacement calculus BIBREF2 and hybrid type-logical grammars BIBREF3.", "id": 1745, "question": "Does Grail accept Prolog inputs?", "title": "The Grail theorem prover: Type theory for syntax and semantics" }, { "answers": [ "" ], "context": "Type-logical grammars are a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). Though Lambek built on the work of BIBREF5, BIBREF6 and others, Lambek's main innovation was to cast the calculus as a logic, giving a sequent calculus and showing decidability by means of cut elimination. This combination of linguistic and computational applications has proved very influential.", "id": 1746, "question": "What formalism does Grail use?", "title": "The Grail theorem prover: Type theory for syntax and semantics" }, { "answers": [ "" ], "context": "Question answering (QA) is the task of automatically producing an answer to a question given a corresponding document. It not only provides humans with efficient access to vast amounts of information, but also acts as an important proxy task to assess machine literacy via reading comprehension.
Thanks to the recent release of several large-scale machine comprehension/QA datasets BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , the field has undergone significant advancement, with an array of neural models rapidly approaching human parity on some of these benchmarks BIBREF5 , BIBREF6 , BIBREF7 . However, previous models do not treat QA as a task of natural language generation (NLG), but of pointing to an answer span within a document.", "id": 1747, "question": "Which components of QA and QG models are shared during training?", "title": "A Joint Model for Question Answering and Question Generation" }, { "answers": [ "" ], "context": "Joint-learning on multiple related tasks has been explored previously BIBREF17 , BIBREF18 . In machine translation, for instance, BIBREF18 demonstrated that translation quality clearly improves over models trained with a single language pair when the attention mechanism in a neural translation model is shared and jointly trained on multiple language pairs.", "id": 1748, "question": "How much improvement does jointly learning QA and QG give, compared to only training QA?", "title": "A Joint Model for Question Answering and Question Generation" }, { "answers": [ "" ], "context": "Word embeddings have been used to improve the performance of many NLP tasks including language modelling BIBREF1 , machine translation BIBREF2 , and sentiment analysis BIBREF3 . The broad applicability of word embeddings to NLP implies that improvements to their quality will likely have widespread benefits for the field.", "id": 1749, "question": "Do they test their word embeddings on downstream tasks?", "title": "Word Embeddings via Tensor Factorization" }, { "answers": [ "" ], "context": "Some common word embeddings related to co-occurrence based matrix factorization include GloVe BIBREF7 , word2vec BIBREF9 , LexVec BIBREF10 , and NNSE BIBREF8 . In contrast, our work studies word embeddings given by factorization of tensors. An overview of tensor factorization methods is given in BIBREF11 .", "id": 1750, "question": "What are the main disadvantages of their proposed word embeddings?", "title": "Word Embeddings via Tensor Factorization" }, { "answers": [ "" ], "context": "Throughout this paper we will write scalars in lowercase italics INLINEFORM0 , vectors in lowercase bold letters INLINEFORM1 , matrices with uppercase bold letters INLINEFORM2 , and tensors (of order INLINEFORM3 ) with Euler script notation INLINEFORM4 , as is standard in the literature.", "id": 1751, "question": "What dimensions of word embeddings do they produce using factorization?", "title": "Word Embeddings via Tensor Factorization" }, { "answers": [ "" ], "context": "Pointwise mutual information (PMI) is a useful property in NLP that quantifies the likelihood that two words co-occur BIBREF9 . 
It is defined as: INLINEFORM0", "id": 1752, "question": "On which dataset(s) do they compute their word embeddings?", "title": "Word Embeddings via Tensor Factorization" }, { "answers": [ "" ], "context": "Just as the rank- INLINEFORM0 matrix decomposition is defined to be the product of two factor matrices ( INLINEFORM1 ), the canonical rank- INLINEFORM2 tensor decomposition for a third order tensor is defined to be the product of three factor matrices BIBREF11 : DISPLAYFORM0", "id": 1753, "question": "Do they measure computation time of their factorizations compared to other word embeddings?", "title": "Word Embeddings via Tensor Factorization" }, { "answers": [ "" ], "context": "Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion—synthesis paradigm BIBREF1. First, a high-quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 is utilized to extract different acoustic features, such as spectral features and fundamental frequency (F0). These features are converted separately, and a waveform synthesizer finally generates the converted waveform using the converted features. Past VC studies have focused on the conversion of spectral features while only applying a simple linear transformation to F0. In addition, the conversion is usually performed frame-by-frame, i.e., the converted speech and the source speech are always of the same length. To summarize, the conversion of prosody, including F0 and duration, is overly simplified in the current VC literature.", "id": 1754, "question": "What datasets are experimented with?", "title": "Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining" }, { "answers": [ "a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model" ], "context": "Seq2seq models are used to find a mapping between a source feature sequence $\vec{x}_{1:n}=(\vec{x}_1, \cdots , \vec{x}_n)$ and a target feature sequence $\vec{y}_{1:m}=(\vec{y}_1, \cdots , \vec{y}_m)$ which do not necessarily have to be of the same length, i.e., $n \ne m$. Most seq2seq models have an encoder—decoder structure BIBREF4, where advanced ones are equipped with an attention mechanism BIBREF5, BIBREF6. First, an encoder ($\text{Enc}$) maps $\vec{x}_{1:n}$ into a sequence of hidden representations $\vec{h}_{1:n}=(\vec{h}_1, \cdots , \vec{h}_n)$. The decoding of the output sequence is autoregressive, which means that the previously generated symbols are considered an additional input at each decoding time step. To decode an output feature $\vec{y}_t$, a weighted sum of $\vec{h}_{1:n}$ first forms a context vector $\vec{c}_t$, where the weight vector is represented by a calculated attention probability vector $\vec{a}_t=(a^{(1)}_t, \cdots , a^{(n)}_t)$. Each attention probability $a^{(k)}_t$ can be thought of as the importance of the hidden representation $\vec{h}_k$ at the $t$th time step. Then the decoder ($\text{Dec}$) uses the context vector $\vec{c}_t$ and the previously generated features $\vec{y}_{1:t-1}=(\vec{y}_1, \cdots , \vec{y}_{t-1})$ to decode $\vec{y}_t$. Note that both the calculation of the attention vector and the decoding process take the previous hidden state of the decoder $\vec{q}_{t-1}$ as the input.
The above-mentioned procedure can be formulated as follows: $\vec{h}_{1:n} = \text{Enc}(\vec{x}_{1:n}),$", "id": 1755, "question": "What is the baseline model?", "title": "Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining" }, { "answers": [ "" ], "context": "Social media are increasingly being used in the scientific community as a key source of data to help understand diverse natural and social phenomena, and this has prompted the development of a wide range of computational data mining tools that can extract knowledge from social media for both post-hoc and real-time analysis. Thanks to the availability of a public API that enables the cost-free collection of a significant amount of data, Twitter has become a leading data source for such studies BIBREF0. Having Twitter as a new kind of data source, researchers have looked into the development of tools for real-time trend analytics BIBREF1, BIBREF2 or early detection of newsworthy events BIBREF3, as well as into analytical approaches for understanding the sentiment expressed by users towards a target BIBREF4, BIBREF5, BIBREF6, or public opinion on a specific topic BIBREF7. However, Twitter data lacks reliable demographic details that would enable a representative sample of users to be collected and/or a focus on a specific user subgroup BIBREF8, or other specific applications such as helping establish the trustworthiness of information posted BIBREF9. Automated inference of social media demographics would be useful, among others, to broaden demographically aware social media analyses that are conducted through surveys BIBREF10. One of the missing demographic details is a user's country of origin, which we study here. The only option then for the researcher is to try to infer such demographic characteristics before attempting the intended analysis.", "id": 1756, "question": "What model do they train?", "title": "Towards Real-Time, Country-Level Location Classification of Worldwide Tweets" }, { "answers": [ "" ], "context": "A growing body of research deals with the automated inference of demographic details of Twitter users BIBREF8. Researchers have attempted to infer attributes of Twitter users such as age BIBREF13, BIBREF14, gender BIBREF15, BIBREF16, BIBREF17, BIBREF14, political orientation BIBREF18, BIBREF19, BIBREF20, BIBREF21 or a range of social identities BIBREF22. Digging more deeply into the demographics of Twitter users, other researchers have attempted to infer socioeconomic demographics such as occupational class BIBREF23, income BIBREF24 and socioeconomic status BIBREF25. Work by Huang et al. BIBREF26 has also tried to infer the nationality of users; this work is different from that which we report here in that the country the tweets were posted from was already known.", "id": 1757, "question": "What are the eight features mentioned?", "title": "Towards Real-Time, Country-Level Location Classification of Worldwide Tweets" }, { "answers": [ "" ], "context": "For training our classifier, we rely on the most widely adopted approach for the collection of a Twitter dataset with tweets categorised by location. This involves using the Twitter API endpoint that returns a stream of geolocated tweets posted from within one or more specified geographic bounding boxes. In our study, we set this bounding box to be the whole world (i.e., [-180,-90,180,90]) in order to retrieve tweets worldwide.
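For illustration, the coordinate test implied by that worldwide box is just an interval check in each dimension; the sketch below assumes the (west, south, east, north) corner ordering used by the Twitter streaming API's locations parameter, and in_bbox is a hypothetical helper rather than code from the paper.

def in_bbox(lon, lat, bbox=(-180.0, -90.0, 180.0, 90.0)):
    # bbox is (west, south, east, north); the default covers the whole world
    west, south, east, north = bbox
    return west <= lon <= east and south <= lat <= north

assert in_bbox(139.69, 35.69)      # Tokyo falls inside the worldwide box
assert not in_bbox(139.69, 95.0)   # an invalid latitude is rejected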
This way, we collected streams of global geolocated tweets for two different week-long periods: 4-11 October 2014 (TC2014) and 22-28 October 2015 (TC2015). This led to the collection of 31.7 million tweets in 2014 and 28.8 million tweets in 2015, which we adapt for our purposes as explained below.", "id": 1758, "question": "How many languages are considered in the experiments?", "title": "Towards Real-Time, Country-Level Location Classification of Worldwide Tweets" }, { "answers": [ "" ], "context": "This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Mitchell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.", "id": 1759, "question": "How did they evaluate the system?", "title": "Open Information Extraction from Question-Answer Pairs" }, { "answers": [ "AmazonQA and ConciergeQA datasets" ], "context": "The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper.", "id": 1760, "question": "Where did they get training data?", "title": "Open Information Extraction from Question-Answer Pairs" }, { "answers": [ "Multi-Encoder, Constrained-Decoder model" ], "context": "Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection \"The First Page\"). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section \"Length of Submission\". Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.", "id": 1761, "question": "What extraction model did they use?", "title": "Open Information Extraction from Question-Answer Pairs" }, { "answers": [ "ConciergeQA and AmazonQA" ], "context": "The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page.
The camera-ready copy should not contain a ruler. (LaTeX users may uncomment the \aclfinalcopy command in the document preamble.)", "id": 1762, "question": "Which datasets did they experiment on?", "title": "Open Information Extraction from Question-Answer Pairs" }, { "answers": [ "" ], "context": "NAACL-HLT provides this description in LaTeX 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the LaTeX 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings.", "id": 1763, "question": "What types of facts can be extracted from QA pairs that can't be extracted from general text?", "title": "Open Information Extraction from Question-Answer Pairs" }, { "answers": [ "by adding extra supervision to generate the slots that will be present in the response" ], "context": "A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined semantic frame. State tracking is a critical component that explicitly models the input semantic frame and the dialogue history for producing KB queries. The semantic frame and the corresponding belief state are defined in terms of informable slot values and requestable slots. Informable slot values capture information provided by the user so far, e.g., {price=cheap, food=italian} indicating the user wants a cheap Italian restaurant at this stage. Requestable slots capture the information requested by the user, e.g., {address, phone} means the user wants to know the address and phone number of a restaurant. The dialogue policy model decides on the system action, which is then realized by a language generation component.", "id": 1764, "question": "How do slot binary classifiers improve performance?", "title": "Flexibly-Structured Model for Task-Oriented Dialogues" }, { "answers": [ "NDM, LIDM, KVRN, and TSCP/RL" ], "context": "Our work is related to end-to-end task-oriented dialogue systems in general BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF14, BIBREF7, BIBREF8 and those that extend the Seq2Seq BIBREF15 architecture in particular BIBREF13, BIBREF16, BIBREF17. Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, BIBREF13, BIBREF18, BIBREF17 adopt a copy mechanism that allows copying information retrieved from the KB to the generated response. BIBREF16 adopt Memory Networks BIBREF19 to memorize the retrieved KB entities and words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.", "id": 1765, "question": "What baselines have been used in this work?", "title": "Flexibly-Structured Model for Task-Oriented Dialogues" }, { "answers": [ "" ], "context": "Multi-task learning (MTL) in deep neural networks is typically a result of parameter sharing between two networks (of usually the same dimensions) BIBREF0.
If you have two three-layered recurrent neural networks, both with an embedding inner layer and each recurrent layer feeding the task-specific classifier function through a feed-forward neural network, we have 19 pairs of layers that could share parameters. With the option of having private spaces, this gives us $5^{19}=$ 19,073,486,328,125 possible MTL architectures. If we additionally consider soft sharing of parameters, the number of possible architectures becomes infinite. It is obviously not feasible to search this space. Neural architecture search (NAS) BIBREF1 typically requires learning from a large pool of experiments with different architectures. Searching for multi-task architectures via reinforcement learning BIBREF2 or evolutionary approaches BIBREF3 can therefore be quite expensive. In this paper, we jointly learn a latent multi-task architecture and task-specific models, paying a minimal computational cost over single-task learning and standard multi-task learning (5-7% training time). We refer to this problem as multi-task architecture learning. In contrast to architecture search, the overall meta-architecture is fixed and the model learns the optimal latent connections and pathways for each task. Recently, a few authors have considered multi-task architecture learning BIBREF4, BIBREF5, but these papers only address a subspace of the possible architectures typically considered in neural multi-task learning, while other approaches at most consider a couple of architectures for sharing BIBREF6, BIBREF7, BIBREF8. In contrast, we introduce a framework that unifies previous approaches by introducing trainable parameters for all the components that differentiate multi-task learning approaches along the above dimensions.", "id": 1766, "question": "Do sluice networks outperform non-transfer learning approaches?", "title": "Latent Multi-task Architecture Learning" }, { "answers": [ "" ], "context": "We introduce a meta-architecture for multi-task architecture learning, which we refer to as a sluice network, sketched in Figure 1 for the case of two tasks. The network learns to share parameters between $M$ neural networks—in our case, two deep recurrent neural networks (RNNs) BIBREF12. The network can be seen as an end-to-end differentiable union of a set of sharing architectures with parameters controlling the sharing. By learning the weights of those sharing parameters (sluices) jointly with the rest of the model, we arrive at a task-specific MTL architecture over the course of training.", "id": 1767, "question": "What is hard parameter sharing?", "title": "Latent Multi-task Architecture Learning" }, { "answers": [ "" ], "context": "List of Acronyms", "id": 1768, "question": "How successful are they at matching names of authors in Japanese and English?", "title": "Integration of Japanese Papers Into the DBLP Data Set" }, { "answers": [ "" ], "context": "The idea for this work was born when the author was searching for a possibility to combine computer science with his minor subject Japan studies in his diploma thesis. After dismissing some ideas leaning towards Named Entity Recognition and computational linguistics, the author chose “Integration of Japanese Papers Into the DBLP Data Set” as his subject. The DBLP is a well-known and useful tool for finding papers published in the context of computer science. The challenge of dealing with such a huge database and the problems that occur when processing Japanese input data were the reason why this idea was chosen.
The hope is that, in the future, many Japanese papers can be added by the responsible people of the DBLP project.", "id": 1769, "question": "Is their approach applicable to papers outside computer science?", "title": "Integration of Japanese Papers Into the DBLP Data Set" }, { "answers": [ "" ], "context": "Computer scientists are likely to use the DBLP to find information about certain papers or authors. Therefore, the DBLP is supposed to provide information about as many papers as possible. For example, one could be interested in the paper “Analysis of an Entry Term Set of a Civil Engineering Dictionary and Its Application to Information Retrieval Systems” by Akiko Aizawa et al. (2005) but DBLP does not include it yet. Japanese scientists might look for the original (Japanese) title “土木関連用語辞典の見出し語の分析と検索システムにおける活用に関する考察” or use Aizawa's name in Japanese characters (相澤彰子) for a search in DBLP. The DBLP contains the author “Akiko Aizawa” but does not contain this specific paper or the author's original name in Japanese characters. Our work is to implement a tool which addresses these questions, supports the DBLP team in the integration of Japanese papers and reveals the difficulties of realizing the integration.", "id": 1770, "question": "Do they translate metadata from Japanese papers to English?", "title": "Integration of Japanese Papers Into the DBLP Data Set" }, { "answers": [ "Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System." ], "context": "There are several commercial menu-based ASR systems available around the world for a significant number of languages and, interestingly, speech solutions based on these ASR systems are being used with good success in the Western part of the globe BIBREF0, BIBREF1, BIBREF2, BIBREF3. Typically, a menu-based ASR system restricts the user to speaking from a pre-defined closed set of words for enabling a transaction. Before commercial deployment of a speech solution, it is imperative to have a quantitative measure of the performance of the speech solution, which is primarily based on the speech recognition accuracy of the speech engine used. Generally, the recognition performance of any speech recognition based solution is quantitatively evaluated by putting it to actual use by the people who are the intended users and then analyzing the logs to identify successful and unsuccessful transactions. This evaluation is then used to identify any further improvements in the speech recognition based solution to better the overall transaction completion rates. This process of evaluation is both time-consuming and expensive. For evaluation, one needs to identify a set of users and also identify the set of actual usage situations and perform the test. It is also important that the set of users are able to use the system with ease, meaning that even in the test conditions the performance of the system should be good; while this can not usually be guaranteed, this aspect of keeping the user experience good makes it necessary to employ a Wizard of Oz (WoZ) approach. Typically this requires a human agent in the loop during the actual speech transaction, where the human agent corrects any mis-recognition by actually listening to the conversation between the human user and the machine without the user knowing that there is a human agent in the loop. The use of WoZ is another expense in testing a speech solution.
All this makes testing a speech solution an expensive and time-consuming procedure.", "id": 1771, "question": "what bottlenecks were identified?", "title": "Evaluating the Performance of a Speech Recognition based System" }, { "answers": [ "" ], "context": "In recent years, neural network based models have become the workhorse of natural language understanding and generation. They empower industrial systems in machine translation BIBREF0 and text generation BIBREF1, also showing state-of-the-art performance on numerous benchmarks including Recognizing Textual Entailment (RTE) BIBREF2, Visual Question Answering (VQA) BIBREF3, and Reading Comprehension BIBREF4. Despite these successes, a growing body of literature suggests that these approaches do not generalize outside of the specific distributions on which they are trained, something that is necessary for a language understanding system to be widely deployed in the real world. Investigations on the three aforementioned tasks have shown that neural models easily latch onto statistical regularities which are omnipresent in existing datasets BIBREF5, BIBREF6, BIBREF7 and extremely hard to avoid in large-scale data collection. Having learned such dataset-specific solutions, neural networks fail to make correct predictions for examples that are even slightly out of domain, yet are trivial for humans. These findings have been corroborated by a recent investigation on a synthetic instruction-following task BIBREF8, in which seq2seq models BIBREF9, BIBREF10 have shown little systematicity BIBREF11 in how they generalize, that is, they do not learn general rules on how to compose words and fail spectacularly when, for example, asked to interpret “jump twice” after training on “jump”, “run twice” and “walk twice”.", "id": 1772, "question": "What is grounded language understanding?", "title": "Systematic Generalization: What Is Required and Can It Be Learned?" }, { "answers": [ "" ], "context": "A hashtag is a form of metadata labeling used in various social networks to help the users to navigate through the content. For example, one of the most popular hashtags on Instagram is \"#photooftheday\" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source of features for subsequent opinion mining and social network analysis. Basically, hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles another problem, namely word segmentation.", "id": 1773, "question": "Does the paper report the performance on the task of a Neural Machine Translation model?", "title": "Char-RNN and Active Learning for Hashtag Segmentation" }, { "answers": [ "" ], "context": "We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $, (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*,
\ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$", "id": 1774, "question": "What are the predefined morpho-syntactic patterns used to filter the training data?", "title": "Char-RNN and Active Learning for Hashtag Segmentation" }, { "answers": [ "" ], "context": "In this section, we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN.", "id": 1775, "question": "Is the RNN model evaluated against any baseline?", "title": "Char-RNN and Active Learning for Hashtag Segmentation" }, { "answers": [ "" ], "context": "To our knowledge, there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network), which were segmented manually.", "id": 1776, "question": "Which languages are used in the paper?", "title": "Char-RNN and Active Learning for Hashtag Segmentation" }, { "answers": [ "" ], "context": "In this section, we give a brief overview of Switching Dynamical Systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models.", "id": 1777, "question": "What metrics are used for evaluation?", "title": "Generating Narrative Text in a Switching Dynamical System" }, { "answers": [ "" ], "context": "The specifics of the narrative (characters, setting, etc.) will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics\") is often shared. Let's say that, as is often done, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time that fits with the above observation is as a Linear Dynamical System:", "id": 1778, "question": "What baselines are used?", "title": "Generating Narrative Text in a Switching Dynamical System" }, { "answers": [ "" ], "context": "Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing).", "id": 1779, "question": "Which model is used to capture the implicit structure?", "title": "Learning Explicit and Implicit Structures for Targeted Sentiment Analysis" }, { "answers": [ "" ], "context": "Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets.", "id": 1780, "question": "How is the robustness of the model evaluated?", "title": "Learning Explicit and Implicit Structures for Targeted Sentiment Analysis" }, { "answers": [ "" ], "context": "Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space.
To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\\mathbf {B}_p$, $\\mathbf {A}_p$, and $\\mathbf {E}_{\\epsilon ,p}$, where $p \\in \\lbrace +, -, 0\\rbrace $ indicates the sentiment polarity and $\\epsilon \\in \\lbrace \\textit {B,M,E,S}\\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows.", "id": 1781, "question": "How is the effectiveness of the model evaluated?", "title": "Learning Explicit and Implicit Structures for Targeted Sentiment Analysis" }, { "answers": [ "" ], "context": "Humans spend countless hours extracting structured machine readable information from unstructured information in a multitude of domains. Promising to automate this, information extraction (IE) is one of the most sought-after industrial applications of natural language processing. However, despite substantial research efforts, in practice, many applications still rely on manual effort to extract the relevant information.", "id": 1782, "question": "Do they assume sentence-level supervision?", "title": "End-to-End Information Extraction without Token-Level Supervision" }, { "answers": [ "Proposed RCRN outperforms ablative baselines BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets." ], "context": "Recurrent neural networks (RNNs) live at the heart of many sequence modeling problems. In particular, the incorporation of gated additive recurrent connections is extremely powerful, leading to the pervasive adoption of models such as Gated Recurrent Units (GRU) BIBREF0 or Long Short-Term Memory (LSTM) BIBREF1 across many NLP applications BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In these models, the key idea is that the gating functions control information flow and compositionality over time, deciding how much information to read/write across time steps. This not only serves as a protection against vanishing/exploding gradients but also enables greater relative ease in modeling long-range dependencies.", "id": 1783, "question": "By how much do they outperform BiLSTMs in Sentiment Analysis?", "title": "Recurrently Controlled Recurrent Networks" }, { "answers": [ "" ], "context": "RNN variants such as LSTMs and GRUs are ubiquitous and indispensible building blocks in many NLP applications such as question answering BIBREF12 , BIBREF9 , machine translation BIBREF2 , entailment classification BIBREF13 and sentiment analysis BIBREF14 , BIBREF15 . In recent years, many RNN variants have been proposed, ranging from multi-scale models BIBREF16 , BIBREF17 , BIBREF18 to tree-structured encoders BIBREF19 , BIBREF20 . Models that are targetted at improving the internals of the RNN cell have also been proposed BIBREF21 , BIBREF22 . Given the importance of sequence encoding in NLP, the design of effective RNN units for this purpose remains an active area of research.", "id": 1784, "question": "Does their model have more parameters than other models?", "title": "Recurrently Controlled Recurrent Networks" }, { "answers": [ "51.5" ], "context": "State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it BIBREF0 . For text classification, one can think of this as a single reader building up an increasingly refined understanding of the content. 
In a departure from this philosophy, we propose a divide-and-conquer approach, where a team of readers each focus on different aspects of the text, and then combine their representations to make a joint decision.", "id": 1785, "question": "what state of the accuracy did they obtain?", "title": "End-to-End Multi-View Networks for Text Classification" }, { "answers": [ "High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM " ], "context": "The MVN architecture is depicted in Figure FIGREF1. First, individual selection vectors INLINEFORM0 are created, each formed by a distinct softmax-weighted sum over the word vectors of the input text. Next, these selections are sequentially transformed into views INLINEFORM1, with each view influencing the views that come after it. Finally, all views are concatenated and fed into a two-layer perceptron for classification.", "id": 1786, "question": "what models did they compare to?", "title": "End-to-End Multi-View Networks for Text Classification" }, { "answers": [ " They used Stanford Sentiment Treebank benchmark for sentiment classification task and AG English news corpus for the text classification task." ], "context": "Each selection INLINEFORM0 is constructed by focusing on a different subset of words from the original text, as determined by a softmax-weighted sum BIBREF8. Given a piece of text with INLINEFORM1 words, we represent it as a bag-of-words feature matrix INLINEFORM2 INLINEFORM3. Each row of the matrix corresponds to one word, which is represented by a INLINEFORM4 -dimensional vector, as provided by a learned word embedding table. The selection INLINEFORM5 for the INLINEFORM6 view is the softmax-weighted sum of features: DISPLAYFORM0", "id": 1787, "question": "which benchmark tasks did they experiment on?", "title": "End-to-End Multi-View Networks for Text Classification" }, { "answers": [ "" ], "context": "At the core of Natural Language Processing (NLP) neural models are pre-trained word embeddings like Word2Vec BIBREF0, GloVe BIBREF1 and ELMo BIBREF2. They help initialize the neural models, lead to faster convergence and have improved performance for numerous applications such as Question Answering BIBREF3, Summarization BIBREF4, Sentiment Analysis BIBREF5. While word embeddings are powerful when computation power and compute resources are unlimited, it becomes challenging to deploy them on-device due to their huge size.", "id": 1788, "question": "Are recurrent neural networks trained on perturbed data?", "title": "On the Robustness of Projection Neural Networks For Efficient Text Representation: An Empirical Study" }, { "answers": [ "" ], "context": "The Projection function, $\mathbb {P}$ (Figure FIGREF1), BIBREF9 used in SGNN models BIBREF6 extracts token (or character) n-gram & skip-gram features from a raw input text, $\textbf {x}$, and dynamically generates a binary projection representation, $\mathbb {P}(\mathbf {x}) \in [0,1]^{T \cdot d}$ after a Locality-Sensitive Hashing (LSH) based transformation, $\mathbb {L}$ as in", "id": 1789, "question": "How does their perturbation algorihm work?", "title": "On the Robustness of Projection Neural Networks For Efficient Text Representation: An Empirical Study" }, { "answers": [ "" ], "context": "As discussed in a recent survey BIBREF0, discriminating between similar languages, national language varieties, and dialects is an important challenge faced by state-of-the-art language identification systems.
The topic has attracted increasing attention from the CL/NLP community in recent years, with publications on similar languages of the Iberian peninsula BIBREF1 , and varieties and dialects of several languages such as Greek BIBREF2 and Romanian BIBREF3 , to name a few.", "id": 1790, "question": "Which language is divided into six dialects in the task mentioned in the paper?", "title": "Experiments in Cuneiform Language Identification" }, { "answers": [ "" ], "context": "Since its first edition in 2014, shared tasks on similar language and dialect identification have been organized together with the VarDial workshop, co-located with international conferences such as COLING, EACL, and NAACL. The first and most well-attended of these competitions was the Discriminating between Similar Languages (DSL) shared task, which was organized between 2014 and 2017 BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . The DSL provided the first benchmark for evaluation of language identification systems developed for similar languages and language varieties using the DSL Corpus Collection (DSLCC) BIBREF13 , a multilingual benchmarked dataset compiled for this purpose. In 2017 and 2018, VarDial featured evaluation campaigns with multiple shared tasks not only on language and dialect identification but also on other NLP tasks related to language and dialect variation (e.g. morphosyntactic tagging, and cross-lingual dependency parsing). With the exception of the DSL, the language and dialect identification competitions organized at VarDial focused on groups of dialects from the same language such as Arabic (ADI shared task) and German (GDI shared task).", "id": 1791, "question": "What is one of the first writing systems in the world?", "title": "Experiments in Cuneiform Language Identification" }, { "answers": [ "" ], "context": "This work discusses two information extraction systems for identifying temporal information in clinical text, submitted to SemEval-2016 Task 12 : Clinical TempEval BIBREF0 . We participated in tasks from both phases: (1) identifying text spans of time and event mentions; and (2) predicting relations between clinical events and document creation time.", "id": 1792, "question": "How do they obtain distant supervision rules for predicting relations?", "title": "Brundlefly at SemEval-2016 Task 12: Recurrent Neural Networks vs. Joint Inference for Clinical Temporal Information Extraction" }, { "answers": [ "" ], "context": "Phase 1 of the challenge required parsing clinical documents to identify Timex3 and Event temporal entity mentions in text. Timex3 entities are expressions of time, ranging from concrete dates to phrases describing intervals like “the last few months.\" Event entities are broadly defined as anything relevant to a patient's clinical timeline, e.g., diagnoses, illnesses, procedures. Entity mentions are tagged using a document collection of clinic and pathology notes from the Mayo Clinic called the THYME (Temporal History of Your Medical Events) corpus BIBREF2 .", "id": 1793, "question": "Which structured prediction approach do they adopt for temporal entity extraction?", "title": "Brundlefly at SemEval-2016 Task 12: Recurrent Neural Networks vs. Joint Inference for Clinical Temporal Information Extraction" }, { "answers": [ "" ], "context": "In Information Retrieval (IR), the search query has always been an integral part. 
When a user enters a query into an information retrieval system, the keywords they use might differ from the ones used in the documents, or they might express the same need in a different form. Considering this situation, information retrieval systems should be intelligent enough to still provide the requested information to the user. According to Spink (2001), web users enter on average 2.4 words per query; the probability of such a short query closely matching the wording of the documents is therefore extremely low [22]. The latest algorithms implement query indexing techniques and cover only the user's search history. This leads to the problem of keyword mismatch: the queries entered by the user do not match the words in the documents, a problem known as the lexical problem. The lexical problem originates from synonymy. Synonymy is the state in which two or more words have the same meaning. Thus, expanding the query by enriching each word with its synonyms will enhance the IR results.", "id": 1794, "question": "Which evaluation metric has been measured?", "title": "Improving Information Retrieval Results for Persian Documents using FarsNet" }, { "answers": [ "" ], "context": "One of the first researchers who used the method for indexing was Maron (1960) [11]. The aforementioned paper described a meticulous and novel method for retrieving information from the books in a library. This paper is also one of the pioneers of relevance and of using probabilistic indexing. Relevance feedback is the process of involving the user in assessing the retrieved documents. It was mentioned in Rocchio (1971) [15], Ide (1971) [8], and Salton (1971) [19]. In relevance feedback, the user's opinion of the retrieved documents is solicited, and with the help of this feedback the relevance or irrelevance of the documents is decided. In later research, relevance feedback has been used in combination with other methods. For instance, Rahimi (2014) [14] used relevance feedback and Latent Semantic Analysis (LSA) to increase user satisfaction. Other studies on the use of relevance feedback are Salton (1997) [18], Rui (1997) [16], and Rui (1998) [17].", "id": 1795, "question": "What is the WordNet counterpart for Persian?", "title": "Improving Information Retrieval Results for Persian Documents using FarsNet" }, { "answers": [ "" ], "context": "Since the early times of computer-based speech synthesis research, voice quality (the perceived timbre of speech) analysis/modification has attracted the interest of researchers BIBREF0. The topic of voice quality analysis finds application in various areas of speech processing such as high-quality parametric speech synthesis, expressive/emotional speech synthesis, speaker identification, emotion recognition, prosody analysis, and speech therapy. Due to the availability of reviews such as BIBREF1 and space limitations, a review of voice quality analysis methods will not be presented here.", "id": 1796, "question": "What large corpus is used for experiments?", "title": "Excitation-based Voice Quality Analysis and Modification" }, { "answers": [ "" ], "context": "We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. 
Part of the acquired data has been included in a corpus, named \"Trentino Language Testing\" in schools (TLT-school), which will be described in the following.", "id": 1797, "question": "Are any of the utterances ungrammatical?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "They used 6 indicators for proficiency (same for written and spoken), each marked as bad, medium or good by one expert." ], "context": "In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing the L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers only to the 2018 campaign, which was split into two parts: the 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most parts of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part in the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored.", "id": 1798, "question": "How is the proficiency score calculated?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "6 indicators:\n- lexical richness\n- pronunciation and fluency\n- syntactical correctness\n- fulfillment of delivery\n- coherence and cohesion\n- communicative, descriptive, narrative skills" ], "context": "The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from the A1 level, which differs in the number of questions (11 for English; 10 for German), the English and German A2 and B1 levels have 6 and 7 questions each, respectively. As for the A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German).", "id": 1799, "question": "What proficiency indicators are used to score the utterances?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "Accuracy not available: WER results are reported as 42.6 (German) and 35.9 (English)" ], "context": "Table reports some statistics extracted from the written data collected so far. 
In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces.", "id": 1800, "question": "What accuracy is achieved by the speech recognition system?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "The speech recognition system is evaluated using the WER metric." ], "context": "Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that the speech of classmates or teachers overlaps with the speech of the student speaking into her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is fairly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that the organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker does not have enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc.). Finally, it has to be pointed out that many answers are whispered and difficult to understand.", "id": 1801, "question": "How is the speech recognition system evaluated?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "Total number of transcribed utterances including Train and Test for both Eng and Ger language is 5562 (2188 cleaned)" ], "context": "In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. As a consequence, we decided to apply the following transcription rules:", "id": 1802, "question": "How many of the utterances are transcribed?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "Total number of utterances available is: 70607 (37344 ENG + 33263 GER)" ], "context": "From the above description it appears that the corpus can be effectively used in many research directions.", "id": 1803, "question": "How many utterances are in the corpus?", "title": "TLT-school: a Corpus of Non Native Children Speech" }, { "answers": [ "w.r.t Rouge-1 their model outperforms by 0.98% and w.r.t Rouge-L their model outperforms by 0.45%" ], "context": "Encoder-decoder models have been widely used in sequence-to-sequence tasks such as machine translation ( BIBREF0 , BIBREF1 ). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the desired output sequence. 
The most successful models are LSTM and GRU, as they are much easier to train than vanilla RNNs.", "id": 1804, "question": "By how much does their model outperform both the state-of-the-art systems?", "title": "Efficient Summarization with Read-Again and Copy Mechanism" }, { "answers": [ "neural attention model with a convolutional encoder with an RNN decoder and RNN encoder-decoder" ], "context": "In the past few years, there has been a lot of work on extractive summarization, where a summary is created by composing words or sentences from the source text. Notable examples are BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 and BIBREF14 . As a consequence of their extractive nature, the summary is restricted to words (sentences) in the source text.", "id": 1805, "question": "What is the state-of-the art?", "title": "Efficient Summarization with Read-Again and Copy Mechanism" }, { "answers": [ "" ], "context": "Abbreviations and acronyms appear frequently in the medical domain. According to a popular online knowledge base, 197,787 of its 3,096,346 stored abbreviations are medical abbreviations, ranking first among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms, it is often used to represent “immediately”. Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has potential to improve communication and quality of care.", "id": 1806, "question": "How do they identify abbreviations?", "title": "Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion" }, { "answers": [ "" ], "context": "The task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . However, the performance of these supervised methods mainly depends on a large amount of labeled data, which is extremely difficult to obtain for our task, since intensive care medicine texts are very rare resources in the clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. tengstrand2014eacl proposed a distributional semantics-based approach for abbreviation expansion in Swedish, but they focused only on expanding single words and cannot handle multi-word phrases. 
In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multiword expressions.", "id": 1807, "question": "What kind of model do they build to expand abbreviations?", "title": "Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion" }, { "answers": [ "" ], "context": "The overview of our approach is shown in Figure FIGREF6 . Within ICU notes (e.g., the text example in the top-left box of Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from a domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers, and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words).", "id": 1808, "question": "Do they use any knowledge base to expand abbreviations?", "title": "Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion" }, { "answers": [ "" ], "context": "Given an abbreviation as input, we expect the correct expansion to be the most semantically similar to the abbreviation, which requires that the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, and research papers and books written by intensive care medicine fellows. Together with our clinical notes data, which functions as a corpus, we train word embeddings, since the expansions of abbreviations in the clinical notes are likely to appear in these resources and also share contexts similar to the abbreviation's.", "id": 1809, "question": "In their used dataset, do they study how many abbreviations are ambiguous?", "title": "Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion" }, { "answers": [ "" ], "context": "In most cases, an abbreviation's expansion is a multi-word phrase. Therefore, we need to obtain the phrase's embedding so that we can compute its semantic similarity to the abbreviation.", "id": 1810, "question": "Which dataset do they use to build their model?", "title": "Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion" }, { "answers": [ "" ], "context": "Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred to as intent detection and slot filling, respectively. Past years have witnessed rapid developments in diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of supervised signals of slots and intents, and share knowledge between them, most existing works apply joint models that are mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, and the asynchronous bi-model BIBREF6. 
Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into an utterance-level representation for intent prediction, without interactions between the representations of slots and intents.", "id": 1811, "question": "What is the domain of their collected corpus?", "title": "CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding" }, { "answers": [ "F1 scores of 86.16 on slot filling and 94.56 on intent detection" ], "context": "In principle, slot filling is treated as a sequence labeling task, and intent detection is a classification problem. Formally, given an utterance $X = \\lbrace x_1, x_2, \\cdots , x_N \\rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \\lbrace y_1, y_2, \\cdots , y_N \\rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\\theta } : X \\rightarrow Y $ from input words to slot tags. Intent detection is designed to predict the intent label $\\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$.", "id": 1812, "question": "What was the performance on the self-collected corpus?", "title": "CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding" }, { "answers": [ "10,001 utterances" ], "context": "In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is first encoded with the Embedding Layer, then transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally predictions are made in the Inference Layer.", "id": 1813, "question": "What is the size of their dataset?", "title": "CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding" }, { "answers": [ "" ], "context": "Pre-trained word embeddings have become a de-facto standard of neural network architectures for various NLP tasks. We adopt the cased, 300d GloVe BIBREF17 to initialize word embeddings, and keep them frozen.", "id": 1814, "question": "What is the source of the CAIS dataset?", "title": "CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding" }, { "answers": [ "" ], "context": "It has been demonstrated that character-level information (e.g. capitalization and prefixes) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings.", "id": 1815, "question": "What were the baseline models?", "title": "CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding" }, { "answers": [ "" ], "context": "A number of real-world problems related to text data have been studied under the framework of natural language processing (NLP). Examples of such problems include topic categorization, sentiment analysis, machine translation, structured information extraction, and automatic summarization. Due to the overwhelming amount of text data available on the Internet from various sources such as user-generated content or digitized books, methods to automatically and intelligently process large collections of text documents are in high demand. For several text applications, machine learning (ML) models based on global word statistics like TFIDF BIBREF0 , BIBREF1 or linear classifiers are known to perform remarkably well, e.g. for unsupervised keyword extraction BIBREF2 or document classification BIBREF3 . 
However, more recently, neural network models based on vector space representations of words (like BIBREF4 ) have been shown to be of great benefit to a large number of tasks. The trend was initiated by the seminal work of BIBREF5 and BIBREF6 , who introduced word-based neural networks to perform various NLP tasks such as language modeling, chunking, named entity recognition, and semantic role labeling. A number of recent works (e.g. BIBREF6 , BIBREF7 ) also refined the basic neural network architecture by incorporating useful structures such as convolution, pooling, and parse tree hierarchies, leading to further improvements in model predictions. Overall, these ML models have made it possible to automatically and accurately assign concepts to entire documents or to sub-document levels like phrases; the assigned information can then be mined on a large scale.", "id": 1816, "question": "Are the document vectors that the authors introduce evaluated in any way other than the new way the authors propose?", "title": "\"What is Relevant in a Text Document?\": An Interpretable Machine Learning Approach" }, { "answers": [ "" ], "context": "Explanation of individual classification decisions in terms of input variables has been studied for a variety of machine learning classifiers such as additive classifiers BIBREF18 , kernel-based classifiers BIBREF19 or hierarchical networks BIBREF11 . Model-agnostic methods for explanations relying on random sampling have also been proposed BIBREF20 , BIBREF21 , BIBREF22 . Despite their generality, the latter incur an additional computational cost due to the need to process the whole sample to provide a single explanation. Other methods are more specific to deep convolutional neural networks used in computer vision: the authors of BIBREF8 proposed a network propagation technique based on deconvolutions to reconstruct input image patterns that are linked to a particular feature map activation or prediction. The work of BIBREF9 aimed at revealing salient structures within images related to a specific class by computing the corresponding prediction score derivative with respect to the input image. The latter method reveals the sensitivity of the classifier decision to some local variation of the input image, and is related to sensitivity analysis BIBREF23 , BIBREF24 . In contrast, the LRP method of BIBREF12 corresponds to a full decomposition of the classifier output for the current input image. It is based on a layer-wise conservation principle and reveals parts of the input space that either support or speak against a specific classification decision. Note that the LRP framework can be applied to various models such as kernel support vector machines and deep neural networks BIBREF12 , BIBREF17 . We refer the reader to BIBREF14 for a comparison of the three explanation methods, and to BIBREF13 for a view of particular instances of LRP as a “deep Taylor decomposition” of the decision function.", "id": 1817, "question": "According to the authors, why does the CNN model exhibit a higher level of explainability?", "title": "\"What is Relevant in a Text Document?\": An Interpretable Machine Learning Approach" }, { "answers": [ "" ], "context": "In this section we describe our method for identifying words in a text document that are relevant with respect to a given category of a classification problem. 
For this, we assume that we are given a vector-based word representation and a neural network that has already been trained to accurately map documents to their actual category. Our method can be divided into four steps: (1) Compute an input representation of a text document based on word vectors. (2) Forward-propagate the input representation through the convolutional neural network until the output is reached. (3) Backward-propagate the output through the network using the layer-wise relevance propagation (LRP) method, until the input is reached. (4) Pool the relevance scores associated with each input variable of the network onto the words to which they belong. As a result of this four-step procedure, a decomposition of the prediction score for a category onto the words of the document is obtained. Decomposed terms are called relevance scores. These relevance scores can be viewed as highlighted text or can be used to form a list of top-words in the document. The whole procedure is also described visually in Figure 1 . While we detail in this section the LRP method for a specific network architecture and with predefined choices of layers, the method can in principle be extended to any architecture composed of a similar or larger number of layers.", "id": 1818, "question": "Does the LRP method work in settings that contextualize the words with respect to one another?", "title": "\"What is Relevant in a Text Document?\": An Interpretable Machine Learning Approach" }, { "answers": [ "" ], "context": "Real-time information is key for decision making in highly technical domains such as finance. The explosive growth of the financial technology industry (Fintech) continued in 2016, partially due to the current interest in the market for Artificial Intelligence-based technologies.", "id": 1819, "question": "How do they incorporate lexicon into the neural network?", "title": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? Inferring Sentiment towards Brands from Financial News Headlines" }, { "answers": [ "" ], "context": "While image and sound come with a natural high-dimensional embedding, the issue of which is the best representation is still an open research problem in the context of natural language and text. It is beyond the scope of this paper to give a thorough overview of word representations; for this we refer the interested reader to the excellent review provided by BIBREF1 . Here, we will just introduce the main representations that are related to the proposed method.", "id": 1820, "question": "What is the source of their lexicon?", "title": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? Inferring Sentiment towards Brands from Financial News Headlines" }, { "answers": [ "" ], "context": "The data consists of a set of financial news headlines, crawled from several online outlets such as Yahoo Finance, where each sentence contains one or more company names/brands.", "id": 1821, "question": "What was their performance?", "title": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? Inferring Sentiment towards Brands from Financial News Headlines" }, { "answers": [ "" ], "context": "In Figure FIGREF5 , we can see the overall architecture of our model.", "id": 1822, "question": "How long is the dataset used for training?", "title": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? 
Inferring Sentiment towards Brands from Financial News Headlines" }, { "answers": [ "" ], "context": "Minimal preprocessing was adopted in our approach: we replaced the target company's name with a fixed word <company> and numbers with <number>. The sentences were then tokenized using spaces as separators, keeping punctuation symbols as separate tokens.", "id": 1823, "question": "What embeddings do they use?", "title": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? Inferring Sentiment towards Brands from Financial News Headlines" }, { "answers": [ "" ], "context": "Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing.\" implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on.", "id": 1824, "question": "did they use other pretrained language models besides bert?", "title": "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions" }, { "answers": [ "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and that are of sufficient length to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes\" or “no\"" ], "context": "Yes/No questions make up a subset of the reading comprehension datasets CoQA BIBREF3 , QuAC BIBREF4 , and HotPotQA BIBREF5 , and are present in the ShARC BIBREF10 dataset. These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions. 
However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes\" answers.", "id": 1825, "question": "how was the dataset built?", "title": "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions" }, { "answers": [ "" ], "context": "An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either “yes\" or “no\". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of it.", "id": 1826, "question": "what is the size of BoolQ dataset?", "title": "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions" }, { "answers": [ "Remove numbers and interjections" ], "context": "Almost all political decisions and political opinions are, in one way or another, expressed in written or spoken texts. Great leaders in history become famous for their ability to motivate the masses with their speeches; parties publish policy programmes before elections in order to provide information about their policy objectives; parliamentary decisions are discussed and deliberated on the floor in order to exchange opinions; members of the executive in most political systems are legally obliged to provide written or verbal answers to questions from legislators; and citizens express their opinions about political events on internet blogs or in public online chats. Political texts and speeches are everywhere that people express their political opinions and preferences.", "id": 1827, "question": "what processing was done on the speeches before being parsed?", "title": "Database of Parliamentary Speeches in Ireland, 1919-2013" }, { "answers": [ "" ], "context": "Entity extraction is one of the most important NLP components. Most NLP tools (e.g., NLTK, Stanford CoreNLP, etc.), including commercial services (e.g., Google Cloud API, Alchemy API, etc.), provide entity extraction functions to recognize named entities (e.g., PERSON, LOCATION, ORGANIZATION, etc.) from texts. Some studies have defined fine-grained entity types and developed extraction methods BIBREF0 based on these types. However, these methods cannot comprehensively cover domain-specific entities. For instance, a real estate search engine needs housing equipment names to index these terms for providing fine-grained search conditions. There is a significant demand for constructing user-specific entity dictionaries, such as the case of cuisine and ingredient names for restaurant services. A straightforward solution is to prepare a set of these entity names as a domain-specific dictionary. Therefore, this paper focuses on the entity population task, which is the task of collecting entities that belong to an entity type required by a user.", "id": 1828, "question": "What programming language is the tool written in?", "title": "A Lightweight Front-end Tool for Interactive Entity Population" }, { "answers": [ "10 Epochs: Pearson-Spearman correlation drops 60 points when the error increases by 20%\n50 Epochs: Pearson-Spearman correlation drops 55 points when the error increases by 20%" ], "context": "In recent times, pre-trained contextual language models have led to significant improvements in performance on many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. 
The strength of the BERT model FIGREF2 stems from its transformer BIBREF1 based encoder architecture FIGREF1. While it is still not very clear why BERT along with its embedding works so well for downstream tasks when it is fine-tuned, there has been some work in this direction that gives some important clues BIBREF2, BIBREF3.", "id": 1829, "question": "What is the performance change of the textual semantic similarity task when no error and maximum errors (noise) are present?", "title": "User Generated Data: Achilles' heel of BERT" }, { "answers": [ "SST-2 dataset" ], "context": "In recent years pre-trained language models (e.g. ELMo BIBREF5, BERT BIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERT BIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answering BIBREF0, machine translation BIBREF6, etc. BERT has been able to establish state-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve search BIBREF7.", "id": 1830, "question": "Which sentiment analysis data set has a larger performance drop when a 10% error is introduced?", "title": "User Generated Data: Achilles' heel of BERT" }, { "answers": [ "" ], "context": "For our experiments, we use the pre-trained BERT implementation provided by the huggingface transformers library. We use the BERT-Base uncased model. We work with three datasets, namely IMDB movie reviews BIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13.", "id": 1831, "question": "What kind of noise is present in typical industrial data?", "title": "User Generated Data: Achilles' heel of BERT" }, { "answers": [ "" ], "context": "Let us discuss the results from the above-mentioned experiments. We show the plots of accuracy vs. noise for each of the tasks. For IMDB, we fine-tune the model for the sentiment analysis task. We plot the F1 score vs. % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine-tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine-tuning for 50 epochs.", "id": 1832, "question": "What is the reason behind the drop in performance using BERT for some popular task?", "title": "User Generated Data: Achilles' heel of BERT" }, { "answers": [ "" ], "context": "Pre-trained feature extractors, such as BERT BIBREF0 for natural language processing and VGG BIBREF1 for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NLP tasks, including natural language inference (NLI), named entity recognition (NER), sentiment analysis, etc. These models follow a pre-training paradigm: they are trained on a large amount of unlabeled text via a task that resembles language modeling BIBREF2, BIBREF3 and are then fine-tuned on a smaller amount of “downstream” data, which is labeled for a specific task. 
Pre-trained models usually achieve higher accuracy than any model trained on downstream data alone.", "id": 1833, "question": "How do they observe that fine-tuning BERT on a specific task does not improve its prunability?", "title": "Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning" }, { "answers": [ "The increase is linear: the lowest is on average 2.0, medium around 3.5, and the largest is 6.0" ], "context": "Neural network pruning involves examining a trained network and removing parts deemed to be unnecessary by some heuristic saliency criterion. One might remove weights, neurons, layers, channels, attention heads, etc. depending on which heuristic is used. Below, we describe three different lenses through which we might interpret pruning.", "id": 1834, "question": "How much is pre-training loss increased in Low/Medium/Hard level of pruning?", "title": "Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning" }, { "answers": [ "" ], "context": "The attention mechanism BIBREF1 in neural networks can be used to interpret and visualize model behavior by selecting the most pertinent pieces of information instead of all available information. For example, in BIBREF0 , a hierarchical attention network (Han) is created and tested on the classification of product and movie reviews. As a side effect of employing the attention mechanism, sentences (and words) that are considered important to the model can be highlighted, and color intensity corresponds to the level of importance (darker color indicates higher importance).", "id": 1835, "question": "How do they gather human reviews?", "title": "Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis" }, { "answers": [ "" ], "context": "Neural networks are powerful learning algorithms, but are also some of the most complex. This is made worse by the non-deterministic nature of neural network training; a small change in a learning parameter can drastically affect the network's learning ability. This has led to the development of methodologies for understanding and uncovering not just neural networks, but black box models in general. The interpretation of deep networks is a young field of research. We refer readers to BIBREF4 for a comprehensive overview of different methods for understanding and visualizing deep neural networks. More recent developments include DeepLIFT BIBREF5 (not yet applicable to RNNs), layerwise relevance propagation BIBREF6 (only very recently adapted to textual input and LSTMs BIBREF7 , BIBREF8 ), and LIME BIBREF9 .", "id": 1836, "question": "Do they explain model predictions solely on attention weights?", "title": "Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis" }, { "answers": [ "" ], "context": "In Table TABREF3 , we see the bottom visualization where the weights are uniform at the point of escalation. However, on the 2nd turn, the Han had produced more distinct weights. It is clear from this example that the importance of a single sample can change drastically as a sequence progresses. 
Using these changes in attention over the sequence, we formalized a set of rules to create an alternative visualization for the entire sequence, to be applied in cases where the attention weights are uniform over all samples at the stopping point.", "id": 1837, "question": "Can their method of creating more informative visuals be applied to tasks other than turn taking in conversations?", "title": "Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis" }, { "answers": [ "" ], "context": "On-line eSports events provide a new setting for observing large-scale social interaction focused on a visual story that evolves over time—a video game. While watching sporting competitions has been a major source of entertainment for millennia, and is a significant part of today's culture, eSports brings this to a new level on several fronts. One is the global reach: the same games are played around the world and across cultures by speakers of several languages. Another is the scale of on-line text-based discourse during matches that is public and amenable to analysis. One of the most popular games, League of Legends, drew 43 million views for the 2016 world series final matches (broadcast in 18 languages) and a peak concurrent viewership of 14.7 million. Finally, players interact through what they see on screen while fans (and researchers) can see exactly the same views.", "id": 1838, "question": "What was the baseline?", "title": "Video Highlight Prediction Using Audience Chat Reactions" }, { "answers": [ "40 minutes" ], "context": "We briefly discuss a small sample of the related work on language and vision datasets, summarization, and highlight prediction. There has been a surge of vision and language datasets focusing on captions over the last few years BIBREF0 , BIBREF1 , BIBREF2 , followed by efforts to focus on more specific parts of images BIBREF3 , or referring expressions BIBREF4 , or on the broader context BIBREF5 . For video, similar efforts have collected descriptions BIBREF6 , while others use existing descriptive video service (DVS) sources BIBREF7 , BIBREF8 . Beyond descriptions, other datasets use questions to relate images and language BIBREF9 , BIBREF10 . This approach is extended to movies in MovieQA.", "id": 1839, "question": "What is the average length of the recordings?", "title": "Video Highlight Prediction Using Audience Chat Reactions" }, { "answers": [ "" ], "context": "Our dataset covers 218 videos from NALCS and 103 from LMS, for a total of 321 videos from week 1 to week 9 of the 2017 spring series of each tournament. Each week there are 10 matches for NALCS and 6 matches for LMS. Matches are best of 3, so consist of two or three games. The first and third games are used for training. The second games in the first 4 weeks are used as validation and the remaining second games are used as test. Table TABREF3 lists the numbers of videos in the train, validation, and test subsets.", "id": 1840, "question": "How big was the dataset presented?", "title": "Video Highlight Prediction Using Audience Chat Reactions" }, { "answers": [ "Best model achieved F-score 74.7 on NALCS and F-score of 70.0 on LMS on test set" ], "context": "In this section, we explain the proposed models and components. We first describe the notation and definition of the problem, plus the evaluation metric used. Next, we explain our vision model V-CNN-LSTM and language model L-Char-LSTM. 
Finally, we describe the joint multimodal model INLINEFORM0 -LSTM.", "id": 1841, "question": "What were their results?", "title": "Video Highlight Prediction Using Audience Chat Reactions" }, { "answers": [ "" ], "context": "Teaching computers to answer complex natural language questions requires sophisticated reasoning and human language understanding. We investigate generic natural language interfaces for simple arithmetic questions on semi-structured tables. Typical questions for this task are topic independent and may require performing multiple discrete operations such as aggregation, comparison, superlatives or arithmetic.", "id": 1842, "question": "Does a neural scoring function take both the question and the logical form as inputs?", "title": "Neural Multi-Step Reasoning for Question Answering on Semi-Structured Tables" }, { "answers": [ "" ], "context": "We briefly mention here two main types of QA systems related to our task: semantic parsing-based and embedding-based. Semantic parsing-based methods perform a functional parse of the question that is further converted to a machine-understandable program and executed on a knowledge base or database. For QA on semi-structured tables with multi-compositional queries, BIBREF0 generate and rank candidate logical forms with a log-linear model, resorting to hand-crafted features for scoring. In contrast, we learn neural features for each question and the paraphrase of each candidate logical form. Paraphrases and hand-crafted features have successfully facilitated semantic parsers targeting simple factoid BIBREF1 and compositional questions BIBREF2 . Compositional questions are also the focus of BIBREF3 , who construct logical forms from the question embedding through operations parametrized by RNNs, thus losing interpretability. A similar fully neural, end-to-end differentiable network was proposed by BIBREF4 .", "id": 1843, "question": "What is the source of the paraphrases of the questions?", "title": "Neural Multi-Step Reasoning for Question Answering on Semi-Structured Tables" }, { "answers": [ "" ], "context": "We describe our QA system. For every question $q$ : i) a set of candidate logical forms $\\lbrace z_i\\rbrace _{i = 1, \\ldots , n_q}$ is generated using the method of BIBREF0 ; ii) each such candidate program $z_i$ is transformed into an interpretable textual representation $t_i$ ; iii) all $t_i$ 's are jointly embedded with $q$ in the same vector space and scored using a neural similarity function; iv) the logical form $z_i^*$ corresponding to the highest ranked $t_i^*$ is selected as the machine-understandable translation of question $q$ and executed on the input table to retrieve the final answer. Our contributions are the novel models that perform steps ii) and iii), while for step i) we rely on the work of BIBREF0 (henceforth: PL2015).", "id": 1844, "question": "Does the dataset they use differ from the one used by Pasupat and Liang, 2015?", "title": "Neural Multi-Step Reasoning for Question Answering on Semi-Structured Tables" }, { "answers": [ "" ], "context": "Annotated corpora from the biomedical domain, in particular for non-English texts, are scarce. There are two main reasons for that: the generation of new annotated data is expensive due to the need for expert knowledge, and there are privacy issues: the patient and the physician should not be identifiable from the texts. 
So, although annotated data is a highly valuable asset for the research community, it is very difficult to access.", "id": 1845, "question": "Did they experiment on this corpus?", "title": "Creation of an Annotated Corpus of Spanish Radiology Reports" }, { "answers": [ "" ], "context": "Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by many factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use this variable information. In particular, in the banking industry and financial services, armies of analysts are dedicated to poring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information can be extracted from the large amount of text and quantitative information involved in the analysis.", "id": 1846, "question": "Is the model compared against a linear regression baseline?", "title": "DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News" }, { "answers": [ "mean prediction accuracy 0.99582651\nS&P 500 Accuracy 0.99582651" ], "context": "In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. Finally, we introduce the differential privacy framework and the loss function.", "id": 1847, "question": "What is the prediction accuracy of the model?", "title": "DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News" }, { "answers": [ "historical S&P 500 component stocks\n 306242 news articles" ], "context": "The ARMA model is one of the most widely used linear models in time series prediction [17], where the future value is assumed to be a linear combination of the past errors and past values. ARMA is used to set up the midterm stock prediction problem. Let ${X}_t^\\text{A}$ be the variable based on ARMA at time $t$; then we have", "id": 1848, "question": "What is the dataset used in the paper?", "title": "DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News" }, { "answers": [ "" ], "context": "Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on the computational treatment of subjectivity in a text [19]-[20].", "id": 1849, "question": "How does the differential privacy mechanism work?", "title": "DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News" }, { "answers": [ "it systematically holds out inputs in the training set containing the basic primitive verb \"jump\", and tests on sequences containing that verb."
], "context": "A crucial property underlying the expressive power of human language is its systematicity BIBREF0 , BIBREF1 : syntactic or grammatical rules allow arbitrary elements to be combined in novel ways, making the number of sentences possible in a language to be exponential in the number of its basic elements. Recent work has shown that standard deep learning methods in natural language processing fail to capture this important property: when tested on unseen combinations of known elements, state-of-the-art models fail to generalize BIBREF2 , BIBREF3 , BIBREF4 . It has been suggested that this failure represents a major deficiency of current deep learning models, especially when they are compared to human learners BIBREF5 , BIBREF0 .", "id": 1850, "question": "How does the SCAN dataset evaluate compositional generalization?", "title": "Compositional generalization in a deep seq2seq model by separating syntax and semantics" }, { "answers": [ "" ], "context": "Representation learning has been an active research area for more than 30 years BIBREF1, with the goal of learning high level representations which separates different explanatory factors of the phenomena represented by the input data BIBREF2, BIBREF3. Disentangled representations provide models with exponentially higher ability to generalize, using little amount of labels, to new conditions by combining multiple sources of variations.", "id": 1851, "question": "Is their model fine-tuned also on all available data, what are results?", "title": "Effectiveness of self-supervised pre-training for speech recognition" }, { "answers": [ "The system outperforms by 27.7% the LSTM model, 38.5% the RL-SPINN model and 41.6% the Gumbel Tree-LSTM" ], "context": "This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.", "id": 1852, "question": "How much does this system outperform prior work?", "title": "Cooperative Learning of Disjoint Syntax and Semantics" }, { "answers": [ "The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM" ], "context": "The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. 
The proceedings are designed for printing on A4 paper.", "id": 1853, "question": "What are the baseline systems that are compared against?", "title": "Cooperative Learning of Disjoint Syntax and Semantics" }, { "answers": [ "" ], "context": "Automatic spoken assessment systems are becoming increasingly popular, especially for English, given the high demand around the world for learning English as a second language BIBREF0, BIBREF1, BIBREF2, BIBREF3. In addition to assessing a candidate's English ability, such as fluency and pronunciation, and giving feedback to the candidate, these automatic systems also need to ensure the integrity of the candidate's score by detecting malpractice, as shown in Figure FIGREF1. Malpractice is an action by a candidate that breaks the assessment regulations and potentially threatens the reliability of the exam and the associated certification. Malpractice can take a range of forms in spoken language assessment scenarios, such as using or trying to use unauthorised materials, impersonation, speaking irrelevantly to prompts/questions, speaking in his/her first language (L1) instead of the target language for spoken tests, etc. This work aims to investigate the problem of automatically detecting impersonation, in which a candidate attempts to impersonate another in a speaking test. This is closely related to speaker verification.", "id": 1854, "question": "What standard large speaker verification corpora is used for evaluation?", "title": "Non-native Speaker Verification for Spoken Language Assessment" }, { "answers": [ "BULATS i-vector/PLDA\nBULATS x-vector/PLDA\nVoxCeleb x-vector/PLDA\nPLDA adaptation (X1)\n Extractor fine-tuning (X2) " ], "context": "In this work both i-vector and x-vector representations are used. For the i-vector speaker representation, the form described in BIBREF4, BIBREF19 is used. This section will just discuss the x-vector speaker representation, as this is the form that is adapted to the non-native verification task.", "id": 1855, "question": "What systems are tested?", "title": "Non-native Speaker Verification for Spoken Language Assessment" }, { "answers": [ "" ], "context": "Domain adaptation is a machine learning paradigm that aims at improving the generalization performance on a new (target) domain by using a dataset from the original (source) domain. Suppose that, as the source domain dataset, we have a captioning corpus consisting of images of daily life, where each image has captions. Suppose also that we would like to generate captions for exotic cuisine, which is rare in the corpus. It is usually very costly to make a new corpus for the target domain, i.e., taking and captioning those images. The research question here is how we can leverage the source domain dataset to improve the performance on the target domain.", "id": 1856, "question": "How many examples are there in the source domain?", "title": "Domain Adaptation for Neural Networks by Parameter Augmentation" }, { "answers": [ "" ], "context": "There are several recent studies applying domain adaptation methods to deep neural networks. However, few studies have focused on improving the fine-tuning and dual-output methods in the supervised setting.", "id": 1857, "question": "How many examples are there in the target domain?", "title": "Domain Adaptation for Neural Networks by Parameter Augmentation" }, { "answers": [ "" ], "context": "We start with the basic notation and formalization for domain adaptation. 
Let $\\mathcal {X}$ be the set of inputs and $\\mathcal {Y}$ be the outputs. We have a source domain dataset $D^s$ , which is sampled from some distribution $\\mathcal {D}^s$ . Also, we have a target domain dataset $D^t$ , which is sampled from another distribution $\\mathcal {D}^t$ . Since we are considering supervised settings, each element of the datasets has a form of input output pair $(x,y)$ . The goal of domain adaptation is to learn a function $f : \\mathcal {X} \\rightarrow \\mathcal {Y}$ that models the input-output relation of $D^t$ . We implicitly assume that there is a connection between the source and target distributions and thus can leverage the information of the source domain dataset. In the case of image caption generation, the input $x$ is an image (or the feature vector of an image) and $\\mathcal {Y}$0 is the caption (a sequence of words).", "id": 1858, "question": "Did they only experiment with captioning task?", "title": "Domain Adaptation for Neural Networks by Parameter Augmentation" }, { "answers": [ "" ], "context": "1.20pt", "id": 1859, "question": "how well this method is compared to other method?", "title": "Crowd Sourced Data Analysis: Mapping of Programming Concepts to Syntactical Patterns" }, { "answers": [ "VQA and GeoQA" ], "context": "This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations.", "id": 1860, "question": "What benchmark datasets they use?", "title": "Learning to Compose Neural Networks for Question Answering" }, { "answers": [ "" ], "context": "Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 .", "id": 1861, "question": "what is the proposed Progressive Dynamic Hurdles method?", "title": "The Evolved Transformer" }, { "answers": [ "" ], "context": "RNNs have long been used as the default option for applying neural networks to sequence modeling BIBREF4 , BIBREF5 , with LSTM BIBREF8 and GRU BIBREF9 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet BIBREF10 , Gated Convolution Networks BIBREF11 , Conv Seq2Seq BIBREF6 and Dynamic Lightweight Convolution model BIBREF12 . 
Perhaps the most promising architecture in this direction is the Transformer architecture BIBREF7 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types.", "id": 1862, "question": "What is in the model search space?", "title": "The Evolved Transformer" }, { "answers": [ "" ], "context": "We employ evolution-based architecture search because it is simple and has been shown to be more efficient than reinforcement learning when resources are limited BIBREF0 . We use the same tournament selection algorithm BIBREF28 as Real et al., with the aging regularization omitted, and so we encourage the reader to view their in-depth description of the method. In the interest of saving space, we will only give a brief overview of the algorithm here.", "id": 1863, "question": "How much energy did the NAS consume?", "title": "The Evolved Transformer" }, { "answers": [ "" ], "context": "Our encoding search space is inspired by the NASNet search space BIBREF1 , but is altered to allow it to express architecture characteristics found in recent state-of-the-art feed-forward seq2seq networks. Crucially, we ensured that the search space can represent the Transformer, so that we can seed the search process with the Transformer itself.", "id": 1864, "question": "How does Progressive Dynamic Hurdles work?", "title": "The Evolved Transformer" }, { "answers": [ "" ], "context": "There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as in finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However, they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques.", "id": 1865, "question": "Do they beat current state-of-the-art on SICK?", "title": "MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity" }, { "answers": [ "They use MonaLog for data augmentation to fine-tune BERT on this task" ], "context": "The goal of NLI is to determine, given a premise set $P$ and a hypothesis sentence $H$, whether $H$ follows from the meaning of $P$ BIBREF21. In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (H), Contradict (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure FIGREF1. Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows $\uparrow ,\downarrow $, which are the basic primitives of our logic) to tokens in text. 
These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation $\downarrow $, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is finally a particular sequence of edits (e.g., see Figure FIGREF13) that derive the hypothesis text from the premise text and yield an entailment or contradiction.", "id": 1866, "question": "How do they combine MonaLog with BERT?", "title": "MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity" }, { "answers": [ "They derive it from Wordnet" ], "context": "Given an input premise $P$, MonaLog first polarizes each of its tokens and constituents, calling the system described by BIBREF17, which performs polarization on a CCG parse tree. For example, a polarized $P$ could be every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$. Note that since we ignore morphology in the system, tokens are represented by lemmas.", "id": 1867, "question": "How do they select monotonicity facts?", "title": "MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity" }, { "answers": [ "" ], "context": "Question Answering is a task that requires capabilities beyond simple NLP, since it involves both linguistic techniques and inference abilities. Both the document sources and the questions are expressed in natural language, which is ambiguous and complex to understand. To perform such a task, a model in fact needs to understand the underlying meaning of the text. Achieving this ability is quite challenging for a machine, since it requires a reasoning phase (chaining facts, basic deductions, etc.) over knowledge extracted from the plain input data. In this article, we focus on two Question Answering tasks: Reasoning Question Answering (RQA) and Reading Comprehension (RC). These tasks are tested by submitting questions to be answered directly after reading a piece of text (e.g. a document or a paragraph).", "id": 1868, "question": "How does the model recognize entities and their relation to answers at inference time when answers are not accessible?", "title": "Question Dependent Recurrent Entity Network for Question Answering" }, { "answers": [ "" ], "context": "Teaching machines to learn reading comprehension is one of the core tasks in the NLP field. Recently, the machine comprehension task has attracted much attention among NLP researchers. We have witnessed significant progress since the release of large-scale datasets like SQuAD BIBREF0 , MS-MARCO BIBREF1 , TriviaQA BIBREF2 , CNN/Daily Mail BIBREF3 and Children's Book Test BIBREF4 . The essential problem of machine comprehension is to predict the correct answer to a relevant question by referring to a given passage. 
If a machine can obtain a good score from predicting the right answer, we can say the machine is capable of understanding the given context.", "id": 1869, "question": "What other solutions do they compare to?", "title": "Smarnet: Teaching Machines to Read and Comprehend Like Human" }, { "answers": [ "" ], "context": "The goal of the open-domain MC task is to infer the proper answer from the given text. For notation, we are given a passage INLINEFORM0 and a question INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are the lengths of the passage and the question. Each token is denoted as INLINEFORM4 , where INLINEFORM5 is the word embedding extracted from pre-trained word embedding lookups, and INLINEFORM6 is the char-level matrix representing the one-hot encoding of characters. The model should read and comprehend the interactions between INLINEFORM7 and INLINEFORM8 , and predict an answer INLINEFORM9 based on a continuous sub-span of INLINEFORM10 .", "id": 1870, "question": "How does the gating mechanism combine word and character information?", "title": "Smarnet: Teaching Machines to Read and Comprehend Like Human" }, { "answers": [ "" ], "context": "In recent years, statistical machine translation (SMT) systems have generated state-of-the-art performance for most language pairs. Recently, systems using neural machine translation (NMT) were able to outperform SMT systems in several evaluations. These models are able to generate more fluent and accurate translations for most sentences.", "id": 1871, "question": "Which dataset do they use?", "title": "Pre-Translation for Neural Machine Translation" }, { "answers": [ "" ], "context": "The idea of linearly combining machine translation systems using different paradigms has already been used successfully for SMT and rule-based machine translation (RBMT) BIBREF2 , BIBREF3 . They built an SMT system that post-edits the output of an RBMT system. Using the combination of SMT and RBMT, they could outperform both single systems.", "id": 1872, "question": "How is the PBMT system trained?", "title": "Pre-Translation for Neural Machine Translation" }, { "answers": [ "" ], "context": "Starting with the initial work on word-based translation systems BIBREF12 , phrase-based machine translation BIBREF13 , BIBREF14 segments the sentence into continuous phrases that are used as basic translation units. This allows for many-to-many alignments.", "id": 1873, "question": "Which NMT architecture do they use?", "title": "Pre-Translation for Neural Machine Translation" }, { "answers": [ "" ], "context": "In this work, we want to combine the advantages of PBMT and NMT. Using the combined system, we should be able to generate a translation for all words that occur at least once in the training data, while maintaining the high-quality NMT translations for most sentences. Motivated by several approaches to simplifying the translation process for PBMT using preprocessing, we will translate the source as a preprocessing step using the phrase-based machine translation system.", "id": 1874, "question": "Do they train the NMT model on PBMT outputs?", "title": "Pre-Translation for Neural Machine Translation" }, { "answers": [ "" ], "context": "Kurdish is an Indo-European language mainly spoken in central and eastern Turkey, northern Iraq and Syria, and western Iran. It is a less-resourced language BIBREF0, in other words, a language for which general-purpose grammars and raw internet-based corpora are the main existing resources. 
The language is spoken in five main dialects, namely, Kurmanji (aka Northern Kurdish), Sorani (aka Central Kurdish), Southern Kurdish, Zazaki and Gorani BIBREF1.", "id": 1875, "question": "Is the corpus annotated?", "title": "Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish" }, { "answers": [ "" ], "context": "Although the initiative to create a corpus for Kurdish dates back to 1998 BIBREF5, efforts in creating machine-readable corpora for Kurdish are recent. The first machine-readable corpus for Kurdish is the Leipzig Corpora Collection, which is constructed using different sources on the Web BIBREF6. Later, Pewan BIBREF2 and Bianet BIBREF7 were developed as general-purpose corpora based on news articles. Kurdish corpora have also been constructed for specific tasks such as dialectology BIBREF8, BIBREF3, machine transliteration BIBREF9, and part-of-speech (POS) annotation BIBREF10, BIBREF11. However, to the best of our knowledge, there is currently no domain-specific corpus for Kurdish dialects.", "id": 1876, "question": "How is the corpus normalized?", "title": "Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish" }, { "answers": [ "Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study" ], "context": "KTC is composed of 31 educational textbooks published from 2011 to 2018 on various topics by the MoE. We received the material from the MoE partly in different versions of Microsoft Word and partly in Adobe InDesign formats. In the first step, we categorized each textbook based on the topics and chapters. As the original texts were not in Unicode, we converted the content to Unicode. This step was followed by a pre-processing stage where the texts were normalized by replacing the zero-width non-joiner (ZWNJ) BIBREF2 and manually verifying the orthography based on the reference orthography of the Kurdistan Region of Iraq. In the normalization process, we did not remove punctuation and special characters so that the corpus can be easily adapted to our current task and also to future tasks where the integrity of the text may be required.", "id": 1877, "question": "What are the 12 categories devised?", "title": "Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish" }, { "answers": [ "" ], "context": "Previously, researchers have addressed the challenges in Kurdish corpora development BIBREF2, BIBREF13, BIBREF3. We highlight two main challenges we faced during the KTC development. First, most of the written Kurdish resources have not been digitized BIBREF14, or they are either not publicly available or not fully convertible. Second, Kurdish text processing suffers from different orthographic issues BIBREF9, mainly due to the lack of a standard orthography and the usage of non-Unicode keyboards. Therefore, we carried out a semi-automatic conversion, which made the process costly in terms of time and human assistance.", "id": 1878, "question": "Is the corpus annotated with a phonetic transcription?", "title": "Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish" }, { "answers": [ "" ], "context": "We presented KTC, the Kurdish Textbook Corpus, as the first domain-specific corpus for Sorani Kurdish. This corpus will pave the way for further developments in Kurdish language processing. We have made the corpus available at https://github.com/KurdishBLARK/KTC for non-commercial use. 
We are currently working on a project on Sorani spelling error detection and correction. As future work, we are aiming to develop a similar corpus for all Kurdish dialects, particularly Kurmanji.", "id": 1879, "question": "Is the corpus annotated with Part-of-Speech tags?", "title": "Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish" }, { "answers": [ "" ], "context": "Language identification (“”) is the task of determining the natural language that a document or part thereof is written in. Recognizing text in a specific language comes naturally to a human reader familiar with the language. intro:langid presents excerpts from Wikipedia articles in different languages on the topic of Natural Language Processing (“NLP”), labeled according to the language they are written in. Without referring to the labels, readers of this article will certainly have recognized at least one language in intro:langid, and many are likely to be able to identify all the languages therein.", "id": 1880, "question": "what evaluation methods are discussed?", "title": "Automatic Language Identification in Texts: A Survey" }, { "answers": [ "Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier." ], "context": "is in some ways a special case of text categorization, and previous research has examined applying standard text categorization methods to BIBREF7 , BIBREF8 .", "id": 1881, "question": "what are the off-the-shelf systems discussed in the paper?", "title": "Automatic Language Identification in Texts: A Survey" }, { "answers": [ "" ], "context": "Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or, more recently, asynchronous video interview. For the latter, candidates connect to a platform and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because it gives them access to a larger pool of candidates, and it speeds up the application processing time. In addition, it allows candidates to do the interview whenever and wherever it suits them the most. However, given the large number of these asynchronous interviews, they may quickly become unmanageable for recruiters. The highly structured characteristic of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity, and reduces inter-recruiter variability BIBREF0 . Moreover, recent advances in Social Signal Processing (SSP) BIBREF1 have enabled automated candidate assessment BIBREF2 , and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, which are known to be difficult to deploy for Social Computing because of the difficulty of obtaining large-scale annotations of social behaviors. 
Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: hirable and not hirable. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance of question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview.", "id": 1882, "question": "How is \"hirability\" defined?", "title": "HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews" }, { "answers": [ "" ], "context": "To the best of our knowledge, only one corpus of interviews with real open positions has been collected and subjected to automatic analysis BIBREF3 . This corpus consists of face-to-face job interviews for a short marketing assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews, including two corpora built at the Massachusetts Institute of Technology BIBREF4 , BIBREF5 , and a corpus of students in services related to hospitality BIBREF6 . Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees BIBREF7 , a corpus of students from Bangalore University BIBREF8 and a corpus collected through the use of crowdsourcing tools BIBREF2 . Some researchers are also interested in online video resumes and have compiled a corpus of video CVs from YouTube BIBREF9 . A first impressions challenge dataset was also supplemented by hirability annotation BIBREF10 . Some corpora are annotated by experts or students in psychology BIBREF7 , BIBREF2 , BIBREF3 , BIBREF11 . Other corpora have used crowdsourcing platforms or naive observers BIBREF8 for annotation. Table TABREF2 contains a summary of the corpora of job interviews used in previous works.", "id": 1883, "question": "Have the candidates given their consent to have their videos used for the research?", "title": "HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews" }, { "answers": [ "" ], "context": "Features: Recent advances in SSP have offered toolboxes to extract features from audio BIBREF13 and video streams BIBREF14 . As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) BIBREF15 , BIBREF12 . Features derived from facial expressions (facial action units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues BIBREF2 . 
Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates. In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) BIBREF12 , topic modeling BIBREF5 , bags of words or, more recently, document embeddings BIBREF7 .", "id": 1884, "question": "Do they analyze if their system has any bias?", "title": "HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews" }, { "answers": [ "" ], "context": "Neural networks have proven to be successful in numerous Social Computing tasks. Multiple neural network architectures have outperformed hand-crafted features for emotion detection in videos BIBREF18 , facial landmark detection BIBREF14 , and document classification BIBREF19 . These results are explained by the capability of neural networks to automatically perform useful transformations on low-level features. Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information, enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms allow the model to focus only on important moments during dyadic conversations BIBREF20 . Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms BIBREF21 , BIBREF18 .", "id": 1885, "question": "Is there any ethical consideration in the research?", "title": "HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews" }, { "answers": [ "" ], "context": "Universal Dependencies (UD) BIBREF0, BIBREF1, BIBREF2 is an ongoing project aiming to develop cross-lingually consistent treebanks for different languages. UD provides a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. The annotation schema relies on Universal Stanford Dependencies BIBREF3 and Google Universal POS tags BIBREF4. The general principle is to provide universal annotation; meanwhile, each language can add language-specific relations to the universal pool when necessary.", "id": 1886, "question": "What low-resource languages were used in this work?", "title": "Cross-Lingual Adaptation Using Universal Dependencies" }, { "answers": [ "" ], "context": "The Universal Dependencies project aims to produce consistent dependency treebanks and parsers for many languages BIBREF0, BIBREF1, BIBREF2. The most important achievements of the project are the cross-lingual annotation guidelines and the sets of universal POS and grammatical relation tags. Consequently, many treebanks have been developed for different languages. 
The general rule of the UD project is to provide a universal tag set; however, each language can add language-specific relations to the universal pool or omit some tags.", "id": 1887, "question": "What classification task was used to evaluate the cross-lingual adaptation method described in this work?", "title": "Cross-Lingual Adaptation Using Universal Dependencies" }, { "answers": [ "" ], "context": "Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. "Ok Google", "Alexa", or "Hey Siri"). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT) – many of them battery-powered or with restricted computational capacity – it is important for the keyword spotting system to be both high-quality and computationally efficient.", "id": 1888, "question": "How many parameters does the presented model have?", "title": "End-to-End Streaming Keyword Spotting" }, { "answers": [ "" ], "context": "This paper proposes a new end-to-end keyword spotting system that, by subsuming both the encoding and decoding components into a single neural network, can be trained to directly produce an estimate (i.e. a score) of the presence of a keyword in streaming audio. The following two sections cover the efficient memoized neural network topology being utilized, as well as the method used to train the end-to-end neural network to directly produce the keyword spotting score.", "id": 1889, "question": "How do they measure the quality of detection?", "title": "End-to-End Streaming Keyword Spotting" }, { "answers": [ "" ], "context": "We make use of a type of neural network layer topology called SVDF (single value decomposition filter), originally introduced in BIBREF14 to approximate a fully connected layer with a low-rank approximation. As proposed in BIBREF14 and depicted in equation EQREF2 , the activation INLINEFORM0 for each node INLINEFORM1 in the rank-1 SVDF layer at a given inference step INLINEFORM2 can be interpreted as performing a mix of selectivity in time ( INLINEFORM3 ) with selectivity in the feature space ( INLINEFORM4 ) over a sequence of input vectors INLINEFORM5 of size INLINEFORM6 . DISPLAYFORM0 ", "id": 1890, "question": "What previous approaches are considered?", "title": "End-to-End Streaming Keyword Spotting" }, { "answers": [ "" ], "context": "Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performance BIBREF1 ; however, their success is limited to settings with rich supervision, which is costly to obtain. 
There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains.", "id": 1891, "question": "How is the back-translation model trained?", "title": "Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning" }, { "answers": [ "" ], "context": "We focus on the task of executable semantic parsing. The goal is to map a natural language question/utterance INLINEFORM0 to a logical form/program INLINEFORM1 , which can be executed over a world INLINEFORM2 to obtain the correct answer INLINEFORM3 .", "id": 1892, "question": "Are the rules dataset specific?", "title": "Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning" }, { "answers": [ "WikiSQL - 2 rules (SELECT, WHERE)\nSimpleQuestions - 1 rule\nSequentialQA - 3 rules (SELECT, WHERE, COPY)" ], "context": "We describe our approach for low-resource neural semantic parsing in this section.", "id": 1893, "question": "How many rules had to be defined?", "title": "Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning" }, { "answers": [ "" ], "context": "Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is used as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol, in which target sequences should follow the real data distribution, yet source sequences can be generated with noise. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structured and follows a grammar, whose distribution is similar to that of the ground truth.", "id": 1894, "question": "What datasets are used in this paper?", "title": "Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning" }, { "answers": [ "" ], "context": "Named Entity Recognition (NER) is a classification task that identifies words in a text that refer to entities (such as dates, person, organization and location names). It is a core task of natural language processing and a component of many downstream applications like search engines, knowledge graphs and personal assistants. For high-resource languages like English, this is a well-studied problem with complex state-of-the-art systems reaching close to or above 90% F1-score on the standard datasets CoNLL03 BIBREF0 and Ontonotes BIBREF1. In recent years, research has been extended to a larger pool of languages including those of developing countries BIBREF2, BIBREF3, BIBREF4, BIBREF5. 
Often, for these languages (like Hausa and Yorùbá studied here), there exists a large population with access to digital devices and the internet (and therefore digital text), but natural language processing (NLP) tools do not support them.", "id": 1895, "question": "How much labeled data is available for these two languages?", "title": "Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yor\`ub\'a" }, { "answers": [ "Bi-LSTM: For low resource <17k clean data: Using distant supervision resulted in huge boost of F1 score (1k eg. ~9 to ~36 with distant supervision)\nBERT: <5k clean data boost of F1 (1k eg. ~32 to ~47 with distant supervision)" ], "context": "The Hausa language is the second most spoken indigenous language in Africa, with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects, but the one regarded as standard Hausa is Kananci, spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g. VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language, but the tones are often ignored in writing; the language is written in a modified Latin alphabet. Despite the status of Hausa as an important regional language in Africa and its popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering.", "id": 1896, "question": "What was performance of classifiers before/after using distant supervision?", "title": "Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yor\`ub\'a" }, { "answers": [ "" ], "context": "The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and Twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it.", "id": 1897, "question": "What classifiers were used in experiments?", "title": "Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yor\`ub\'a" }, { "answers": [ "" ], "context": "In this work, we rely on two sources of distant supervision chosen for their ease of application:", "id": 1898, "question": "In which countries are Hausa and Yor\`ub\'a spoken?", "title": "Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yor\`ub\'a" }, { "answers": [ "" ], "context": "In light of increased vaccine hesitancy in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. 
The advantage is that it allows one to detect and respond to possible vaccine concerns in a timely manner, that it generates continuous data and that it consists of unsolicited, voluntary user-generated content.", "id": 1899, "question": "What is the agreement score of their annotated dataset?", "title": "Monitoring stance towards vaccination in twitter messages" }, { "answers": [ "" ], "context": "We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we will describe the stages of data acquisition, from collection to labeling.", "id": 1900, "question": "What is the size of the labelled dataset?", "title": "Monitoring stance towards vaccination in twitter messages" }, { "answers": [ "" ], "context": "We queried Twitter messages that refer to a vaccination-related key term from TwiNL, a database with IDs of Dutch Twitter messages from January 2012 onwards BIBREF22. In contrast to the open Twitter Search API, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, spanning several years.", "id": 1901, "question": "Which features do they use to model Twitter messages?", "title": "Monitoring stance towards vaccination in twitter messages" }, { "answers": [ "" ], "context": "The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes, we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting.", "id": 1902, "question": "Do they allow for messages with vaccination-related key terms to be of neutral stance?", "title": "Monitoring stance towards vaccination in twitter messages" }, { "answers": [ "Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified" ], "context": "Machine Reading Comprehension (MRC) has been a popular task for testing the reading ability of a machine, requiring it to read text material and answer questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed, and massive progress has been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions have been made, most works deal with English reading comprehension. Reading comprehension in languages other than English has not been well addressed, mainly due to the lack of large-scale training data.", "id": 1903, "question": "How big are the datasets used?", "title": "Cross-Lingual Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Machine Reading Comprehension (MRC) has been a trending research topic in recent years. 
Among various types of MRC tasks, span-extraction reading comprehension (such as SQuAD BIBREF4) has been enormously popular, and we have seen great progress on related neural network approaches BIBREF11, BIBREF12, BIBREF13, BIBREF3, BIBREF14, especially those built on pre-trained language models, such as BERT BIBREF7. While massive achievements have been made by the community, reading comprehension in languages other than English has not been well studied, mainly due to the lack of large-scale training data.", "id": 1904, "question": "Is this a span-based (extractive) QA task?", "title": "Cross-Lingual Machine Reading Comprehension" }, { "answers": [ "" ], "context": "In this section, we illustrate back-translation approaches for cross-lingual machine reading comprehension, which are natural and easy to implement.", "id": 1905, "question": "Are the contexts in a language different from the questions?", "title": "Cross-Lingual Machine Reading Comprehension" }, { "answers": [ "Microsoft Research dataset containing movie, taxi and restaurant domains." ], "context": "In a task-oriented dialogue system, the dialogue manager policy module predicts actions, usually in terms of dialogue acts and domain-specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. Multiple actions have previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17.", "id": 1906, "question": "What datasets are used for training/testing models? ", "title": "Modeling Multi-Action Policy for Task-Oriented Dialogues" }, { "answers": [ "For entity F1 in the movie, taxi and restaurant domains, it achieves scores of 50.86, 64, and 60.35. For success, it outperforms in the movie and restaurant domains with scores of 77.95 and 71.52" ], "context": "The proposed policy network adopts an encoder-decoder architecture (Figure FIGREF5). The input to the encoder is the current-turn dialogue state, which follows BIBREF19's definition. It contains policy actions from the previous turn, user dialogue acts from the current turn, user requested slots, user informed slots, agent requested slots and agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU BIBREF20 to encode it. 
The encoded dialogue state is a sequence of vectors $\mathbf {E} = (e_0, \ldots , e_l)$ and the last hidden state is $h^{E}$. The CAS decoder recurrently generates tuples at each step. It takes $h^{E}$ as the initial hidden state $h_0$. At each decoding step, the input contains the previous (continue, act, slots) tuple $(c_{t-1},a_{t-1},s_{t-1})$. An additional vector $k$ containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple $(c, a, s)$, where $c \in \lbrace \langle \text{continue} \rangle , \langle \text{stop} \rangle , \langle \text{pad} \rangle \rbrace $, $a \in A$ (one act from the act set), and $s \subset S$ (a subset of the slot set).", "id": 1907, "question": "How better is gCAS approach compared to other approaches?", "title": "Modeling Multi-Action Policy for Task-Oriented Dialogues" }, { "answers": [ "It has three sequentially connected units to output continue, act and slots, generating multi-acts in a double recurrent manner." ], "context": "As shown in Figure FIGREF7, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots, respectively.", "id": 1908, "question": "What is specific to gCAS cell?", "title": "Modeling Multi-Action Policy for Task-Oriented Dialogues" }, { "answers": [ "" ], "context": "The question of how human beings resolve pronouns has long been of interest to both the linguistics and natural language processing (NLP) communities, because pronouns themselves have weak semantic meaning BIBREF0 and bring challenges to natural language understanding. To explore solutions for that question, pronoun coreference resolution BIBREF1 was proposed. As a vital sub-task of the general coreference resolution task, pronoun coreference resolution aims to find the correct reference for a given pronominal anaphor in the context, and has been shown to be crucial for a series of downstream tasks BIBREF2 , including machine translation BIBREF3 , summarization BIBREF4 , information extraction BIBREF5 , and dialog systems BIBREF6 .", "id": 1909, "question": "What dataset do they evaluate their model on?", "title": "Incorporating Context and External Knowledge for Pronoun Coreference Resolution" }, { "answers": [ "counts of predicate-argument tuples from English Wikipedia" ], "context": "Following the conventional setting BIBREF1 , the task of pronoun coreference resolution is defined as follows: for a pronoun $p$ and a candidate noun phrase set ${\mathcal {N}}$ , the goal is to identify the correct set of non-pronominal references ${\mathcal {C}}$ . The objective is to maximize the following objective function: ", "id": 1910, "question": "What is the source of external knowledge?", "title": "Incorporating Context and External Knowledge for Pronoun Coreference Resolution" }, { "answers": [ "" ], "context": " When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity BIBREF0 , local interactions (e.g., local neighborhoods) BIBREF1 , and high-level properties such as community structure BIBREF2 . 
Relative to classical network-representation learning schemes BIBREF3 , network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection).", "id": 1911, "question": "Which of their proposed attention methods works better overall?", "title": "Improving Textual Network Embedding with Global Attention via Optimal Transport" }, { "answers": [ "" ], "context": " We introduce basic notation and definitions used in this work.", "id": 1912, "question": "Which dataset of texts do they use?", "title": "Improving Textual Network Embedding with Global Attention via Optimal Transport" }, { "answers": [ "" ], "context": "", "id": 1913, "question": "Do they measure how well they perform on longer sequences specifically?", "title": "Improving Textual Network Embedding with Global Attention via Optimal Transport" }, { "answers": [ "" ], "context": " To capture both the topological information (network structure INLINEFORM0 ) and the semantic information (text content INLINEFORM1 ) in the textual network embedding, we explicitly model two types of embeddings for each node INLINEFORM2 : ( INLINEFORM3 ) the topological embedding INLINEFORM4 , and ( INLINEFORM5 ) the semantic embedding INLINEFORM6 . The final embedding is constructed by concatenating the topological and semantic embeddings, i.e., INLINEFORM7 . We consider the topological embedding INLINEFORM8 to be a static property of the node, fixed regardless of the context. On the other hand, the semantic embedding INLINEFORM9 dynamically depends on the context, which is the focus of this study.", "id": 1914, "question": "Which other embeddings do they compare against?", "title": "Improving Textual Network Embedding with Global Attention via Optimal Transport" }, { "answers": [ "Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences" ], "context": "Preventable adverse drug reactions (ADRs) pose a growing concern in the modern healthcare system, as they represent a large fraction of hospital admissions and play a significant role in increased health care costs BIBREF0 . Based on a study examining hospital admission data, it is estimated that approximately three to four percent of hospital admissions are caused by adverse events BIBREF1 ; moreover, it is estimated that between 53% and 58% of these events were due to medical errors BIBREF2 (and are therefore considered preventable). Such preventable adverse events have been cited as the eighth leading cause of death in the U.S., with an estimated death toll of between 44,000 and 98,000 each year BIBREF3 . As drug-drug interactions (DDIs) may lead to preventable ADRs, being able to extract DDIs from structured product labeling (SPL) documents for prescription drugs is an important effort toward the effective dissemination of drug safety information. The Text Analysis Conference (TAC) is a series of workshops aimed at encouraging research in natural language processing (NLP) and related applications by providing large test collections along with a standard evaluation procedure. The Drug-Drug Interaction Extraction from Drug Labels track of TAC 2018 BIBREF4 , organized by the U.S. Food and Drug Administration (FDA) and U.S. 
National Library of Medicine (NLM), was established with the goal of transforming the contents of SPLs into a machine-readable format with linkage to standard terminologies.", "id": 1915, "question": "What were the sizes of the test sets?", "title": "A Multi-Task Learning Framework for Extracting Drugs and Their Interactions from Drug Labels" }, { "answers": [ "" ], "context": "Herein, we describe the training and testing data involved in this task and the metrics used for evaluation. In Section SECREF5 , we describe our modeling approach, our deep learning architecture, and our training procedure.", "id": 1916, "question": "What training data did they use?", "title": "A Multi-Task Learning Framework for Extracting Drugs and Their Interactions from Drug Labels" }, { "answers": [ "" ], "context": "The specificity of a sentence measures its “quality of belonging or relating uniquely to a particular subject” BIBREF0 . It is often pragmatically defined as the level of detail in the sentence BIBREF1 , BIBREF2 . When communicating, specificity is adjusted to serve the intentions of the writer or speaker BIBREF3 . In the examples below, the second sentence is clearly more specific than the first one:", "id": 1917, "question": "What domains do they experiment with?", "title": "Domain Agnostic Real-Valued Specificity Prediction" }, { "answers": [ "" ], "context": "Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, can provide a stepping stone toward more real-world environments where agents must communicate to understand the state of the world and effect change in the world. Despite the steadily increasing body of research on text-adventure games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and the ubiquity of deep reinforcement learning applications BIBREF8, BIBREF9, teaching an agent to play text-adventure games remains a challenging task. Learning a control policy for a text-adventure game requires a significant amount of exploration, resulting in training runs that take hundreds of thousands of simulations BIBREF2, BIBREF7.", "id": 1918, "question": "What games are used to test author's methods?", "title": "Transfer in Deep Reinforcement Learning using Knowledge Graphs" }, { "answers": [ "" ], "context": "Text-adventure games, in which an agent must interact with the world entirely through natural language, provide us with two challenges that have proven difficult for deep reinforcement learning to solve BIBREF2, BIBREF4, BIBREF7: (1) The agent must act based only on potentially incomplete textual descriptions of the world around it. The world is thus partially observable, as the agent does not have access to the state of the world at any stage. (2) The action space is combinatorially large—a consequence of the agent having to declare commands in natural language. 
These two problems together have kept commercial text adventure games out of the reach of existing deep reinforcement learning methods, especially given the fact that most of these methods attempt to train on a particular game from scratch.", "id": 1919, "question": "How is the domain knowledge transfer represented as knowledge graph?", "title": "Transfer in Deep Reinforcement Learning using Knowledge Graphs" }, { "answers": [ "" ], "context": "Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, for companies' reputation management and brand monitoring, and for tracking attitudes by mining social media. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods.", "id": 1920, "question": "What was the baseline?", "title": "Language Independent Sequence Labelling for Opinion Target Extraction" }, { "answers": [ "ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps" ], "context": "Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed this task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 , which used a dependency parser to obtain more opinion targets, and BIBREF10 , which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.", "id": 1921, "question": "Which datasets are used?", "title": "Language Independent Sequence Labelling for Opinion Target Extraction" }, { "answers": [ "" ], "context": "Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task seven more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across six languages and the three different ABSA editions. 
Similarly, this section will be focused on reviewing the OTE results for the restaurant domain.", "id": 1922, "question": "Which six languages are experimented with?", "title": "Language Independent Sequence Labelling for Opinion Target Extraction" }, { "answers": [ "" ], "context": "The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used.", "id": 1923, "question": "What shallow local features are extracted?", "title": "Language Independent Sequence Labelling for Opinion Target Extraction" }, { "answers": [ "" ], "context": "One of the principal challenges in computational linguistics is to account for the word order of the document or utterance being processed BIBREF0 . Of course, the number of possible phrases grows exponentially with respect to a given phrase length, requiring an approximate approach to summarizing its content. rnn are such an approach, and they are used in various tasks in nlp, such as machine translation BIBREF1 , abstractive summarization BIBREF2 and question answering BIBREF3 . However, rnn, as approximations, suffer from numerical troubles that have been identified, such as difficulty recovering from past errors when generating phrases. We take interest in a model that mitigates this problem, mrnn, and in how it has been, and can be, combined into new models. To evaluate these models, we use the task of recurrent language modeling, which consists in predicting the next token (character or word) in a document. This paper is organized as follows: rnn and mrnn are introduced respectively in Sections SECREF2 and SECREF3 . Section SECREF4 presents new and existing multiplicative models. Section SECREF5 describes the datasets and experiments performed, as well as the results obtained. Section SECREF6 discusses and concludes our findings.", "id": 1924, "question": "Do they compare results against state-of-the-art language models?", "title": "Multiplicative Models for Recurrent Language Modeling" }, { "answers": [ "" ], "context": "rnn are powerful tools of sequence modeling that can preserve the order of words or characters in a document. A document is therefore a sequence of words, INLINEFORM0 . Given the exponential growth of possible histories with respect to the sequence length, the probability of observing a given sequence needs to be approximated. rnn will make this approximation using the product rule, INLINEFORM1 ", "id": 1925, "question": "Do they integrate the second-order term in the mLSTM?", "title": "Multiplicative Models for Recurrent Language Modeling" }, { "answers": [ "" ], "context": "Most recurrent neural network architectures, including lstm and gru, share the following building block: DISPLAYFORM0 ", "id": 1926, "question": "Which dataset do they train their models on?", "title": "Multiplicative Models for Recurrent Language Modeling" }, { "answers": [ "$1,728" ], "context": "Environmental concerns about machine learning research have been rising as the carbon emission of certain tasks like neural architecture search has reached an exceptional “ocean boiling” level BIBREF7. Increased carbon emission has been one of the key factors aggravating global warming. Research and development processes like parameter search further increase the environmental impact. 
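As a reading aid for the recurrent language modeling passages above (ids 1925–1926), where the displayed equations were lost in extraction, the standard forms they refer to are sketched below; the multiplicative intermediate state follows the usual mRNN formulation and is an assumption, possibly differing in detail from the paper's exact display.

```latex
% Product-rule factorization approximated by recurrent language models
p(w_1, \dots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_1, \dots, w_{t-1})

% Generic recurrent building block shared by LSTM- and GRU-style cells
h_t = \phi\!\left(W_x x_t + W_h h_{t-1} + b\right)

% Multiplicative RNN variant: the input modulates the recurrence
% through an elementwise product
m_t = (W_{mx} x_t) \odot (W_{mh} h_{t-1})
```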
When using cloud-based machines, the environmental impact is strongly correlated with budget.", "id": 1927, "question": "How much does it minimally cost to fine-tune some model according to benchmarking framework?", "title": "HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing" }, { "answers": [ "BERT, XLNET RoBERTa, ALBERT, DistilBERT" ], "context": "For the pretraining phase, the benchmark is designed to favor energy-efficient models in terms of the time and cost that each model takes to reach a certain multi-task performance when pretrained from scratch. For example, we keep track of the time and cost of a BERT model pretrained from scratch. After every thousand pretraining steps, we clone the model for fine-tuning and see if the final performance can reach our cut-off level. When the level is reached, the time and cost of pretraining are used for comparison. Models faster or cheaper to pretrain are recommended.", "id": 1928, "question": "What models are included in baseline benchmarking results?", "title": "HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing" }, { "answers": [ "" ], "context": "Smart conversational agents are increasingly used across business domains BIBREF0 . We focus on recruitment chatbots that connect recruiters and job-seekers. The recruiter teams we work with are motivated by reasons of scale and accessibility to build and maintain chatbots that provide answers to frequently asked questions (FAQs) based on ML/NLP datasets. Our enterprise clients may have up to INLINEFORM0 employees, and a commensurate hiring rate. We have found that almost INLINEFORM1 of end-user (job-seeker) traffic occurs outside of working hours BIBREF1 , which is consistent with the anecdotal reports of our clients that using the chatbot helped reduce email and ticket inquiries of common FAQs. The usefulness of these question-answering conversational UIs depends on building and maintaining the ML/NLP components used in the overall flow (see Fig. FIGREF4 ).", "id": 1929, "question": "did they compare with other evaluation metrics?", "title": "Evaluation and Improvement of Chatbot Text Classification Data Quality Using Plausible Negative Examples" }, { "answers": [ "" ], "context": "Chatbots, or “text messaging-based conversational agents”, have received particular attention in the 2010s BIBREF0 . Many modern text-based chatbots use relatively simple NLP tools BIBREF7 , or avoid ML/NLP altogether BIBREF2 , relying on conversation flow design and non-NLP inputs like buttons and quick-replies. Conversational natural-language interfaces for question-answering have an extensive history, which distinguishes open-domain and closed-domain systems BIBREF8 . ML-based chatbots rely on curated data to provide examples for classes (commonly, “intents”), and must balance being widely accessible to many end-users with being specialized in the domain and application goal BIBREF9 . In practice, the design and development of a chatbot might assume a domain more focused than, or different from, what real use reveals.", "id": 1930, "question": "which datasets were used in validation?", "title": "Evaluation and Improvement of Chatbot Text Classification Data Quality Using Plausible Negative Examples" }, { "answers": [ "using multiple pivot sentences" ], "context": "Enabling computers to automatically answer questions posed in natural language on any domain or topic has been the focus of much research in recent years. 
Question answering (QA) is challenging due to the many different ways natural language expresses the same information need. As a result, small variations in semantically equivalent questions may yield different answers. For example, a hypothetical QA system must recognize that the questions “who created microsoft” and “who started microsoft” have the same meaning and that they both convey the founder relation in order to retrieve the correct answer from a knowledge base.", "id": 1931, "question": "It looks like learning to paraphrase questions, a neural scoring model and an answer selection model cannot be trained end-to-end. How are they trained?", "title": "Learning to Paraphrase for Question Answering" }, { "answers": [ "" ], "context": "Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural language understanding is for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the fields of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that combine visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others.", "id": 1932, "question": "What multimodal representations are used in the experiments?", "title": "Evaluating Multimodal Representations on Visual Semantic Textual Similarity" }, { "answers": [ "" ], "context": "The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, they are in contradiction, or none of the previous BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we present the task in more detail in the next section.", "id": 1933, "question": "How much better is inference that has addition of image representation compared to text-only representations? ", "title": "Evaluating Multimodal Representations on Visual Semantic Textual Similarity" }, { "answers": [ "" ], "context": "STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity among sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. 
Intermediate values reflect interpretable levels of partial overlap in meaning.", "id": 1934, "question": "How do they compute similarity between the representations?", "title": "Evaluating Multimodal Representations on Visual Semantic Textual Similarity" }, { "answers": [ "" ], "context": "The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage.", "id": 1935, "question": "How big is vSTS training data?", "title": "Evaluating Multimodal Representations on Visual Semantic Textual Similarity" }, { "answers": [ "" ], "context": "Recent successes in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models.", "id": 1936, "question": "Which evaluation metrics do they use for language modelling?", "title": "Exploring Multilingual Syntactic Sentence Representations" }, { "answers": [ "" ], "context": "Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality.", "id": 1937, "question": "Do they do quantitative quality analysis of learned embeddings?", "title": "Exploring Multilingual Syntactic Sentence Representations" }, { "answers": [ "" ], "context": "We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of the last hidden and cell states of the encoder to initialize the decoder through a linear projection.", "id": 1938, "question": "Do they evaluate on downstream tasks?", "title": "Exploring Multilingual Syntactic Sentence Representations" }, { "answers": [ "" ], "context": "Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we:", "id": 1939, "question": "Which corpus do they use?", "title": "Exploring Multilingual Syntactic Sentence Representations" }, { "answers": [ "For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%. 
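To make the encoder–decoder hand-off described in the LASER-variant passage above (id 1938) concrete, here is a minimal PyTorch sketch of a 2-layer BiLSTM encoder whose final hidden and cell states are concatenated and linearly projected to initialize a 1-layer LSTM decoder; all dimensions and the tanh nonlinearity are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class Seq2SeqInit(nn.Module):
    """Sketch: concatenate the encoder's last hidden and cell states and
    project them to initialize the decoder, as described for id 1938."""
    def __init__(self, emb_dim=320, enc_dim=512, dec_dim=2048):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, enc_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, dec_dim, num_layers=1,
                               batch_first=True)
        # top BiLSTM layer yields 2 directions x (h, c) -> 4 * enc_dim
        self.proj_h = nn.Linear(4 * enc_dim, dec_dim)
        self.proj_c = nn.Linear(4 * enc_dim, dec_dim)

    def forward(self, src_emb, tgt_emb):
        _, (h, c) = self.encoder(src_emb)            # h, c: (4, B, enc_dim)
        last_h = torch.cat([h[-2], h[-1]], dim=-1)   # fwd/bwd of top layer
        last_c = torch.cat([c[-2], c[-1]], dim=-1)
        state = torch.cat([last_h, last_c], dim=-1)  # (B, 4*enc_dim)
        h0 = torch.tanh(self.proj_h(state)).unsqueeze(0)
        c0 = torch.tanh(self.proj_c(state)).unsqueeze(0)
        out, _ = self.decoder(tgt_emb, (h0, c0))
        return out
```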
" ], "context": "Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two types of question: factoid and non-factoid ones. The former sort asks, for instance, for the name of a thing or person such as “What/Who is $X$?”. The latter sort includes more diverse questions that cannot be answered by a short fact. For instance, users may ask for advice on how to make a long-distance relationship work well or for opinions on public issues. Significant progress has been made in answering factoid questions BIBREF0, BIBREF1; however, answering non-factoid questions remains a challenge for QA modules.", "id": 1940, "question": "How much more accurate is the model than the baseline?", "title": "Conclusion-Supplement Answer Generation for Non-Factoid Questions" }, { "answers": [ "" ], "context": "The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on.", "id": 1941, "question": "How big is improvement over the old state-of-the-art performance on CoNLL-2009 dataset?", "title": "Syntax-Enhanced Self-Attention-Based Semantic Role Labeling" }, { "answers": [ "In closed setting 84.22 F1 and in open 87.35 F1." ], "context": "Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26.", "id": 1942, "question": "What is new state-of-the-art performance on CoNLL-2009 dataset?", "title": "Syntax-Enhanced Self-Attention-Based Semantic Role Labeling" }, { "answers": [ "" ], "context": "In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method.", "id": 1943, "question": "How big is CoNLL-2009 dataset?", "title": "Syntax-Enhanced Self-Attention-Based Semantic Role Labeling" }, { "answers": [ "" ], "context": "Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. 
The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5.", "id": 1944, "question": "What different approaches of encoding syntactic information authors present?", "title": "Syntax-Enhanced Self-Attention-Based Semantic Role Labeling" }, { "answers": [ "Marcheggiani and Titov (2017) and Cai et al. (2018)" ], "context": "The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding.", "id": 1945, "question": "What are two strong baseline methods authors refer to?", "title": "Syntax-Enhanced Self-Attention-Based Semantic Role Labeling" }, { "answers": [ "14 categories" ], "context": "The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence.", "id": 1946, "question": "How many category tags are considered?", "title": "Multi-modal Dense Video Captioning" }, { "answers": [ "YouTube videos" ], "context": "Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features, accumulates its hidden state, which is passed to a decoder for producing a caption.", "id": 1947, "question": "What domain does the dataset fall into?", "title": "Multi-modal Dense Video Captioning" }, { "answers": [ "" ], "context": "Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced a problem of dense video captioning and released a new dataset called ActivityNet Captions which leveraged the research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of the context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts as well as an attentive fusion to differentiate captions from highly overlapping events. 
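As an illustration of the input layer described above for the self-attention SRL model (id 1945), the following is a minimal sketch; combining the three embeddings by summation and the sizes used are assumptions, since the quoted passage does not specify them (the paper may concatenate instead).

```python
import torch
import torch.nn as nn

class SRLInputLayer(nn.Module):
    """Sketch of an input layer with token, predicate-indicator, and
    positional embeddings, combined per token by summation (assumed)."""
    def __init__(self, vocab_size, d_model=256, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pred = nn.Embedding(2, d_model)   # 1 if token is the predicate
        self.pos = nn.Embedding(max_len, d_model)

    def forward(self, token_ids, predicate_mask):
        # token_ids, predicate_mask: (batch, seq_len) integer tensors
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.tok(token_ids)
                + self.pred(predicate_mask)
                + self.pos(positions).unsqueeze(0))
```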
Meanwhile, the concept of the Single Shot Detector (SSD) BIBREF39 was also used to generate event proposals, with reward maximization for better captioning, in BIBREF6.", "id": 1948, "question": "What ASR system do they use?", "title": "Multi-modal Dense Video Captioning" }, { "answers": [ "" ], "context": "A few attempts have been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for the dense video captioning task. Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the high results on a dataset which is restricted to instructional video appear inconclusive, as the speech and the captions are already very close to each other in such videos BIBREF41.", "id": 1949, "question": "What is the state of the art?", "title": "Multi-modal Dense Video Captioning" }, { "answers": [ "" ], "context": "Ultrasound is a widespread technology in speech research for studying tongue movement and speech articulation BIBREF0 due to its attractive characteristics, such as imaging at a reasonably rapid frame rate, which empowers researchers to envision subtle and swift gestures of the tongue in real-time. Besides, ultrasound technology is portable, relatively affordable, and clinically safe and non-invasive BIBREF1. The mid-sagittal view is regularly adopted in ultrasound data as it displays relative backness, height, and the slope of various areas of the tongue. Quantitative analysis of tongue motion needs the tongue contour to be extracted, tracked, and visualized.", "id": 1950, "question": "How big are datasets used in experiments?", "title": "Deep Learning for Automatic Tracking of Tongue Surface in Real-time Ultrasound Videos, Landmarks instead of Contours" }, { "answers": [ "" ], "context": "Similar to facial landmark detection methods BIBREF15, we considered the problem of tongue contour extraction as a simple landmark detection and tracking problem. For this reason, we first developed customized annotation software that can extract equally separated and randomized markers from segmented tongue contours in different databases. Meanwhile, the same software could fit B-spline curves on the extracted markers to revert the process for evaluation purposes.", "id": 1951, "question": "What previously annotated databases are available?", "title": "Deep Learning for Automatic Tracking of Tongue Surface in Real-time Ultrasound Videos, Landmarks instead of Contours" }, { "answers": [ "" ], "context": "No physicists! ...the symbol INLINEFORM0 above does not stand for the operation that turns two Hilbert spaces into the smallest Hilbert space in which the two given ones bilinearly embed. No category-theoreticians! ...neither does it stand for the composition operation that turns any pair of objects (and morphisms) in a monoidal category into another object, and that is subject to a horrendous bunch of conditions that guarantee coherence with the remainder of the structure. 
Instead, this is what it means: INLINEFORM1 ", "id": 1952, "question": "Do they address abstract meanings and concepts separately?", "title": "From quantum foundations via natural language meaning to a theory of everything" }, { "answers": [ "" ], "context": "So, how does one go about formalising the concept of togetherness? While we don't want an explicit description of the foo involved, we do need some kind of means for identifying foo. Therefore, we simply give each foo a name, say INLINEFORM0 . Then, INLINEFORM1 represents the togetherness of INLINEFORM2 and INLINEFORM3 . We also don't want an explicit description of INLINEFORM4 , so how can we say anything about INLINEFORM5 without explicitly describing INLINEFORM6 , INLINEFORM7 and INLINEFORM8 ?", "id": 1953, "question": "Do they argue that all words can be derived from other (elementary) words?", "title": "From quantum foundations via natural language meaning to a theory of everything" }, { "answers": [ "" ], "context": "But we can still do a lot better. What a resource theory fails to capture (on purpose in fact) is the actual process that converts one resource into another one. So let's fix that problem, and explicitly account for processes.", "id": 1954, "question": "Do they break down word meanings into elementary particles as in the standard model of quantum theory?", "title": "From quantum foundations via natural language meaning to a theory of everything" }, { "answers": [ "" ], "context": "Video captioning has drawn more attention and shown promising results recently. To translate content-rich video into human language is an extremely complex task, which should not only extract abundant multi-modal information from video but also cross the semantic gap to generate accurate and fluent language. Thanks to the recent developments of useful deep learning frameworks, such as LSTM BIBREF1 networks, as well as of machine translation techniques such as BIBREF2, the dominant approach in video captioning is currently based on sequence learning using an encoder-decoder framework.", "id": 1955, "question": "How big is the dataset used?", "title": "VATEX Captioning Challenge 2019: Multi-modal Information Fusion and Multi-stage Training Strategy for Video Captioning" }, { "answers": [ "" ], "context": "Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest.", "id": 1956, "question": "How do they prove that multi-head self-attention is at least as powerful as a convolution layer? 
", "title": "On the Relationship between Self-Attention and Convolutional Layers" }, { "answers": [ "" ], "context": "In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers:", "id": 1957, "question": "Is there a way of converting existing convolution layers into self-attention to perform very same convolution?", "title": "On the Relationship between Self-Attention and Convolutional Layers" }, { "answers": [ "" ], "context": "We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings.", "id": 1958, "question": "What authors mean by sufficient number of heads?", "title": "On the Relationship between Self-Attention and Convolutional Layers" }, { "answers": [ "" ], "context": "Let $\\in ^{T\\times D_{\\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \\in [T]$ from $D_{\\textit {in}}$ to $D_{\\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \\times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\\!\\textit {qry}}\\in ^{D_{\\textit {in}} \\times D_{k}}$, a key matrix $_{\\!\\textit {key}}\\in ^{D_{\\textit {in}} \\times D_{k}}$ and a value matrix $_{\\!\\textit {val}}\\in ^{D_{\\textit {in}} \\times D_{\\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\\in ^{T \\times D_{\\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position.", "id": 1959, "question": "Is there any nonnumerical experiment that also support author's claim, like analysis of attention layers in publicly available networks? ", "title": "On the Relationship between Self-Attention and Convolutional Layers" }, { "answers": [ "" ], "context": "Convolutional layers are the de facto choice for building neural networks that operate on images. 
We recall that, given an image tensor $\mathbf {X}\in \mathbb {R}^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by $\operatorname{Conv}(\mathbf {X})_{i,j,:} := \sum _{(\delta _1, \delta _2) \in \Delta _K} \mathbf {K}_{\delta _1,\delta _2,:,:}\,\mathbf {X}_{i+\delta _1, j+\delta _2, :} + \mathbf {b}$, where $\mathbf {K}$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor, $\mathbf {b}\in \mathbb {R}^{D_{\textit {out}}}$ is the bias vector and the set", "id": 1960, "question": "What numerical experiments do they perform?", "title": "On the Relationship between Self-Attention and Convolutional Layers" }, { "answers": [ "" ], "context": "How infants discover the words of their native languages is a long-standing question in developmental psychology BIBREF0 . Machine learning has contributed much to this discussion by showing that predictive models of language are capable of inferring the existence of word boundaries solely based on statistical properties of the input BIBREF1 , BIBREF2 , BIBREF3 . Unfortunately, the best language models, measured in terms of their ability to model language, segment quite poorly BIBREF4 , BIBREF5 , while the strongest models in terms of word segmentation are far too weak to adequately predict language BIBREF3 , BIBREF6 . Moreover, since language acquisition is ultimately a multimodal process, neural models which simplify working with multimodal data offer opportunities for future research. However, as BIBREF7 have argued, current neural models' inability to discover meaningful words is too far behind the current (non-neural) state-of-the-art to be a useful foundation.", "id": 1961, "question": "What dataset is used?", "title": "Learning to Discover, Ground and Use Words with Segmental Neural Language Models" }, { "answers": [ "" ], "context": "We now describe the segmental neural language model (SNLM). Refer to Figure FIGREF1 for an illustration. The SNLM generates a character sequence INLINEFORM0 , where each INLINEFORM1 is a character in a finite character set INLINEFORM2 . Each sequence INLINEFORM3 is the concatenation of a sequence of segments INLINEFORM4 where INLINEFORM5 measures the length of the sequence in segments and each segment INLINEFORM6 is a sequence of characters, INLINEFORM7 . Intuitively, each INLINEFORM8 corresponds to one word. Let INLINEFORM9 represent the concatenation of the characters of the segments INLINEFORM10 to INLINEFORM11 , discarding segmentation information; thus INLINEFORM12 . For example if INLINEFORM13 , the underlying segmentation might be INLINEFORM14 (with INLINEFORM15 and INLINEFORM16 ), or INLINEFORM17 , or any of the INLINEFORM18 segmentation possibilities for INLINEFORM19 .", "id": 1962, "question": "What language do they look at?", "title": "Learning to Discover, Ground and Use Words with Segmental Neural Language Models" }, { "answers": [ "" ], "context": "Text classification can be categorized according to the text length of the data, from sentence-level classification BIBREF0 to document-level classification BIBREF1, BIBREF2. One kind of such task is sentiment classification BIBREF3, a subtask of sentiment analysis BIBREF4, BIBREF5 where we are to predict the sentiment/rating given a review written by a user. In some domains, the length of these reviews varies widely. For example, well-known review websites in East Asia such as Naver Movies and Douban Movies provide two channels for users to write reviews, depending on their preferred length. 
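For comparison with the self-attention sketch above, the convolution formula quoted at the start of this passage (id 1960) can also be transcribed literally; zero padding and a centered shift set are assumptions made for illustration.

```python
import numpy as np

def conv2d_naive(X, K, b):
    """Direct transcription of Conv(X)[i,j,:] = sum over shifts (d1,d2) of
    K[d1,d2,:,:] @ X[i+d1, j+d2, :] + b, with assumed zero padding."""
    W, H, D_in = X.shape
    k, _, D_out, _ = K.shape                    # K: (k, k, D_out, D_in)
    r = k // 2
    Xp = np.pad(X, ((r, r), (r, r), (0, 0)))    # zero-pad spatial dims
    out = np.zeros((W, H, D_out))
    for i in range(W):
        for j in range(H):
            for d1 in range(k):
                for d2 in range(k):
                    out[i, j] += K[d1, d2] @ Xp[i + d1, j + d2]
            out[i, j] += b
    return out

X = np.random.randn(5, 5, 3)
out = conv2d_naive(X, np.random.randn(3, 3, 4, 3), np.zeros(4))
assert out.shape == (5, 5, 4)
```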
Figure FIGREF4 shows the review channels provided in Naver Movies.", "id": 1963, "question": "What diverse domains and languages are present in new datasets?", "title": "Text Length Adaptation in Sentiment Classification" }, { "answers": [ "" ], "context": "In this paper, we study semantic role labelling (SRL), a subtask of semantic parsing of natural language sentences. SRL is the task of identifying semantic roles of arguments of each predicate in a sentence. In particular, it answers the question Who did what to whom, when, where, why? For each predicate in a sentence, the goal is to identify all constituents that fill a semantic role, and to determine their roles, such as agent, patient, or instrument, and their adjuncts, such as locative, temporal or manner.", "id": 1964, "question": "Are their corpus and software public?", "title": "Vietnamese Semantic Role Labelling" }, { "answers": [ "Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement." ], "context": "Development of conversational agents, or dialogue systems, has been gaining more attention from both industry and academia BIBREF0 , BIBREF1 in recent years. Some works tried to model them for domain-specific tasks such as customer service BIBREF2 , BIBREF3 , and shopping assistance BIBREF4 . Other works design multi-purpose agents such as Siri, Amazon Alexa, and Google Assistant. This domain is a well-researched area in the Human-Computer Interaction research community but remains a hot topic now. The main development focus right now is to build an intelligent and human-like machine that achieves better engagement when communicating with humans BIBREF5 . Better engagement leads to higher user satisfaction, which is the main objective from the industry perspective.", "id": 1965, "question": "How are EAC evaluated?", "title": "Emotionally-Aware Chatbots: A Survey" }, { "answers": [ "" ], "context": "The early development of chatbots was inspired by the Turing test in 1950 BIBREF20 . Eliza was the first publicly known chatbot, built using simple hand-crafted scripts BIBREF21 . Parry BIBREF22 was another chatbot which successfully passed the Turing test. Similar to Eliza, Parry still uses a rule-based approach but with a better understanding, including a mental model that can simulate emotion. Parry was thus the first chatbot to involve emotion in its development. Also worth mentioning is ALICE (Artificial Linguistic Internet Computer Entity), a customizable chatbot built using the Artificial Intelligence Markup Language (AIML). ALICE likewise uses a rule-based approach, executing a pattern-matcher recursively to obtain the response. Then in May 2014, Microsoft introduced XiaoIce BIBREF23 , an empathetic social chatbot which is able to recognize users' emotional needs. XiaoIce can provide engaging interpersonal communication by giving encouragement or other affective messages, so that it can hold human attention during communication.", "id": 1966, "question": "What are the currently available datasets for EAC?", "title": "Emotionally-Aware Chatbots: A Survey" }, { "answers": [ "" ], "context": "As mentioned before, emotion is an essential aspect of building a humanized chatbot. The rise of emotionally-aware chatbots started with Parry BIBREF22 in early 1975. Now, most EAC development exploits neural-based models. 
In this section, we review previous works that focus on EAC development. Table TABREF10 summarizes this information, including the objective and the approach exploited in each work. In its early development, EAC was designed using a rule-based approach. However, in recent years most EAC work has exploited neural-based approaches. EAC development became a hot topic starting from 2017, marked by the first shared task, the Emotion Generation Challenge at NLPCC 2017 BIBREF31 . As Table TABREF10 shows, this research line has continued to gain massive attention from scholars in recent years.", "id": 1967, "question": "What are the research questions posed in the paper regarding EAC studies?", "title": "Emotionally-Aware Chatbots: A Survey" }, { "answers": [ "" ], "context": "Continuous distributional vectors for representing words (embeddings) BIBREF0 have become ubiquitous in modern, neural NLP. Cross-lingual representations BIBREF1 additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging BIBREF2, parsing BIBREF3, document classification BIBREF4, and machine translation BIBREF5, BIBREF6, BIBREF7.", "id": 1968, "question": "What evaluation metrics did they use?", "title": "Should All Cross-Lingual Embeddings Speak English?" }, { "answers": [ "Answer with content missing: (Chapter 3) The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 7 possible Cs translations for the Pt word trabalho." ], "context": "In the supervised bilingual setting, as formulated by BIBREF1, given two languages $\mathcal {L} = \lbrace l_1,l_2\rbrace $ and their pre-trained row-aligned embeddings $\mathcal {X}_1, \mathcal {X}_2,$ respectively, a transformation matrix $\mathbf {W}$ is learned such that:", "id": 1969, "question": "What is triangulation?", "title": "Should All Cross-Lingual Embeddings Speak English?" }, { "answers": [ "" ], "context": "One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs, with the MUSE dictionaries BIBREF12 most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese).", "id": 1970, "question": "What languages are explored in this paper?", "title": "Should All Cross-Lingual Embeddings Speak English?" }, { "answers": [ "" ], "context": "Named-entity recognition (NER) is an information extraction (IE) task that aims to detect and categorize entities into pre-defined types in a text. On the other hand, the goal of text categorization (TC) is to assign correct categories to texts based on their content. Most NER and TC studies focus on English, hence accessing available English datasets is not an issue. 
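The triangulation idea described in the answer for id 1969 amounts to composing two translation dictionaries through a pivot language via the (assumed) transitivity of translation; a minimal sketch follows, where the toy entries are hypothetical rather than taken from the MUSE dictionaries.

```python
def triangulate(src_to_pivot, pivot_to_tgt):
    """Compose source->pivot and pivot->target translation dictionaries."""
    out = {}
    for src_word, pivots in src_to_pivot.items():
        tgts = set()
        for p in pivots:
            tgts.update(pivot_to_tgt.get(p, ()))
        if tgts:
            out[src_word] = sorted(tgts)
    return out

# toy illustration (hypothetical entries)
pt_en = {"trabalho": ["job", "work"]}
en_cs = {"job": ["práce", "zaměstnání"], "work": ["práce", "dílo", "pracovat"]}
print(triangulate(pt_en, en_cs))
# {'trabalho': ['dílo', 'pracovat', 'práce', 'zaměstnání']}
```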
However, the annotated datasets for Turkish NER and TC are scarce. It is hard to manually construct datasets for these tasks due to excessive human effort, time and budget. In this paper, our motivation is to construct an automatically annotated dataset that would be very useful for NER and TC research in Turkish.", "id": 1971, "question": "Did they experiment with the dataset on some tasks?", "title": "Automatically Annotated Turkish Corpus for Named Entity Recognition and Text Categorization using Large-Scale Gazetteers" }, { "answers": [ "" ], "context": "Sentiment analysis or opinion mining is a type of text classification that assigns a sentiment orientation to documents based on the detected contextual polarity BIBREF0. In the past, research work has focused only on the overall sentiment of a document, trying to determine if the entire text is positive, neutral or negative BIBREF1. However, besides predicting the general sentiment, a better understanding of the reviews could be undertaken using the more in-depth aspect based sentiment analysis (ABSA) BIBREF2, BIBREF1. Specifically, ABSA’s goal is to predict the sentiment polarity of the corpus’ target entities and aspects (e.g. the possible aspects of the entity “movie” could be “the plot”, “the actors’ acting” or “the special effects”). While it is easy to establish the text entities based on a priori information about the text corpus, the aspect terms are difficult to infer and usually the training process requires the use of a predefined set of term categories BIBREF3, BIBREF4, BIBREF5.", "id": 1972, "question": "How much better does the hybrid tiled CNN model perform than its counterparts?", "title": "Hybrid Tiled Convolutional Neural Networks for Text Sentiment Classification" }, { "answers": [ "" ], "context": "When we read news text with emerging entities, text in unfamiliar domains, or text in foreign languages, we often encounter expressions (words or phrases) whose senses we are unsure of. In such cases, we may first try to examine other usages of the same expression in the text, in order to infer its meaning from this context. Failing to do so, we may consult a dictionary, and in the case of polysemous words, choose an appropriate meaning based on the context. Acquiring novel word senses via dictionary definitions is known to be more effective than contextual guessing BIBREF3 , BIBREF4 . However, very often, hand-crafted dictionaries do not contain definitions for rare or novel phrases/words, and we eventually give up on understanding them completely, leaving us with only a shallow reading of the text.", "id": 1973, "question": "Do they use pretrained word embeddings?", "title": "Learning to Describe Phrases with Local and Global Contexts" }, { "answers": [ "" ], "context": "Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. 
The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) INLINEFORM0 vec(“Paris”).", "id": 1974, "question": "Do they use skipgram version of word2vec?", "title": "Network-Efficient Distributed Word2vec Training System for Large Vocabularies" }, { "answers": [ "" ], "context": "Sponsored search is a popular advertising model BIBREF14 used by web search engines, such as Google, Microsoft, and Yahoo, in which advertisers sponsor the top web search results in order to redirect user's attention from organic search results to ads that are highly relevant to the entered query.", "id": 1975, "question": "What domains are considered that have such large vocabularies?", "title": "Network-Efficient Distributed Word2vec Training System for Large Vocabularies" }, { "answers": [ "" ], "context": "In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0 . This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7 , BIBREF0 . Given a corpus consisting of a sequence of sentences INLINEFORM0 each comprising a sequence of words INLINEFORM1 , the objective is to maximize the log likelihood: DISPLAYFORM0 ", "id": 1976, "question": "Do they perform any morphological tokenization?", "title": "Network-Efficient Distributed Word2vec Training System for Large Vocabularies" }, { "answers": [ "" ], "context": "Several existing word2vec training systems are limited to running on a single machine, though with multiple parallel threads of execution operating on different segments of training data. These include the original open source implementation of word2vec BIBREF0 , as well as those of Medallia BIBREF16 , and Rehurek BIBREF17 . As mentioned in the introduction, these systems would require far larger memory configurations than available on typical commodity-scale servers.", "id": 1977, "question": "How many nodes does the cluster have?", "title": "Network-Efficient Distributed Word2vec Training System for Large Vocabularies" }, { "answers": [ "" ], "context": "The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . 
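For the skipgram-with-negative-sampling passage above (id 1976), whose displayed objective was lost in extraction, the per-pair term of the standard SGNS objective can be sketched as follows; this is the usual word2vec formulation, not necessarily the paper's exact display, and the vectors here are random placeholders.

```python
import numpy as np

def sgns_pair_loss(u, v, neg_vs):
    """Negative of the SGNS objective for one (word, context) pair:
    maximize log sigma(u.v) + sum over negatives of log sigma(-u.v_neg).
    u: input vector of the center word; v: output vector of the true
    context word; neg_vs: output vectors of randomly drawn negatives."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(sigmoid(u @ v))
    for vn in neg_vs:
        loss -= np.log(sigmoid(-(u @ vn)))
    return loss

rng = np.random.default_rng(1)
u, v = rng.normal(size=50), rng.normal(size=50)
print(sgns_pair_loss(u, v, rng.normal(size=(5, 50))))
```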
Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration.", "id": 1978, "question": "What data do they train the language models on?", "title": "Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling" }, { "answers": [ "" ], "context": "In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 .", "id": 1979, "question": "Do they report BLEU scores?", "title": "Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling" }, { "answers": [ "Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages." ], "context": "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "id": 1980, "question": "What languages do they use?", "title": "Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling" }, { "answers": [ "" ], "context": "Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been used for for tackling the problem of low-resource data condition. Some of these approaches include language adaptive training and shared layer retraining BIBREF29 . Among them, the most benefited method is the parameter sharing technique BIBREF10 . To incorporate the former approach into encoder, CTC and attention decoder model, we performed the following experiments:", "id": 1981, "question": "What architectures are explored to improve the seq2seq model?", "title": "Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling" }, { "answers": [ "" ], "context": "Our long-term goal is to build intelligent systems that can perceive their visual environment and understand the linguistic information, and further make an accurate translation inference to another language. Since image has become an important source for humans to learn and acquire knowledge (e.g. video lectures, BIBREF1 , BIBREF2 , BIBREF3 ), the visual signal might be able to disambiguate certain semantics. One way to make image content easier and faster to be understood by humans is to combine it with narrative description that can be self-explainable. This is particularly important for many natural language processing (NLP) tasks as well, such as image caption BIBREF4 and some task-specific translation–sign language translation BIBREF5 . However, BIBREF6 demonstrates that most multi-modal translation algorithms are not significantly better than an off-the-shelf text-only machine translation (MT) model for the Multi30K dataset BIBREF7 . 
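The joint use of CTC and attention described above (id 1979) is commonly realized as an interpolated loss; a minimal PyTorch sketch under that assumption follows, with the interpolation weight chosen arbitrarily rather than taken from the paper.

```python
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_log_probs, input_lens, ctc_targets,
                              target_lens, att_logits, att_targets, lam=0.3):
    """Sketch: L = lam * L_ctc + (1 - lam) * L_att (lam assumed).
    ctc_log_probs: (T, B, C) log-probabilities from the encoder branch;
    att_logits: (B, L, C) decoder outputs; att_targets padded with -1."""
    ctc = F.ctc_loss(ctc_log_probs, ctc_targets, input_lens, target_lens,
                     blank=0, zero_infinity=True)
    att = F.cross_entropy(att_logits.transpose(1, 2), att_targets,
                          ignore_index=-1)
    return lam * ctc + (1.0 - lam) * att
```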
There remains an open question about how translation models should take advantage of visual context, because from the perspective of information theory, the mutual information of two random variables $I(X;Y)$ will always be no greater than $I(X;Y,Z)$ , due to the following fact. ", "id": 1982, "question": "Why is this work different from text-only UNMT?", "title": "Unsupervised Multi-modal Neural Machine Translation" }, { "answers": [ "" ], "context": "Most languages, even with millions of speakers, have not been the focus of natural language processing research and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only little labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g. BIBREF2).", "id": 1983, "question": "What is baseline used?", "title": "Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels" }, { "answers": [ "" ], "context": "A popular approach is modeling the relationship between noisy and clean labels, i.e., estimating $p(\hat{y}|y)$ where $y$ is the clean and $\hat{y}$ the noisy label. For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section SECREF3. Having its roots in statistics BIBREF4, this or similar ideas have been recently studied in NLP BIBREF2, BIBREF3, BIBREF5, image classification BIBREF6, BIBREF7, BIBREF8 and general machine learning settings BIBREF9, BIBREF10, BIBREF11. None of these methods, however, take into account the features that are used to represent the instances during classification. In BIBREF12 only the noise type depends on $x$ but not the actual noise model. BIBREF13 and BIBREF14 use the learned feature representation $h$ to model $p(\hat{y}|y,h(x))$ for image classification and relation extraction respectively. In the work of BIBREF15, $p(y|\hat{y},h(x))$ is estimated to clean the labels for an image classification task. The survey by BIBREF16 gives a detailed overview of other techniques for learning in the presence of noisy labels.", "id": 1984, "question": "Did they evaluate against baseline?", "title": "Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels" }, { "answers": [ "They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise" ], "context": "We assume a low-resource setting with a small set of gold standard annotated data $C$ consisting of instances with features $x$ and corresponding clean labels $y$. Additionally, a large set of noisy instances $(x,\hat{y}) \in N$ is available. This can be obtained e.g. from weak or distant supervision. 
In a multi-class classification setting, we can learn the probability of a label $y$ having a specific class given the feature $x$ as", "id": 1985, "question": "How do they evaluate their approach?", "title": "Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels" }, { "answers": [ "It contains 106,350 documents" ], "context": "While document classification models should be objective and independent from human biases in documents, research has shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware document classifiers is to build models that are non-discriminatory towards people no matter what their demographic attributes, such as gender and ethnicity, are. Existing research BIBREF0, BIBREF3, BIBREF4, BIBREF5, BIBREF1 in evaluating the fairness of document classifiers focuses on group fairness BIBREF6, which requires that every demographic group has an equal probability of being assigned to the positive predicted document category.", "id": 1986, "question": "How large is the corpus?", "title": "Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition" }, { "answers": [ "" ], "context": "We assemble the annotated datasets for hate speech classification. To narrow down the data sources, we limit our dataset sources to a single online social media site, Twitter. We have requested 16 published Twitter hate speech datasets, and finally obtained 7 of them in five languages. By using the Twitter streaming API, we collected the tweets annotated by hate speech labels and their corresponding user profiles in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17. We binarize all tweets' labels (indicating whether a tweet has indications of hate speech), allowing us to merge the different label sets and reduce the data sparsity.", "id": 1987, "question": "Which document classifiers do they experiment with?", "title": "Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition" }, { "answers": [ "over 104k documents" ], "context": "We consider four user factors: age, race, gender and geographic location. For location, we infer two granularities, country and US region, but only experiment with the country attribute. While the demographic attributes can be inferred through tweets BIBREF22, BIBREF8, we intentionally exclude the contents of the tweets when inferring these user attributes, in order to make the evaluation of fairness more reliable and independent. If users were grouped based on attributes inferred from their text, then any differences in text classification across those groups could be related to the same text. Instead, we infer attributes from public user profile information (i.e., description, name and photo).", "id": 1988, "question": "How large is the dataset?", "title": "Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition" }, { "answers": [ "" ], "context": "Conventional spoken dialogue systems (SDS) require a substantial amount of hand-crafted rules to achieve good interaction with users. The large amount of required engineering limits the scalability of these systems to settings with new or multiple domains. 
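A minimal sketch of the noisy-label idea from the low-resource setting quoted at the start of this span (id 1985, continuing id 1984): a base classifier p(y|x) composed with a feature-dependent confusion matrix p(y_hat|y, h(x)). The architecture details below are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NoisyChannelModel(nn.Module):
    """Base classifier plus a feature-dependent noise/confusion matrix."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.base = nn.Linear(feat_dim, n_classes)        # clean p(y|x)
        # one confusion-matrix row per clean class, predicted from features
        self.noise = nn.Linear(feat_dim, n_classes * n_classes)
        self.n = n_classes

    def forward(self, h):
        p_clean = torch.softmax(self.base(h), dim=-1)     # (B, n)
        conf = torch.softmax(
            self.noise(h).view(-1, self.n, self.n), dim=-1)  # (B, n, n)
        # p(y_hat|x) = sum_y p(y_hat|y, h(x)) * p(y|x)
        p_noisy = torch.bmm(p_clean.unsqueeze(1), conf).squeeze(1)
        return p_clean, p_noisy  # train p_noisy on N, p_clean on C
```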
Recently, statistical approaches have been studied that allow natural, efficient and more diverse interaction with users without depending on pre-defined rules BIBREF0 , BIBREF1 , BIBREF2 .", "id": 1989, "question": "What evaluation metrics are used to measure diversity?", "title": "Variational Cross-domain Natural Language Generation for Spoken Dialogue Systems" }, { "answers": [ "" ], "context": "The VAE is a generative latent variable model. It uses a neural network (NN) to generate $\hat{x}$ from a latent variable $z$ , which is sampled from the prior $p_{\theta }(z)$ . The VAE is trained such that $\hat{x}$ is a sample of the distribution $p_{D}(x)$ from which the training data was collected. Generative latent variable models have the form $p_{\theta }(x)=\int _{z}p_{\theta }(x|z)p_{\theta }(z) dz$ . In a VAE an NN, called the decoder, models $p_{\theta }(x|z)$ and would ideally be trained to maximize the expectation of the above integral $E\left[p_{\theta }(x)\right]$ . Since this is intractable, the VAE uses another NN, called the encoder, to model $q_{\phi }(z|x)$ which should approximate the posterior $p_{\theta }(z|x)$ . The NNs in the VAE are trained to maximise the variational lower bound (VLB) to $\log p_{\theta }(x)$ , which is given by: ", "id": 1990, "question": "How is some information lost in the RNN-based generation models?", "title": "Variational Cross-domain Natural Language Generation for Spoken Dialogue Systems" }, { "answers": [ "" ], "context": "While social media sites provide users with a revolutionary communication medium by bringing communication efficiency to a new level, they can be easily misused for widely spreading misinformation and fake news. Fake news and misinformation have been a long-standing issue for various purposes such as political propaganda BIBREF0 and financial propaganda BIBREF1. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles to maintain their reputation. However, social media provided a new way to spread news, which led to broader information sources and an expanded audience (i.e., anyone can be a medium and create news). In particular, users share news articles with their own opinions, or read articles shared by their friends, whatever the source of the news is, mostly with blind trust BIBREF2 or in line with their own ideologies BIBREF3, BIBREF4. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may lead to a catastrophic impact on both individuals and society. 
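The VAE passage above (id 1990) is cut off right where the variational lower bound would be displayed; the standard VLB it refers to, with a diagonal Gaussian encoder and a standard normal prior (the usual closed-form KL), can be sketched as:

```python
import torch

def vlb(recon_log_lik, mu, logvar):
    """VLB = E_q[log p(x|z)] - KL(q(z|x) || p(z)); maximize this
    (equivalently, minimize its negative). mu, logvar parametrize
    the diagonal Gaussian q(z|x); the prior is N(0, I)."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    return recon_log_lik - kl
```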
Besides misleading users with false information BIBREF4, widely propagated fake news could even cause a trust crisis of the entire news ecosystem BIBREF5, further affecting both cyberspace and physical space.", "id": 1991, "question": "What is the model accuracy?", "title": "Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation" }, { "answers": [ "" ], "context": "In this section, we briefly review related works and position our work within the following areas: (1) fake news and misinformation; (2) advancements in recommender systems; and (3) graph convolutional networks.", "id": 1992, "question": "How do the authors define fake news?", "title": "Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation" }, { "answers": [ "" ], "context": "Fake news has attracted considerable attention since it is related to our daily life and has become a serious problem in multiple areas such as politics BIBREF0 and finance BIBREF1. Social media sites have become one of the most popular mediums for propagating fake news and misinformation. The dominant line of work on this topic is fake news detection BIBREF15, which is mostly formulated as a binary classification problem. Researchers began to incorporate social context and other features to identify fake news at an early stage and prevent its diffusion on social networks BIBREF5, BIBREF7. Some other researchers focus on investigating the propagation patterns of fake news in social networks BIBREF16, BIBREF17. BIBREF18 also studied fake news intervention. Unlike most previous works, we follow the direction of BIBREF12 and propose to build a personalized recommender system that promotes the circulation of fact-checking articles to debunk fake news.", "id": 1993, "question": "What dataset is used?", "title": "Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation" }, { "answers": [ "" ], "context": "Language mixing has been a common phenomenon in multilingual communities. It is motivated by social factors as a way of communicating in a multicultural society. From a sociolinguistic perspective, individuals code-switch in order to construct an optimal interaction by accomplishing the conceptual, relational-interpersonal, and discourse-presentational meaning of conversation BIBREF0 . In practice, code-switching varies according to the traditions, beliefs, and normative values of the respective communities. A number of studies BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 found that code-switching is not produced indiscriminately, but follows syntactic constraints. Many linguists formulated various constraints to define a general rule for code-switching BIBREF1 , BIBREF3 , BIBREF4 . However, these constraints are not enough to generalize to real code-switching, and they have not been tested in large-scale corpora for many language pairs.", "id": 1994, "question": "Did they use other evaluation metrics?", "title": "Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling" }, { "answers": [ "Perplexity score 142.84 on dev and 138.91 on test" ], "context": "The synthetic code-switching generation approach was introduced by adapting the equivalence constraint on monolingual sentence pairs during the decoding step of an automatic speech recognition (ASR) model BIBREF5 .
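A rough toy of the synthetic code-switching idea just mentioned: given a word-aligned monolingual sentence pair, switch languages only at points where the (assumed monotonic) alignment keeps both word orders intact, loosely in the spirit of the equivalence constraint. The sentences, alignment format and switch points are all invented for illustration:

```python
def synth_code_switch(src, tgt, align, switch_points):
    """Toy generator: walk the source sentence and flip languages at each
    switch point, emitting the aligned target word while switched.
    `align` maps source positions to target positions and is assumed
    monotonic, which is what keeps the switch order-preserving."""
    out, use_src = [], True
    for i, word in enumerate(src):
        if i in switch_points:
            use_src = not use_src
        out.append(word if use_src else tgt[align[i]])
    return " ".join(out)

en = ["this", "house", "is", "very", "big"]
es = ["esta", "casa", "es", "muy", "grande"]
align = {i: i for i in range(len(en))}      # trivially monotonic 1:1 alignment
print(synth_code_switch(en, es, align, switch_points={3}))
# -> "this house is muy grande"
```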
BIBREF10 explored the Functional Head Constraint, which was found to be more restrictive than the Equivalence Constraint but complex to implement, using a lattice parser with a weighted finite-state transducer. BIBREF11 extended the RNN by adding POS information to the input layer and factorizing the output layer with a language identifier. Then, Factorized RNN networks were combined with an n-gram backoff model using linear interpolation BIBREF12 . BIBREF13 added syntactic and semantic features to the Factorized RNN networks. BIBREF14 adopted an effective curriculum learning scheme by training a network with monolingual corpora of both languages, and subsequently training on code-switched data. A further investigation of the Equivalence Constraint and Curriculum Learning showed an improvement in language modeling BIBREF6 . A multi-task learning approach was introduced to train the syntax representation of languages by constraining the language generator BIBREF9 .", "id": 1995, "question": "What was their perplexity score?", "title": "Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling" }, { "answers": [ "" ], "context": "We use a sequence to sequence (Seq2Seq) model in combination with pointer and copy networks BIBREF7 to align and choose words from the monolingual sentences and generate a code-switching sentence. The model's input is the concatenation of the two monolingual sentences, denoted as INLINEFORM0 , and the output is a code-switched sentence, denoted as INLINEFORM1 . The main assumption is that almost all the tokens present in the code-switching sentence are also present in the source monolingual sentences. Our model leverages this property by copying input tokens instead of generating vocabulary words. This approach has two major advantages: (1) the learning complexity decreases since it relies on copying instead of generating; (2) generalization improves, since the copy mechanism can produce words from the input that are not present in the vocabulary.", "id": 1996, "question": "What languages are explored in this paper?", "title": "Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling" }, { "answers": [ "Parallel monolingual corpus in English and Mandarin" ], "context": "Instead of generating words from a large vocabulary space using a Seq2Seq model with attention BIBREF17 , the pointer-generator network BIBREF7 is proposed to copy words from the input to the output using an attention mechanism and generate the output sequence using decoders. The network is depicted in Figure FIGREF1 . For each decoder step, a generation probability $p_{gen} \in [0,1]$ is calculated, which weights the probability of generating words from the vocabulary against copying words from the source text. $p_{gen}$ is a soft gating probability that decides whether to generate the next token from the decoder or to copy the word from the input instead. The attention distribution $a^t$ is a standard attention with general scoring BIBREF17 . It considers all encoder hidden states to derive the context vector. The vocabulary distribution $P_{vocab}$ is calculated by concatenating the decoder state $s_t$ and the context vector $c_t$ :
$P_{vocab} = \mathrm{softmax}(V^{\prime }(V[s_t; c_t] + b) + b^{\prime })$ ", "id": 1997, "question": "What parallel corpus did they use?", "title": "Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling" }, { "answers": [ "" ], "context": "Automatically generating captions for images, namely image captioning BIBREF0, BIBREF1, has emerged as a prominent research problem at the intersection of computer vision (CV) and natural language processing (NLP). This task is challenging as it requires first recognizing the objects in the image and the relationships between them, and finally properly organizing and describing them in natural language.", "id": 1998, "question": "What datasets are used for experiments on three other tasks?", "title": "Normalized and Geometry-Aware Self-Attention Network for Image Captioning" }, { "answers": [ "in open-ended task esp. for counting-type questions " ], "context": "Visual Question Answering (VQA) is a challenging and young research field, which can help machines achieve one of the ultimate goals in computer vision, holistic scene understanding BIBREF1 . VQA is a computer vision task: a system is given an arbitrary text-based question about an image, and it should output a text-based answer to the given question about the image. The given question may contain many sub-problems in computer vision, e.g.,", "id": 1999, "question": "In which setting they achieve the state of the art?", "title": "VQABQ: Visual Question Answering by Basic Questions" }, { "answers": [ "" ], "context": "The following two important reasons motivate us to do Visual Question Answering by Basic Questions (VQABQ). First, most recent VQA works emphasize the image part, the visual features, but put less effort into the question part, the text features. However, both image and question features are important for VQA. If we only focus on one of them, we probably cannot achieve good VQA performance in the near future. Therefore, we should put effort into both of them at the same time. In BIBREF7 , they proposed a novel co-attention mechanism that jointly performs image-guided question attention and question-guided image attention for VQA. BIBREF7 also proposed a hierarchical architecture to represent the question, and construct image-question co-attention maps at the word level, phrase level and question level. Then, these co-attended features are combined with word level, phrase level and question level recursively for predicting the final answer of the query question based on the input image. BIBREF8 is also a recent work focusing on the text-based question part, the text feature. In BIBREF8 , they presented a reasoning network to update the question representation iteratively after the question interacts with image content each time. Both BIBREF7 and BIBREF8 yield better performance than previous works by putting more effort into the question part.", "id": 2000, "question": "What accuracy do they approach with their proposed method?", "title": "VQABQ: Visual Question Answering by Basic Questions" }, { "answers": [ "LASSO optimization problem" ], "context": "Recently, many papers BIBREF0 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 have proposed methods to solve the VQA issue. Our method involves different areas of machine learning, natural language processing (NLP) and computer vision.
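The pointer-generator distribution defined earlier in this block combines generation and copying as $P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) \sum_{i: w_i = w} a_i^t$; below is a sketch of that mixture with dummy tensors (dimensions and values are placeholders):

```python
import torch

def final_distribution(p_gen, p_vocab, attn, src_ids, vocab_size):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * copy(w), where copy(w)
    accumulates the attention mass on source positions holding word w."""
    copy = torch.zeros(vocab_size)
    copy.scatter_add_(0, src_ids, attn)                      # attention per word id
    return p_gen * p_vocab + (1.0 - p_gen) * copy

vocab_size = 8
p_vocab = torch.softmax(torch.randn(vocab_size), dim=0)     # decoder vocab distribution
attn = torch.softmax(torch.randn(4), dim=0)                 # attention over 4 source tokens
src_ids = torch.tensor([2, 5, 5, 7])                        # source token ids
p_final = final_distribution(0.7, p_vocab, attn, src_ids, vocab_size)
assert torch.isclose(p_final.sum(), torch.tensor(1.0))      # still a distribution
```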
In the following, we discuss recent works related to our approach for solving the VQA problem.", "id": 2001, "question": "What they formulate the question generation as?", "title": "VQABQ: Visual Question Answering by Basic Questions" }, { "answers": [ "" ], "context": "We propose a new dataset, called the Basic Question Dataset (BQD), generated by our basic question generation algorithm. BQD is the first basic question dataset. The dataset format of BQD is $\lbrace Image,~MQ,~3~(BQ + corresponding~similarity~score)\rbrace $ . All of our images are from the testing images of the MS COCO dataset BIBREF30 ; the MQ, main questions, are from the testing questions of the VQA open-ended dataset BIBREF0 ; the BQ, basic questions, are from the training and validation questions of the VQA open-ended dataset BIBREF0 ; and the corresponding similarity score of each BQ is generated by our basic question generation method, referring to Section 5. Moreover, we also take the multiple-choice questions in the VQA dataset BIBREF0 and do the same as above. Note that we remove the repeated questions in the VQA dataset, so the total number of questions is slightly smaller than in the VQA dataset BIBREF0 . In BQD, we have 81434 images, 244302 MQ and 732906 (BQ + corresponding similarity score). At the same time, we also exploit BQD to do VQA and achieve competitive accuracy compared to the state-of-the-art.", "id": 2002, "question": "What two main modules their approach consists of?", "title": "VQABQ: Visual Question Answering by Basic Questions" }, { "answers": [ "" ], "context": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Semantic role labeling (SRL) is a shallow semantic parsing task, dedicated to identifying the semantic arguments of a predicate and labeling them with their semantic roles. SRL is considered one of the core tasks in natural language processing (NLP), and has been successfully applied to various downstream tasks, such as information extraction BIBREF0 , question answering BIBREF1 , BIBREF2 , machine translation BIBREF3 , BIBREF4 .", "id": 2003, "question": "Are there syntax-agnostic SRL models before?", "title": "A Full End-to-End Semantic Role Labeler, Syntax-agnostic Over Syntax-aware?" }, { "answers": [ "" ], "context": "SRL includes two subtasks: predicate identification/disambiguation and argument identification/labeling. Since the CoNLL-2009 dataset provides the gold predicates, most previous neural SRL systems use a default model to perform predicate disambiguation and focus on argument identification/labeling. Although nearly all SRL work has adopted a pipeline model with two or more components, Zhao2008Parsing and zhao-jair-2013 presented an end-to-end solution for the entire SRL task with a word pair classifier. Following the same formalization, we propose the first neural SRL system that uniformly handles the tasks of predicate disambiguation and argument identification/labeling.", "id": 2004, "question": "What is the biaffine scorer?", "title": "A Full End-to-End Semantic Role Labeler, Syntax-agnostic Over Syntax-aware?" }, { "answers": [ "" ], "context": "Social media are sometimes used to disseminate hateful messages. In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis.
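One of the answer fields above notes that basic-question generation is cast as a LASSO optimization problem. Here is a generic scikit-learn sketch of that idea, expressing a main-question vector as a sparse combination of candidate basic-question vectors; the encodings are random stand-ins, not the paper's actual features:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
BQ = rng.normal(size=(200, 64))        # 200 candidate basic-question encodings
mq = BQ[[3, 42, 137]].sum(axis=0)      # a main question composed of three of them

# min_w ||BQ^T w - mq||^2 + alpha * ||w||_1 : nonzero weights point to the
# basic questions most relevant to the main question.
lasso = Lasso(alpha=0.05, max_iter=10000)
lasso.fit(BQ.T, mq)                    # each feature column is one basic question
top3 = np.argsort(-np.abs(lasso.coef_))[:3]
print(sorted(top3))                    # ideally recovers [3, 42, 137]
```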
Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported BIBREF0 .", "id": 2005, "question": "What languages are were included in the dataset of hateful content?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "" ], "context": "For the purpose of building a classifier, warner2012 define hate speech as “abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation”. More recent approaches rely on lists of guidelines such as a tweet being hate speech if it “uses a sexist or racial slur” BIBREF2 . These approaches are similar in that they leave plenty of room for personal interpretation, since there may be differences in what is considered offensive. For instance, while the utterance “the refugees will live off our money” is clearly generalising and maybe unfair, it is unclear if this is already hate speech. More precise definitions from law are specific to certain jurisdictions and therefore do not capture all forms of offensive, hateful speech, see e.g. matsuda1993. In practice, social media services are using their own definitions which have been subject to adjustments over the years BIBREF3 . As of June 2016, Twitter bans hateful conduct.", "id": 2006, "question": "How was reliability measured?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "" ], "context": "As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus. We used Twitter as a source as it offers recent comments on current events. In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links. This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them.", "id": 2007, "question": "How did the authors demonstrate that showing a hate speech definition caused annotators to partially align their own opinion with the definition?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "" ], "context": "In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. They were completed by 56 participants in total (see Table TABREF7 ). The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content. We used the Twitter definition of hateful conduct in the first survey. This definition was presented at the beginning, and again above every tweet. The second survey did not contain any definition. Participants were randomly assigned one of the two surveys.", "id": 2008, "question": "What definition was one of the groups was shown?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "Personal thought of the annotator." ], "context": "Since the surveys were completed by 56 participants, they resulted in 1120 annotations. 
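To make the reliability analysis above concrete, here is a small sketch computing raw agreement and chance-corrected agreement (Cohen's kappa) between two annotators; the labels are fabricated, and the paper's own reliability measures may differ:

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated binary hate-speech judgments from two annotators on 12 tweets.
annotator_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0]

raw = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
kappa = cohen_kappa_score(annotator_a, annotator_b)  # corrects for chance agreement
print(f"raw agreement = {raw:.2f}, Cohen's kappa = {kappa:.2f}")
```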
Table TABREF7 shows some summary statistics.", "id": 2009, "question": "Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "" ], "context": "This paper describes the creation of our hate speech corpus and offers first insights into the low agreement among users when it comes to identifying hateful messages. Our results imply that hate speech is a vague concept that requires significantly better definitions and guidelines in order to be annotated reliably. Based on the present findings, we are planning to develop a new coding scheme which includes clear-cut criteria that let people distinguish hate speech from other content.", "id": 2010, "question": "How were potentially hateful messages identified?", "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis" }, { "answers": [ "Word embeddings trained on GoogleNews and Word embeddings trained on Reddit dataset" ], "context": "Word embeddings are distributed representations of texts which capture similarities between words. Besides improving a wide variety of NLP tasks, the power of word embeddings is often also tested intrinsically. Together with the idea of training word embeddings, BIBREF0 introduced the idea of testing the soundness of embedding spaces via the analogy task. Proportional analogies are equations of the form $A:B::C:D$ , or simply A is to B as C is to D. Given the terms $A$ , $B$ , and $C$ , the model must return the word that correctly stands for $D$ in the given analogy. The most classic example is man is to king as woman is to X, where the model is expected to return queen, by subtracting “manness" from the concept of king to obtain some general royalty, and then re-adding some “womanness" to obtain the concept of queen ( $king - man + woman \approx queen$ ).", "id": 2011, "question": "Which embeddings do they detect biases in?", "title": "Fair is Better than Sensational:Man is to Doctor as Woman is to Doctor" }, { "answers": [ "" ], "context": "Named Entity Recognition (ner) is considered a necessary first step in the linguistic processing of any new domain, as it facilitates the development of applications showing co-occurrences of domain entities, cause-effect relations among them, and, eventually, it opens the (still to be reached) possibility of understanding full text content. On the other hand, biomedical literature and, more specifically, clinical texts, show a number of features as regards ner that pose a challenge to NLP researchers BIBREF0: (1) the clinical discourse is characterized by being conceptually very dense; (2) the number of different classes for nes is greater than the traditional classes used with, for instance, newswire text; (3) they show a high formal variability for nes (actually, it is rare to find entities in their “canonical form”); and, (4) this text type contains a great number of ortho-typographic errors, due mainly to time constraints when drafted.", "id": 2012, "question": "What does their system consist of?", "title": "Annotating and normalizing biomedical NEs with limited knowledge" }, { "answers": [ "Entity identification with offset mapping and concept indexing" ], "context": "As is common in resource-based system development, special effort has been devoted to the creation of the set of resources used by the system.
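A minimal numpy sketch of the vector-offset analogy test described above; the embedding table is made up, and the `exclude_inputs` flag mirrors the common practice of banning the three input words from the answer set, which is exactly the practice this paper questions:

```python
import numpy as np

# Tiny made-up embedding table; real experiments load trained vectors.
emb = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "woman": np.array([1.0, 1.0, 0.1]),
    "king":  np.array([1.0, 0.0, 0.9]),
    "queen": np.array([1.0, 1.0, 0.9]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, exclude_inputs=True):
    """Return the word closest (by cosine) to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    banned = {a, b, c} if exclude_inputs else set()
    candidates = [w for w in emb if w not in banned]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("man", "king", "woman"))   # -> "queen"
```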
These are mainly two: a flat subset of the snomed ct medical ontology, and the library and part of the contextual regexp grammars developed by BIBREF6 FSL:2018 for a previous competition on abbreviation resolution in clinical texts written in Spanish. The process of creation and/or adaptation of these resources is described in this section.", "id": 2013, "question": "What are the two PharmaCoNER subtasks?", "title": "Annotating and normalizing biomedical NEs with limited knowledge" }, { "answers": [ "" ], "context": "intro", "id": 2014, "question": "What neural language models are explored?", "title": "An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models" }, { "answers": [ "They randomly sample sentences from Wikipedia that contain an object RC and add them to training data" ], "context": "The most common evaluation metric of an LM is perplexity. Although neural LMs achieve impressive perplexity BIBREF9, it is an average score across all tokens and does not inform us about the models' behaviors on linguistically challenging structures, which are rare in the corpus. This is the main motivation for separately evaluating the models' syntactic robustness with a different task.", "id": 2015, "question": "How do they perform data augmentation?", "title": "An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models" }, { "answers": [ "" ], "context": "As introduced in Section intro, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences at a critical position affecting the grammaticality. For example, (UNKREF3) and (UNKREF5) only differ at a final verb form, and to assign a higher probability to (UNKREF3), models need to be aware of the agreement dependency between author and laughs over an RC.", "id": 2016, "question": "What proportion of negative-examples do they use?", "title": "An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models" }, { "answers": [ "" ], "context": "Dialects are language varieties defined across space. These varieties can differ at distinct linguistic levels (phonetic, morphosyntactic, lexical), which determine a particular regional speech BIBREF0 . The extension and boundaries (always diffuse) of a dialect area are obtained from the variation of one or many features such as, e.g., the different word alternations for a given concept. Typically, the dialect forms plotted on a map appear as a geographical continuum that gradually connects places with slightly different diatopic characteristics. A dialectometric analysis aims at a computational approach to dialect distribution, providing quantitative linguistic distances between locations BIBREF1 , BIBREF2 , BIBREF3 .", "id": 2017, "question": "Do the authors mention any possible confounds in their study?", "title": "Dialectometric analysis of language variation in Twitter" }, { "answers": [ "Lexicon of the cities tend to use most forms of a particular concept" ], "context": "Our corpus consists of approximately 11 million geotagged tweets produced in Europe in the Spanish language between October 2014 and June 2016. (Although we will focus on Spain, we will not consider in this work the speech of the Canary Islands due to difficulties with the data extraction).
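The minimal-pair evaluation described above reduces to comparing sentence log-probabilities under the LM. A sketch with a causal LM from the transformers library follows; the model choice and the agreement pair (echoing the author/laughs example) are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)          # loss = mean NLL over predicted tokens
    return -out.loss.item() * (ids.shape[1] - 1)

grammatical = "The author that the guards like laughs."
ungrammatical = "The author that the guards like laugh."
# The model passes this minimal pair if it prefers the grammatical variant.
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```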
The classification of tweets is accomplished by applying the Compact Language Detector (CLD) BIBREF16 to our dataset. CLD performs accurately on benchmarks and is thus suitable for our purposes, although a different detector might be used BIBREF17 . We have empirically checked that when CLD determines the language with a probability of at least 60% the results are extremely reliable. Therefore, we only take into account those tweets for which the probability of being written in Spanish is greater than $0.6$ . Further, we remove unwanted characters, such as hashtags or at-mentions, using Twokenize BIBREF18 , a tokenizer designed for Twitter text in English, adapted to our goals.", "id": 2018, "question": "What are the characteristics of the city dialect?", "title": "Dialectometric analysis of language variation in Twitter" }, { "answers": [ "It uses particular forms of a concept rather than all of them uniformly" ], "context": "The dialectometric differences are quantified between regions defined with the aid of our cells. For this purpose we take into account two metrics, which we now briefly discuss.", "id": 2019, "question": "What are the characteristics of the rural dialect?", "title": "Dialectometric analysis of language variation in Twitter" }, { "answers": [ "" ], "context": "Text classification is an important task in Natural Language Processing with many applications, such as web search, information retrieval, ranking and document classification BIBREF0 , BIBREF1 . Recently, models based on neural networks have become increasingly popular BIBREF2 , BIBREF3 , BIBREF4 . While these models achieve very good performance in practice, they tend to be relatively slow both at train and test time, limiting their use on very large datasets.", "id": 2020, "question": "What are their baseline methods?", "title": "Bag of Tricks for Efficient Text Classification" }, { "answers": [ "" ], "context": "A simple and efficient baseline for sentence classification is to represent sentences as bag of words (BoW) and train a linear classifier, e.g., a logistic regression or an SVM BIBREF5 , BIBREF7 . However, linear classifiers do not share parameters among features and classes. This possibly limits their generalization in the context of large output spaces where some classes have very few examples. Common solutions to this problem are to factorize the linear classifier into low rank matrices BIBREF12 , BIBREF10 or to use multilayer neural networks BIBREF13 , BIBREF14 .", "id": 2021, "question": "Which datasets are used for evaluation?", "title": "Bag of Tricks for Efficient Text Classification" }, { "answers": [ "" ], "context": "Current neural networks for language understanding rely heavily on unsupervised pretraining tasks like language modeling. However, it is still an open question what degree of knowledge state-of-the-art language models (LMs) acquire about different linguistic phenomena. Many recent studies BIBREF0, BIBREF1, BIBREF2 have advanced our understanding in this area by evaluating LMs' preferences between minimal pairs of sentences, as in Example SECREF1.
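The BoW-plus-linear-classifier baseline mentioned above, sketched with scikit-learn on a fabricated toy corpus (real evaluations use the standard benchmarks):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great match and a brilliant goal",
         "stocks fell sharply today",
         "the team won the cup",
         "markets rallied after the report"]
labels = ["sports", "finance", "sports", "finance"]

# Bag of words (with bigrams) feeding a logistic regression.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the goal decided the match"]))   # likely ['sports']
```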
However, these studies have used different analysis metrics and focused on a small set of linguistic paradigms, which limits any big-picture comparison between them.", "id": 2022, "question": "Which of the model yields the best performance?", "title": "BLiMP: A Benchmark of Linguistic Minimal Pairs for English" }, { "answers": [ "Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)" ], "context": "The objective of a language model is to give a probability distribution over the possible strings of a language. Language models can be built with neural network models or non-neural network models. Due to their unsupervised nature, they can be trained without external annotations. More recently, neural network based language modeling has been shown to be a strong pretraining task for natural language understanding tasks BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some recent models, such as BERT BIBREF9, use closely related tasks such as masked language modeling.", "id": 2023, "question": "What is the performance of the models on the tasks?", "title": "BLiMP: A Benchmark of Linguistic Minimal Pairs for English" }, { "answers": [ "" ], "context": "A large number of recent studies have used acceptability judgments to reveal what neural networks know about grammar. One branch of this literature has focused on using minimal pairs to infer whether LMs learn about specific linguistic phenomena. Table TABREF4 gives a summary of work that has studied linguistic phenomena in this way. For instance, linzen2016assessing look closely at minimal pairs contrasting subject-verb agreement. marvin2018targeted look at a larger set of phenomena, including negative polarity item licensing and reflexive licensing. However, a relatively small set of phenomena is covered by these studies, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, distributional restrictions on quantifiers, and countless others. This is likely due to the labor-intensive nature of collecting examples that exhibit informative grammatical phenomena and their acceptability judgments.", "id": 2024, "question": "How is the data automatically generated?", "title": "BLiMP: A Benchmark of Linguistic Minimal Pairs for English" }, { "answers": [ "" ], "context": "Processing of free-text clinical records plays an important role in computer-supported medicine BIBREF0 , BIBREF1 . A detailed description of symptoms, examination and an interview is often stored in an unstructured way as free text, hard to process but rich in important information. Some attempts at processing medical notes exist for English, while for other languages the problem is still challenging BIBREF2 .", "id": 2025, "question": "Do they fine-tune the used word embeddings on their medical texts?", "title": "Clustering of Medical Free-Text Records Based on Word Embeddings" }, { "answers": [ "" ], "context": "The clustering method is developed and validated on a new dataset of free-text clinical records of about 100,000 visits.", "id": 2026, "question": "Which word embeddings do they use to represent medical visits?", "title": "Clustering of Medical Free-Text Records Based on Word Embeddings" }, { "answers": [ "" ], "context": "In this section we describe our algorithm for clustering visits.
The clustering is derived in four steps.", "id": 2027, "question": "Do they explore similarity of texts across different doctors?", "title": "Clustering of Medical Free-Text Records Based on Word Embeddings" }, { "answers": [ "" ], "context": "As there are no generally available terminological resources for Polish medical texts, the first step of data processing was aimed at automatic identification of the most frequently used words and phrases. The doctors' notes are usually rather short and concise, so we assumed that all frequently appearing phrases are domain related and important for text understanding. The notes are built mostly from noun phrases, so it was decided to extract simple noun phrases which consist of a noun optionally modified by a sequence of adjectives (in Polish they can occur both before and after a noun) or by another noun in the genitive. We only extracted sequences that can be interpreted as phrases in Polish, i.e. nouns and adjectives have to agree in case, number and gender.", "id": 2028, "question": "Which clustering technique do they use on patients' visits texts?", "title": "Clustering of Medical Free-Text Records Based on Word Embeddings" }, { "answers": [ "" ], "context": "", "id": 2029, "question": "What is proof that proposed functional form approximates well generalization error in practice?", "title": "A Constructive Prediction of the Generalization Error Across Scales" }, { "answers": [ "" ], "context": "", "id": 2030, "question": "How is proposed functional form constructed for some model?", "title": "A Constructive Prediction of the Generalization Error Across Scales" }, { "answers": [ "bag of words, tf-idf, bag-of-means" ], "context": "Recently, deep learning has been particularly successful in speech and image processing as an automatic feature extractor BIBREF1 , BIBREF2 , BIBREF3 ; however, deep learning's application to text as an automatic feature extractor has not always been successful BIBREF0 even compared to simple linear models with BoW or TF-IDF feature representation. In many experiments, when the text is polished like news articles or when the dataset is small, BoW or TF-IDF is still the state-of-the-art representation compared to sent2vec or paragraph2vec BIBREF4 representations using deep learning models like RNNs (Recurrent Neural Networks) or CNNs (Convolutional Neural Networks) BIBREF0 . It is only when the dataset becomes large or when the words are noisy and non-standardized with misspellings, text emoticons and short-forms that deep learning models which learn the sentence-level semantics start to outperform the BoW representation, because under such circumstances the BoW representation can become extremely sparse and the vocabulary size can become huge. It becomes clear that for large, complex data, a large deep learning model with a large capacity can extract a better sentence-level representation than a BoW sentence representation. However, for small and standardized news-like datasets, a direct word-counting TF-IDF sentence representation is superior. The question, then, is: can we design a deep learning model that performs well on both simple and complex, small and large datasets? And when the dataset is small and standardized, can the deep learning model perform comparably to BoW?
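A generic sketch of the embed-and-cluster pipeline suggested by the clustering records above: represent each visit as the mean of its word embeddings, then run k-means. The embeddings and visit texts are invented, and the paper's actual four-step procedure is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["cough", "fever", "rash", "itching", "headache", "nausea"]
emb = {w: rng.normal(size=16) for w in vocab}    # stand-in word embeddings

visits = [["cough", "fever"], ["rash", "itching"], ["fever", "headache"]]

def visit_vector(tokens):
    """Represent a visit as the mean of its known word embeddings."""
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

X = np.stack([visit_vector(v) for v in visits])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```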
With that problem in mind, we designed TDSM (Top-Down-Semantic-Model), which learns a sentence representation that carries the information of both the BoW-like representation and the RNN-style sentence-level semantics, and which performs well for both simple and complex, small and large datasets.", "id": 2031, "question": "What other non-neural baselines do the authors compare to? ", "title": "Character-Based Text Classification using Top Down Semantic Model for Sentence Representation" }, { "answers": [ "" ], "context": "Self-Critical Sequence Training (SCST), upon its release, has been a popular way to train sequence generation models. While originally proposed for the image captioning task, SCST not only has become the new standard for training captioning models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, but also has been applied to many other tasks, like video captioning BIBREF10, BIBREF11, BIBREF12, reading comprehension BIBREF13, summarization BIBREF14, BIBREF15, BIBREF16, BIBREF17, image paragraph generation BIBREF18, and speech recognition BIBREF19.", "id": 2032, "question": "How much better is performance of proposed approach compared to greedy decoding baseline?", "title": "A Better Variant of Self-Critical Sequence Training" }, { "answers": [ "" ], "context": "MIXER BIBREF22 is the first to use the REINFORCE algorithm for sequence generation training. They use a learned function approximator to get the baseline.", "id": 2033, "question": "What environment is used for self-critical sequence training?", "title": "A Better Variant of Self-Critical Sequence Training" }, { "answers": [ "" ], "context": "The goal of SCST, for example in captioning, is to maximize the expected CIDEr score of generated captions.", "id": 2034, "question": "What baseline function is used in REINFORCE algorithm?", "title": "A Better Variant of Self-Critical Sequence Training" }, { "answers": [ "" ], "context": "The success of SCST comes from better gradient variance reduction introduced by the greedy decoding baseline. In our variant, we use the baseline proposed in BIBREF21 to achieve even better variance reduction.", "id": 2035, "question": "What baseline model is used for comparison?", "title": "A Better Variant of Self-Critical Sequence Training" }, { "answers": [ "" ], "context": "This paper reports on the history, progress, and lessons from the Aristo project, a six-year quest to answer grade-school and high-school science exams. Aristo has recently surpassed 90% on multiple choice questions from the 8th Grade New York Regents Science Exam (see Figure FIGREF6). We begin by offering several perspectives on why this achievement is significant for NLP and for AI more broadly.", "id": 2036, "question": "Is Aristo just some modern NLP model (ex. BERT) finetuned on data specific for this task?", "title": "From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project" }, { "answers": [ "Aristo Corpus\nRegents 4th\nRegents 8th\nRegents 12th\nARC-Easy\nARC-challenge " ], "context": "In 1950, Alan Turing proposed the now well-known Turing Test as a possible test of machine intelligence: If a system can exhibit conversational behavior that is indistinguishable from that of a human during a conversation, that system could be considered intelligent (BID1). As the field of AI has grown, the test has become less meaningful as a challenge task for several reasons. First, its setup is not well defined (e.g., who is the person giving the test?).
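A schematic of the SCST update discussed in the records above: the reward of a sampled sequence is baselined by the greedy-decoded sequence's reward, and the advantage weights the sample's log-probability. The reward values and log-probability below are mocked; a real system plugs in CIDEr and a captioning decoder:

```python
import torch

def scst_loss(sample_logprob, sample_reward, greedy_reward):
    """REINFORCE with the greedy decoding reward as baseline:
    minimize -(r(sample) - r(greedy)) * log p(sample)."""
    advantage = sample_reward - greedy_reward
    return -advantage * sample_logprob

sample_logprob = torch.tensor(-12.3, requires_grad=True)  # mocked decoder output
loss = scst_loss(sample_logprob, sample_reward=0.9, greedy_reward=0.7)
loss.backward()
print(sample_logprob.grad)  # equals -advantage: a positive advantage raises log p(sample)
```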
A computer scientist would likely know good distinguishing questions to ask, while a random member of the general public may not. What constraints are there on the interaction? What guidelines are provided to the judges? Second, recent Turing Test competitions have shown that, in certain formulations, the test itself is gameable; that is, people can be fooled by systems that simply retrieve sentences and make no claim of being intelligent (BID2;BID3). John Markoff of The New York Times wrote that the Turing Test is more a test of human gullibility than machine intelligence. Finally, the test, as originally conceived, is pass/fail rather than scored, thus providing no measure of progress toward a goal, something essential for any challenge problem.", "id": 2037, "question": "On what dataset is Aristo system trained?", "title": "From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project" }, { "answers": [ "" ], "context": "Offering a channel for customers to share opinions and give scores to products and services, review websites have become a highly influential information source that customers refer to for making purchase decisions. Popular examples include IMDB in the movie domain, Epinions in the product domain, and Yelp in the service domain. Figure FIGREF4 shows a screenshot of a restaurant review page on Yelp.com, which offers two main types of information. First, an overall rating score is given under the restaurant name; second, detailed user reviews are listed below the rating.", "id": 2038, "question": "Does they focus on any specific product/service domain?", "title": "Opinion Recommendation using Neural Memory Model" }, { "answers": [ "" ], "context": "Sentiment Analysis. Our task is related to document-level sentiment classification BIBREF1 , which is to infer the sentiment polarity of a given document. Recently, various neural network models have been used to capture the sentiment information automatically, including convolutional neural networks BIBREF9 , recursive neural networks BIBREF10 and recurrent neural networks BIBREF11 , BIBREF12 , which have been shown to achieve competitive results across different benchmarks. Different from binary classification, review rating prediction aims to predict the numeric rating of a given review. PangL05 pioneered this task by regarding it as a classification/regression problem. Most subsequent work focuses on designing effective textual features of reviews BIBREF13 , BIBREF14 , BIBREF15 . Recently, TangQLY15 proposed a neural network model to predict the rating score by using both lexical semantics and a user model.", "id": 2039, "question": "What are the baselines?", "title": "Opinion Recommendation using Neural Memory Model" }, { "answers": [ "" ], "context": "In the data-to-text generation task (D2T), the input is data encoding facts (e.g., a table, a set of tuples, or a small knowledge graph), and the output is a natural language text representing those facts. In neural D2T, the common approaches train a neural end-to-end encoder-decoder system that encodes the input data and decodes an output text. In recent work BIBREF0 we proposed to adopt ideas from “traditional” language generation approaches (i.e. BIBREF1, BIBREF2, BIBREF3) that separate the generation into a planning stage that determines the order and structure of the expressed facts, and a realization stage that maps the plan to natural language text.
We show that by breaking the task this way, one can achieve the same fluency as neural generation systems while being able to better control the form of the generated text and to improve its correctness by reducing missing facts and “hallucinations”, common in neural systems.", "id": 2040, "question": "How is fluency of generated text evaluated?", "title": "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation" }, { "answers": [ "" ], "context": "We provide a brief overview of the step-by-step system. See BIBREF0 for further details. The system works in two stages. The first stage (planning) maps the input facts (encoded as a directed, labeled graph, where nodes represent entities and edges represent relations) to text plans, while the second stage (realization) maps the text plans to natural language text.", "id": 2041, "question": "How is faithfulness of the resulting text evaluated?", "title": "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation" }, { "answers": [ "" ], "context": "The data-to-plan component in BIBREF0 exhaustively generates all possible plans, scores them using a heuristic, and chooses the highest scoring one for realization. While this is feasible with the small input graphs in the WebNLG challenge BIBREF4, it is also very computationally intensive, growing exponentially with the input size. We propose an alternative planner which works in linear time in the size of the graph and remains verifiable: generated plans are guaranteed to represent the input faithfully.", "id": 2042, "question": "How are typing hints suggested?", "title": "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation" }, { "answers": [ "" ], "context": "In BIBREF0, the sentence plan trees were linearized into strings that were then fed to a neural machine translation decoder (OpenNMT) BIBREF6 with a copy mechanism. This linearization process is lossy, in the sense that the linearized strings do not explicitly distinguish between symbols that represent entities (e.g., BARACK_OBAMA) and symbols that represent relations (e.g., works-for). While this information can be deduced from the position of the symbol within the structure, there is a benefit in making it more explicit. In particular, the decoder needs to act differently when decoding relations and entities: entities are copied, while relations need to be verbalized. By making the typing information explicit to the decoder, we make it easier for it to generalize this behavior distinction and apply it also to unseen entities and relations. We thus expect the typing information to be especially useful for the unseen part of the evaluation set.", "id": 2043, "question": "What is the effectiveness plan generation?", "title": "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation" }, { "answers": [ "" ], "context": "While the plan generation stage is guaranteed to be faithful to the input, the translation process from plans to text is based on a neural seq2seq model and may suffer from known issues with such models: hallucinating facts that do not exist in the input, repeating facts, or dropping facts.
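A toy illustration of linearizing a plan with explicit type markers, as argued for in the typing record above: entity symbols and relation symbols get distinct prefixes so the decoder can treat them differently. The plan tuples reuse the record's example symbols, but the marker strings are invented:

```python
def linearize(plan):
    """Flatten (subject, relation, object) facts into a token string with
    explicit type markers: E| for entities, R| for relations (invented tags)."""
    tokens = []
    for subj, rel, obj in plan:
        tokens += [f"E|{subj}", f"R|{rel}", f"E|{obj}"]
    return " ".join(tokens)

plan = [("BARACK_OBAMA", "works-for", "US_GOVERNMENT"),
        ("BARACK_OBAMA", "born-in", "HONOLULU")]
print(linearize(plan))
# E|BARACK_OBAMA R|works-for E|US_GOVERNMENT E|BARACK_OBAMA R|born-in E|HONOLULU
```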
While the clear mapping between plans and text helps to reduce these issues greatly, the system in BIBREF0 still produces such errors in about 2% of its outputs.", "id": 2044, "question": "How is neural planning component trained?", "title": "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation" }, { "answers": [ "" ], "context": "Word embeddings are frequently used in NLP tasks. In vector space models every word from the source corpus is represented by a dense vector in $\mathbb {R}^d$ , where the typical dimension $d$ varies from tens to hundreds. Such an embedding maps similar (in some sense) words to close vectors. These models are based on the so-called distributional hypothesis: similar words tend to occur in similar contexts BIBREF0 . Some models also use letter trigrams or additional word properties such as morphological tags.", "id": 2045, "question": "How do they evaluate interpretability in this paper?", "title": "Rotations and Interpretability of Word Embeddings: the Case of the Russian Language" }, { "answers": [ "" ], "context": "Recent NLP studies have thrived on the distributional hypothesis. More recently, there have been efforts to apply the intuition to larger semantic units, such as sentences, or documents. However, approaches based on distributional semantics are limited by the grounding problem BIBREF0 , which calls for techniques to ground certain conceptual knowledge in perceptual information.", "id": 2046, "question": "How much better performing is the proposed method over the baselines?", "title": "Improving Visually Grounded Sentence Representations with Self-Attention" }, { "answers": [ "" ], "context": "Sentence Representations. Since the inception of word embeddings BIBREF3 , extensive work has emerged for larger semantic units, such as sentences and paragraphs. These works range from deep neural models BIBREF4 to log-bilinear models BIBREF5 , BIBREF6 . A recent work proposed using supervised learning of a specific task as leverage to obtain general sentence representations BIBREF7 .", "id": 2047, "question": "What baselines are the proposed method compared against?", "title": "Improving Visually Grounded Sentence Representations with Self-Attention" }, { "answers": [ "" ], "context": "Given a data sample $(x, y, z)$ , where $x$ is the source caption, $y$ is the target caption, and $z$ is the hidden representation of the image, our goal is to predict $y$ and $z$ with $x$ , and the hidden representation in the middle serves as the general sentence representation.", "id": 2048, "question": "What dataset/corpus is this work evaluated over?", "title": "Improving Visually Grounded Sentence Representations with Self-Attention" }, { "answers": [ "12" ], "context": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ Many natural language understanding tasks such as Text Entailment and Question Answering are dependent on the interpretation of the semantic relationships between terms. The challenge in constructing robust semantic interpretation models is to provide a model which is both comprehensive (capturing a large set of semantic relations) and fine-grained.
While semantic relations (high-level binary predicates which express relationships between words) can serve as a semantic interpretation model, in many cases the relationship between words cannot be fully articulated as a single semantic relation, depending instead on a contextualization that involves one or more target words, their corresponding semantic relationships and associated logical operators (e.g. modality, functional operators).", "id": 2049, "question": "How many roles are proposed?", "title": "Categorization of Semantic Roles for Dictionary Definitions" }, { "answers": [ "- Font & Keyboard\n- Speech-to-Text\n- Text-to-Speech\n- Text Prediction\n- Spell Checker\n- Grammar Checker\n- Text Search\n- Machine Translation\n- Voice to Text Search\n- Voice to Speech Search" ], "context": "Technology pervades all aspects of society and continues to change the way people access and share information, learn and educate, as well as provide and access services. Language is the main medium through which such transformational technology can be integrated into the socioeconomic processes of a community. Natural Language Processing (NLP) and Speech systems, therefore, break down barriers and provide users and whole communities with easy access to information and services. However, the current trend in building language technology focuses on languages with very high resources in terms of data and infrastructure.", "id": 2050, "question": "What language technologies have been introduced in the past?", "title": "Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities" }, { "answers": [ "" ], "context": "Sentiment analysis BIBREF2 in review text usually involves multiple aspects. For instance, the following review talks about the location, room, and staff aspects of a hotel: “Excellent location to the Tower of London. We also walked to several other areas of interest; albeit a bit of a trek if you don't mind walking. The room was a typical hotel room in need of a refresh, however clean. The staff couldn't have been more professional, they really helped get us a taxi when our pre arranged pickup ran late.” In this review, some of the sentiment terms are “excellent”, “typical”, “clean”, and “professional”.", "id": 2051, "question": "Does the dataset contain non-English reviews?", "title": "Aspect and Opinion Term Extraction for Aspect Based Sentiment Analysis of Hotel Reviews Using Transfer Learning" }, { "answers": [ "" ], "context": "For this aspect and opinion term extraction task, we use tokenized and annotated hotel reviews on Airy Rooms provided by BIBREF1. The dataset consists of 5000 reviews in bahasa Indonesia. The dataset is divided into training and test sets of 4000 and 1000 reviews respectively. The label distribution of the tokens in the BIO scheme can be seen in Table TABREF3. In addition, we also view this task at the entity level, i.e. ASPECT, SENTIMENT, and OTHER labels.", "id": 2052, "question": "Does the paper report the performance of the method when is trained for more than 8 epochs?", "title": "Aspect and Opinion Term Extraction for Aspect Based Sentiment Analysis of Hotel Reviews Using Transfer Learning" }, { "answers": [ "" ], "context": "Recent years have seen a huge boom in the number of different social media platforms available to users. People are increasingly using these platforms to voice their opinions or let others know about their whereabouts and activities.
Each of these platforms has its own characteristics and is used for different purposes. The availability of a huge amount of data from many social media platforms has inspired researchers to study the relation between the data generated through the use of these platforms and real-world attributes.", "id": 2053, "question": "What do the correlation demonstrate? ", "title": "Community Question Answering Platforms vs. Twitter for Predicting Characteristics of Urban Neighbourhoods" }, { "answers": [ "" ], "context": "The spatial unit of analysis chosen for this work is the neighbourhood. This is identified with a unique name (e.g., Camden) and people normally use this name in QA discussions to refer to specific neighbourhoods. A list of neighbourhoods for London is extracted from the GeoNames gazetteer, a dataset containing names of geographic places, including place names. For each neighbourhood, GeoNames provides its name and a set of geographic coordinates (i.e., latitude and longitude) which roughly represents its centre. Note that geographical boundaries are not provided. GeoNames contains 589 neighbourhoods that fall within the boundaries of the Greater London metropolitan area. In the remainder of the paper, we use the terms “neighbourhood” or “area” to refer to our spatial unit of analysis.", "id": 2054, "question": "On Twitter, do the demographic attributes and answers show more correlations than on Yahoo! Answers?", "title": "Community Question Answering Platforms vs. Twitter for Predicting Characteristics of Urban Neighbourhoods" }, { "answers": [ "" ], "context": "We collect questions and answers (QAs) from Yahoo! Answers using its public API. For each neighbourhood, the query consists of the name of the neighbourhood together with the keywords London and area. This is to prevent obtaining irrelevant QAs for ambiguous entity names such as Victoria. For each neighbourhood, we then take all the QAs that are returned by the API. Each QA consists of a title and a content which is an elaboration on the title. This is followed by a number of answers. In total, we collect $12,947$ QAs across all London neighbourhoods. These QAs span the last 5 years. It is common for users to discuss characteristics of several neighbourhoods in the same QA thread. This means that the same QA can be assigned to more than one neighbourhood. Figure 1 shows the histogram of the number of QAs for each neighbourhood. As the figure shows, the majority of areas have fewer than 100 QAs, with some areas having fewer than 10. Only a few areas have over 100 QAs.", "id": 2055, "question": "How many demographic attributes they try to predict?", "title": "Community Question Answering Platforms vs. Twitter for Predicting Characteristics of Urban Neighbourhoods" }, { "answers": [ "" ], "context": "Abstractive document summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 attempts to produce a condensed representation of the most salient information of the document, aspects of which may not appear as parts of the original input text. One popular framework used in abstractive summarization is the sequence-to-sequence model introduced by BIBREF5.
The attention mechanism BIBREF6 is proposed to enhance the sequence-to-sequence model by allowing salient features to dynamically come to the forefront as needed, making up for the model's inability to memorize the long input source.", "id": 2056, "question": "What evaluation metrics do they use?", "title": "Attention Optimization for Abstractive Document Summarization" }, { "answers": [ "The reciprocal of the variance of the attention distribution" ], "context": "We adopt the Pointer-Generator Network (PGN) BIBREF11 as our baseline model, which augments the standard attention-based seq2seq model with a hybrid pointer network BIBREF14. An input document is first fed into a Bi-LSTM encoder, and then a uni-directional LSTM is used as the decoder to generate the summary word by word. At each decoding step, the attention distribution $a_t$ and the context vector $c_t$ are calculated as follows:", "id": 2057, "question": "How do they define local variance?", "title": "Attention Optimization for Abstractive Document Summarization" }, { "answers": [ "" ], "context": "Contextual word embeddings BIBREF0 , BIBREF1 , BIBREF2 have been successfully applied to various NLP tasks, including named entity recognition, document classification, and textual entailment. The multilingual version of BERT (which is trained on Wikipedia articles from 100 languages and equipped with a 110,000 shared wordpiece vocabulary) has also demonstrated the ability to perform `zero-resource' cross-lingual classification on the XNLI dataset BIBREF3 . Specifically, when multilingual BERT is finetuned for XNLI with English data alone, the model also gains the ability to handle the same task in other languages. We believe that this zero-resource transfer learning can be extended to other multilingual datasets.", "id": 2058, "question": "How do they quantify alignment between the embeddings of a document and its translation?", "title": "Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER" }, { "answers": [ "" ], "context": "Language-adversarial training BIBREF12 was proposed for generating bilingual dictionaries without parallel data. This idea was extended to zero-resource cross-lingual tasks in NER BIBREF5 , BIBREF6 and text classification BIBREF4 , where we would expect that language-adversarial techniques induce features that are language-independent.", "id": 2059, "question": "Does adversarial learning have stronger performance gains for text classification, or for NER?", "title": "Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER" }, { "answers": [ "" ], "context": "Self-training, where an initial model is used to generate labels on an unlabeled corpus for the purpose of domain or cross-lingual adaptation, was studied in the context of text classification BIBREF11 and parsing BIBREF13 , BIBREF14 . A similar idea based on expectation-maximization, where the unobserved label is treated as a latent variable, has also been applied to cross-lingual text classification in BIBREF15 .", "id": 2060, "question": "Do any of the evaluations show that adversarial learning improves performance in at least two different language families?", "title": "Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER" }, { "answers": [ "" ], "context": "Current research, theory, and policy surrounding K-12 instruction in the United States highlight the role of student-centered disciplinary discussions (i.e.
discussions related to a specific academic discipline or school subject such as physics or English Language Arts) in instructional quality and student learning opportunities BIBREF0 , BIBREF1 . Such student-centered discussions – often called “dialogic” or “inquiry-based” – are widely viewed as the most effective instructional approach for disciplinary understanding, problem-solving, and literacy BIBREF2 , BIBREF3 , BIBREF4 . In English Language Arts (ELA) classrooms, student-centered discussions about literature have a positive impact on the development of students' reasoning, writing, and reading skills BIBREF5 , BIBREF6 . However, most studies have focused on the role of teachers and their talk BIBREF7 , BIBREF2 , BIBREF8 rather than on the aspects of student talk that contribute to discussion quality.", "id": 2061, "question": "what experiments are conducted?", "title": "Annotating Student Talk in Text-based Classroom Discussions" }, { "answers": [ "" ], "context": "One discourse feature used to assess the quality of discussions is students' argument moves: their claims about the text, their sharing of textual evidence for claims, and their warranting or reasoning to support the claims BIBREF10 , BIBREF11 . Many researchers view student reasoning as of primary importance, particularly when the reasoning is elaborated and highly inferential BIBREF12 . In Natural Language Processing (NLP), most educationally-oriented argumentation research has focused on corpora of student persuasive essays BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . We instead focus on multi-party spoken discussion transcripts from classrooms. A second key difference consists in the inclusion of the warrant label in our scheme, as it is important to understand how students explicitly use reasoning to connect evidence to claims. Educational studies suggest that discussion quality is also influenced by the specificity of student talk BIBREF19 , BIBREF20 . Chisholm and Godley found that as specificity increased, the quality of students' claims and reasoning also increased. Previous NLP research has studied specificity in the context of professionally written newspaper articles BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . While the annotation instructions used in these studies work well for general purpose corpora, specificity in text-based discussions also needs to capture particular relations between discussions and texts. Furthermore, since the concept of a sentence is not clearly defined in speech, we annotate argumentative discourse units rather than sentences (see Section SECREF3 ).", "id": 2062, "question": "what opportunities are highlighted?", "title": "Annotating Student Talk in Text-based Classroom Discussions" }, { "answers": [ "Measuring three aspects: argumentation, specificity and knowledge domain." ], "context": "Our annotation scheme uses argument moves as the unit of analysis. We define an argument move as an utterance, or part of an utterance, that contains an argumentative discourse unit (ADU) BIBREF28 . Like Peldszus and Stede Peldszus:15, in this paper we use transcripts already segmented into argument moves and focus on the steps following segmentation, i.e., labeling argumentation, specificity, and knowledge domain.
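To make the unit of analysis concrete, here is a minimal sketch of how a labeled argument move could be represented in code. The class and the exact specificity and knowledge-domain label values are illustrative assumptions of this sketch, not the authors' annotation tooling; only the claim/evidence/warrant layer is stated explicitly in the scheme above.

```python
from dataclasses import dataclass

# The argumentation labels come from the scheme described above; the
# specificity and knowledge-domain value sets are assumed for illustration.
ARGUMENTATION = {"claim", "evidence", "warrant"}
SPECIFICITY = {"low", "medium", "high"}
KNOWLEDGE_DOMAIN = {"disciplinary", "experiential"}

@dataclass
class ArgumentMove:
    """One argumentative discourse unit (ADU) with its three labels."""
    text: str
    argumentation: str
    specificity: str
    knowledge_domain: str

    def __post_init__(self) -> None:
        assert self.argumentation in ARGUMENTATION
        assert self.specificity in SPECIFICITY
        assert self.knowledge_domain in KNOWLEDGE_DOMAIN

move = ArgumentMove(
    text="I think she is standing up for herself in that scene.",
    argumentation="claim",
    specificity="low",
    knowledge_domain="disciplinary",
)
```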
Table TABREF2 shows a section of a transcribed classroom discussion along with labels assigned by a human annotator following segmentation.", "id": 2063, "question": "how do they measure discussion quality?", "title": "Annotating Student Talk in Text-based Classroom Discussions" }, { "answers": [ "" ], "context": "The argumentation scheme is based on BIBREF29 and consists of a simplified set of labels derived from Toulmin's Toulmin:58 model: INLINEFORM0 Claim: an arguable statement that presents a particular interpretation of a text or topic. INLINEFORM1 Evidence: facts, documentation, text reference, or testimony used to support or justify a claim. INLINEFORM2 Warrant: reasons explaining how a specific evidence instance supports a specific claim. Our scheme specifies that warrants must come after claim and evidence, since by definition warrants cannot exist without them.", "id": 2064, "question": "do they use a crowdsourcing platform?", "title": "Annotating Student Talk in Text-based Classroom Discussions" }, { "answers": [ "2008 Punyakanok et al. \n2009 Zhao et al. + ME \n2008 Toutanova et al. \n2010 Bjorkelund et al. \n2015 FitzGerald et al. \n2015 Zhou and Xu \n2016 Roth and Lapata \n2017 He et al. \n2017 Marcheggiani et al.\n2017 Marcheggiani and Titov \n2018 Tan et al. \n2018 He et al. \n2018 Strubell et al. \n2018 Cai et al. \n2018 He et al. \n2018 Li et al. \n" ], "context": "The purpose of semantic role labeling (SRL) is to derive the meaning representation for a sentence, which is beneficial to a wide range of natural language processing (NLP) tasks BIBREF0 , BIBREF1 . SRL can be formulated as four subtasks, including predicate detection, predicate disambiguation, argument identification and argument classification. For argument annotation, there are two formulations. One is based on text spans, namely span-based SRL. The other is dependency-based SRL, which annotates the syntactic head of an argument rather than the entire argument span. Figure FIGREF1 shows example annotations.", "id": 2065, "question": "what were the baselines?", "title": "Dependency or Span, End-to-End Uniform Semantic Role Labeling" }, { "answers": [ "LSTM and BERT " ], "context": "Aspect-based sentiment analysis BIBREF0, BIBREF1 is a fine-grained sentiment analysis task which has gained much attention from research and industry. It aims at predicting the sentiment polarity of a particular aspect of the text. With the rapid development of deep learning, this task has been widely addressed by attention-based neural networks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. To name a few, wang2016attention learn to attend on different parts of the sentence given different aspects, then generate aspect-specific sentence representations for sentiment prediction. tay2018learning learn to attend on correct words based on associative relationships between sentence words and a given aspect. These attention-based methods have brought remarkable performance improvements to the ABSA task.", "id": 2066, "question": "Which soft-selection approaches are evaluated?", "title": "Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis" }, { "answers": [ "" ], "context": "Traditional machine learning methods for aspect-based sentiment analysis focus on extracting a set of features to train sentiment classifiers BIBREF10, BIBREF11, BIBREF12, which is usually labor-intensive.
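As a concrete contrast to the attention-based models discussed next, a minimal sketch of such a feature-based pipeline is given below. The toy data, the `[ASPECT]` marker, and the choice of TF-IDF n-gram features are illustrative assumptions; the cited systems use far richer hand-crafted features (lexicons, parse features, etc.).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy (sentence, aspect) pairs encoded as a single string per example.
train_texts = [
    "the pizza was great but the service was slow [ASPECT] service",
    "the pizza was great but the service was slow [ASPECT] food",
    "friendly staff and quick delivery [ASPECT] service",
    "the pasta was bland and overpriced [ASPECT] food",
]
train_labels = ["negative", "positive", "positive", "negative"]

# Bag-of-n-grams + linear classifier: the classic feature-based recipe.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)
print(clf.predict(["slow delivery but tasty pizza [ASPECT] service"]))
```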
With the development of deep learning technologies, the neural attention mechanism BIBREF13 has been widely adopted to address this task BIBREF14, BIBREF2, BIBREF15, BIBREF3, BIBREF16, BIBREF4, BIBREF17, BIBREF6, BIBREF5, BIBREF18, BIBREF19, BIBREF20, BIBREF21. wang2016attention propose attention-based LSTM networks which attend on different parts of the sentence for different aspects. Ma2017Interactive utilize the interactive attention to capture the deep associations between the sentence and the aspect. Hierarchical models BIBREF4, BIBREF17, BIBREF6 are also employed to capture multiple levels of emotional expression for more accurate prediction, given the complexity of sentence structure and semantic diversity. tay2018learning learn to attend based on associative relationships between sentence words and the aspect.", "id": 2067, "question": "Is the model evaluated against the baseline also on single-aspect sentences?", "title": "Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis" }, { "answers": [ "" ], "context": "We first formulate the problem. Given a sentence $S=\lbrace w_1,w_2,...,w_N\rbrace $ and an aspect $A=\lbrace a_1,a_2,...,a_M\rbrace $, the ABSA task is to predict the sentiment of $A$. In our setting, the aspect can be either aspect terms or an aspect category. As aspect terms, $A$ is a snippet of words in $S$, i.e., a sub-sequence of the sentence, while as an aspect category, $A$ represents a semantic category with $M=1$, containing just an abstract token.", "id": 2068, "question": "Is the accuracy of the opinion snippet detection subtask reported?", "title": "Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis" }, { "answers": [ "" ], "context": "The goal of text summarization is to condense a piece of text into a shorter version that contains the salient information. Due to the prevalence of news articles and the need to provide succinct summaries for readers, a majority of existing datasets for summarization come from the news domain BIBREF0, BIBREF1, BIBREF2. However, according to journalistic conventions, the most important information in a news report usually appears near the beginning of the article BIBREF3. While it facilitates faster and easier understanding of the news for readers, this lead bias causes undesirable consequences for summarization models. The output of these models is inevitably affected by the positional information of sentences. Furthermore, the simple baseline of using the top few sentences as summary can achieve a stronger performance than many sophisticated models BIBREF4. It can take a lot of effort for models to overcome the lead bias BIBREF3.", "id": 2069, "question": "What were the baselines?", "title": "Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization" }, { "answers": [ "" ], "context": "End-to-end abstractive text summarization has been intensively studied in recent literature. To generate summary tokens, most architectures take the encoder-decoder approach BIBREF8. BIBREF9 first introduces an attention-based seq2seq model to the abstractive sentence summarization task. However, its output summary degenerates as document length increases, and out-of-vocabulary (OOV) words cannot be efficiently handled. To tackle these challenges, BIBREF4 proposes a pointer-generator network that can both produce words from the vocabulary via a generator and copy words from the source article via a pointer. BIBREF10 utilizes reinforcement learning to improve the result.
BIBREF11 uses a content selector to over-determine phrases in source documents that helps constrain the model to likely phrases. BIBREF12 adds Gaussian focal bias and a salience-selection network to the transformer encoder-decoder structure BIBREF13 for abstractive summarization. BIBREF14 randomly reshuffles the sentences in news articles to reduce the effect of lead bias in extractive summarization.", "id": 2070, "question": "What metric was used in the evaluation step?", "title": "Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization" }, { "answers": [ "" ], "context": "In recent years, pretrained language models have proved to be quite helpful in NLP tasks. The state-of-the-art pretrained models include ELMo BIBREF15, GPT BIBREF7, BERT BIBREF6 and UniLM BIBREF16. Built upon large-scale corpora, these pretrained models learn effective representations for various semantic structures and linguistic relationships. As a result, pretrained models have been widely used with considerable success in applications such as question answering BIBREF17, sentiment analysis BIBREF15 and passage reranking BIBREF18. Furthermore, UniLM BIBREF16 leverages its sequence-to-sequence capability for abstractive summarization; the BERT model has been employed as an encoder in BERTSUM BIBREF19 for extractive/abstractive summarization.", "id": 2071, "question": "What did they pretrain the model on?", "title": "Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization" }, { "answers": [ "" ], "context": "News articles usually follow the convention of placing the most important information early in the content, forming an inverted pyramid structure. This lead bias has been discovered in a number of studies BIBREF3, BIBREF14. One of the consequences is that the lead baseline, which simply takes the top few sentences as the summary, can achieve a rather strong performance in news summarization. For instance, in the CNN/Daily Mail dataset BIBREF0, using the top three sentences as summaries can get a higher ROUGE score than many deep learning based models. This positional bias makes it much harder for models to extract salient information from the article and generate high-quality summaries. For instance, BIBREF14 discovers that most models' performances drop significantly when a random sentence is inserted in the leading position, or when the sentences in a news article are shuffled.", "id": 2072, "question": "What does the data cleaning and filtering process consist of?", "title": "Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization" }, { "answers": [ "" ], "context": "In this section, we introduce our abstractive summarization model, which has a transformer-based encoder-decoder structure. We first formulate the supervised summarization problem and then present the network architecture.", "id": 2073, "question": "What unlabeled corpus did they use?", "title": "Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization" }, { "answers": [ "" ], "context": "Automatic text causality mining is a critical but difficult task because causality is thought to play an essential role in human cognition when making decisions BIBREF0. Thus, automatic text causality mining has been studied extensively in a wide range of areas, such as industry BIBREF1, physics BIBREF2 and healthcare BIBREF3.
A tool to automatically scour the plethora of textual content on the web and extract meaningful causal relations could help us construct causal chains to unveil previously unknown relationships between events BIBREF4 and accelerate the discovery of the intrinsic logic of the events BIBREF5.", "id": 2074, "question": "How effective is MCDN for ambiguous and implicit causality inference compared to state-of-the-art?", "title": "A Multi-level Neural Network for Implicit Causality Detection in Web Texts" }, { "answers": [ "" ], "context": "Causality mining is a fundamental task with abundant upstream applications. Early works utilize Bayesian networks BIBREF14, BIBREF15, syntactic constraints BIBREF16, and dependency structures BIBREF17 to extract cause-effect pairs. Nevertheless, they could hardly induce patterns and rules general enough to avoid overfitting. Further studies incorporate world knowledge that provides a supplement to lexico-syntax analysis. Generalizing nouns to their hypernyms in WordNet and each verb to its class in VerbNet BIBREF18, BIBREF19 eliminates the negative effect of lexical variations and discovers frequent patterns of cause-effect pairs. As is well known, implicit expressions of causality are more frequent. J.-H. Oh et al. BIBREF20 exploited cue words and sequence labeling by CRFs and selected the most relevant causality expressions as complements to implicitly expressed causality. However, the method requires retrieval and ranking over enormous web texts. From the perspective of its natural properties, causality describes relations between regularly correlated events or phenomena. Constructing a cause-effect network or graph could help discover co-occurrence patterns and evolution rules of causation BIBREF3, BIBREF19. Therefore, Zhao et al. BIBREF21 conducted causality reasoning on the heterogeneous network to extract implicit relations across sentences and find new causal relations.", "id": 2075, "question": "What performance did proposed method achieve, how much better is than previous state-of-the-art?", "title": "A Multi-level Neural Network for Implicit Causality Detection in Web Texts" }, { "answers": [ "" ], "context": "Relation Networks (RNs) were initially a simple plug-and-play module to solve Visual-QA problems that fundamentally hinge on relational reasoning BIBREF22. RNs can effectively couple with convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and multi-layer perceptrons (MLPs) to reduce overall network complexity. We gain a general ability to reason about the relations between entities and their properties. Original RNs can only perform single-step inference such as $A \rightarrow B$ rather than $A \rightarrow B \rightarrow C$. For tasks that require multiple steps of relational reasoning, Palm et al. BIBREF23 introduced the recurrent relational network that operates on a graph representation of objects. Pavez et al. BIBREF24 added complex reasoning ability to Memory Networks with RNs, which reduced their computational complexity from quadratic to linear. However, their tasks remain text QA and visual QA. This paper is the first to apply RNs to relation extraction, in the form of the proposed SCRN.", "id": 2076, "question": "What was previous state-of-the-art approach?", "title": "A Multi-level Neural Network for Implicit Causality Detection in Web Texts" }, { "answers": [ "" ], "context": "This section describes the linguistic background of causal relations and the AltLexes dataset, which we used.
It is a commonly held belief that causality can be expressed both explicitly and implicitly using various propositions. In the Penn Discourse Treebank (PDTB) BIBREF25, over $12\%$ of explicit discourse connectives are marked as causal, such as "hence", "as a result" and "consequently", as are nearly $26\%$ of implicit discourse relationships. In addition to these, PDTB contains a type of implicit connective named AltLex (Alternative lexicalization) that is capable of indicating causal relations; it is an open class of markers and potentially infinite.", "id": 2077, "question": "How is Relation network used to infer causality at segment level?", "title": "A Multi-level Neural Network for Implicit Causality Detection in Web Texts" }, { "answers": [ "" ], "context": "We have seen rapid progress in machine reading comprehension in recent years with the introduction of large-scale datasets, such as SQuAD BIBREF3 , MS MARCO BIBREF4 , SearchQA BIBREF5 , TriviaQA BIBREF6 , and QUASAR-T BIBREF7 , and the broad adoption of neural models, such as BiDAF BIBREF8 , DrQA BIBREF9 , DocumentQA BIBREF10 , and QAnet BIBREF11 .", "id": 2078, "question": "What is the TREC-CAR dataset?", "title": "Passage Re-ranking with BERT" }, { "answers": [ "" ], "context": "Slot Filling (SF) is the task of identifying the semantic concept expressed in a natural language utterance. For instance, consider a request to edit an image expressed in natural language: “Remove the blue ball on the table and change the color of the wall to brown”. Here, the user asks for an “Action” (i.e., removing) on one “Object” (blue ball on the table) in the image and changing an “Attribute” (i.e., color) of the image to a new “Value” (i.e., brown). Our goal in SF is to provide a sequence of labels for the given sentence that identifies the semantic concepts it expresses.", "id": 2079, "question": "How does their model utilize contextual information for each word in the given sentence in a multi-task setting?", "title": "Improving Slot Filling by Utilizing Contextual Information" }, { "answers": [ "" ], "context": "The task of Slot Filling is formulated as a sequence labeling problem. Deep learning has been extensively employed for this task (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11). The prior work has mainly utilized the recurrent neural network as the encoder to extract features per word and Conditional Random Field (CRF) BIBREF12 as the decoder to generate the labels per word. Recently, the work of BIBREF1 showed that the global context of the sentence could be useful to enhance the performance of neural sequence labeling. In their approach, they use a separate sequential model to extract word features. Afterwards, using max pooling over the representations of the words, they obtain the sentence representation and concatenate it to the word embedding as the input to the main task encoder (i.e. the RNN model to perform sequence labeling).
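A minimal PyTorch-style sketch of that global-context construction (max pooling over the word representations, then concatenating the pooled vector to every word representation before the main task encoder) might look as follows; the tensor shapes and names are illustrative assumptions.

```python
import torch

batch_size, seq_len, dim = 2, 6, 8
word_reprs = torch.randn(batch_size, seq_len, dim)  # per-word features

# Sentence representation: max pooling over all word representations.
sentence_repr, _ = word_reprs.max(dim=1)                        # (B, D)
sentence_repr = sentence_repr.unsqueeze(1).expand(-1, seq_len, -1)

# Concatenate the global context to every word representation before
# the main task encoder (the RNN that performs the sequence labeling).
encoder_input = torch.cat([word_reprs, sentence_repr], dim=-1)  # (B, T, 2D)
print(encoder_input.shape)  # torch.Size([2, 6, 16])
```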
The benefit of using the global context alongside the word representations is two-fold: 1) it enriches the representation of each word with the semantics of the entire sentence, so the word representations are more contextualized; 2) the global view of the sentence can increase model performance, as it contains information about the entire sentence that might not be encoded in the word representations due to long dependencies.", "id": 2080, "question": "What metrics are used for evaluation?", "title": "Improving Slot Filling by Utilizing Contextual Information" }, { "answers": [ "" ], "context": "Our model is trained in a multi-task setting in which the main task is slot filling to identify the best possible sequence of labels for the given sentence. In the first auxiliary task we aim to increase consistency between the word representation and its context. The second auxiliary task is to enhance task specific information in contextual information. In this section, we explain each of these tasks in more detail.", "id": 2081, "question": "How much better is the proposed model compared to baselines?", "title": "Improving Slot Filling by Utilizing Contextual Information" }, { "answers": [ "" ], "context": "The input to the model is a sequence of words $x_1,x_2,...,x_N$. The goal is to assign each word one of the labels action, object, attribute, value or other. Following other methods for sequence labelling, we use the BIO encoding schema. In addition to the sequence of words, the part-of-speech (POS) tags and the dependency parse tree of the input are given to the model.", "id": 2082, "question": "What are the baselines?", "title": "Improving Slot Filling by Utilizing Contextual Information" }, { "answers": [ "Dataset has 1737 train, 497 dev and 559 test sentences." ], "context": "In this sub-task we aim to increase the consistency of the word representation and its context. To obtain the context of each word we perform max pooling over all words of the sentence excluding the word itself:", "id": 2083, "question": "How big is the slot filling dataset?", "title": "Improving Slot Filling by Utilizing Contextual Information" }, { "answers": [ "" ], "context": "A run-on sentence is defined as having at least two main or independent clauses that lack either a conjunction to connect them or a punctuation mark to separate them. Run-ons are problematic because they make the sentence unfriendly not only to the reader but potentially also to the local discourse. Consider the example in Table TABREF1 .", "id": 2084, "question": "Which machine learning models do they use to correct run-on sentences?", "title": "How do you correct run-on sentences it's not as easy as it seems" }, { "answers": [ "4.756 million sentences" ], "context": "Early work in the field of GEC focused on correcting specific error types such as preposition and article errors BIBREF2 , BIBREF3 , BIBREF4 , but did not consider run-on sentences. The closest work to our own is BIBREF5 , who used Conditional Random Fields (CRFs) for correcting comma errors (excluding comma splices, a type of run-on sentence). BIBREF6 used a similar system based on CRFs but focused on comma splice correction. Recently, the field has focused on the task of whole-sentence correction, targeting all errors in a sentence in one pass.
Whole-sentence correction methods borrow from advances in statistical machine translation BIBREF7 , BIBREF8 , BIBREF9 and, more recently, neural machine translation BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 .", "id": 2085, "question": "How large is the dataset they generate?", "title": "How do you correct run-on sentences it's not as easy as it seems" }, { "answers": [ "" ], "context": "Recent research advances in Computer Vision (CV) and Natural Language Processing (NLP) introduced several tasks that are quite challenging to solve, the so-called AI-complete problems. Most of those tasks require systems that understand information from multiple sources, i.e., semantics from visual and textual data, in order to provide some kind of reasoning. For instance, image captioning BIBREF0, BIBREF1, BIBREF2 presents itself as a hard task to solve, though it is actually challenging to quantitatively evaluate models on that task, and recent studies BIBREF3 have raised questions about its AI-completeness.", "id": 2086, "question": "What are least important components identified in the training of VQA models?", "title": "Component Analysis for Visual Question Answering Architectures" }, { "answers": [ "" ], "context": "The task of VQA has gained attention since Antol et al. BIBREF3 presented a large-scale dataset with open-ended questions. Many of the developed VQA models employ a very similar architecture BIBREF3, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27: they represent images with features from pre-trained convolutional neural networks; they use word embeddings or recurrent neural networks to represent questions and/or answers; and they combine those features in a classification model over possible answers.", "id": 2087, "question": "What type of experiments are performed?", "title": "Component Analysis for Visual Question Answering Architectures" }, { "answers": [ "" ], "context": "In this section we first introduce the baseline approach, with default image and text encoders, alongside a pre-defined fusion strategy. That base approach is inspired by the pioneering work of Antol et al. on VQA BIBREF3. To understand the importance of each component, we update the base architecture according to each component we are investigating.", "id": 2088, "question": "What components are identified as core components for training VQA models?", "title": "Component Analysis for Visual Question Answering Architectures" }, { "answers": [ "" ], "context": "Text simplification (hereafter TS) has received increasing interest from the scientific community in recent years. It aims at producing a simpler version of a source text that is both easier to read and to understand, thus improving the accessibility of text for people suffering from a range of disabilities such as aphasia BIBREF0 or dyslexia BIBREF1 , as well as for second language learners BIBREF2 and people with low literacy BIBREF3 . This topic has been researched for a variety of languages such as English BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , French BIBREF8 , Spanish BIBREF9 , Portuguese BIBREF10 , Italian BIBREF11 and Japanese BIBREF12 .", "id": 2089, "question": "what approaches are compared?", "title": "Reference-less Quality Estimation of Text Simplification Systems" }, { "answers": [ "" ], "context": "One prominent way of modelling the decision-making component of a spoken dialogue system (SDS) is to use (partially observable) Markov decision processes ((PO)MDPs) BIBREF0, BIBREF1.
There, reinforcement learning (RL) BIBREF2 is applied to find the optimal system behaviour represented by the policy $\pi $. Task-oriented dialogue systems model the reward $r$, used to guide the learning process, traditionally with task success as the principal reward component BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF1, BIBREF7, BIBREF8.", "id": 2090, "question": "What model do they use as a baseline to estimate satisfaction?", "title": "Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning" }, { "answers": [ "" ], "context": "Dialogue systems have become a very popular research topic in recent years with the rapid improvement of personal assistants and the growing demand for online customer support. However, research has been split into two subfields BIBREF2: models presented for generation of open-ended conversations BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and work on solving goal-oriented dialogue through dialogue management pipelines that include dialogue state tracking and dialogue policy BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.", "id": 2091, "question": "what semantically conditioned models did they compare with?", "title": "Retrieval-based Goal-Oriented Dialogue Generation" }, { "answers": [ "" ], "context": "Linear classifiers in combination with the right features achieve good performance on text classification tasks BIBREF0 . Those hand-crafted features provide baselines for evaluating deep learning methods and are sometimes difficult to beat BIBREF1 , BIBREF2 . In some cases, hand-crafted features can even be combined with learned features to improve performance on a given task BIBREF3 , BIBREF4 , highlighting some complementarity in the information captured by each approach. Conducting empirical experiments to study such complementarity would be beneficial, and the reasons are threefold: Firstly, this enables us to compare the performance of both hand-crafted and learned representations and make design decisions regarding the trade-offs between speed and accuracy on a specific dataset. Secondly, it helps in investigating where the performance gaps are, whether these methods can complement each other, and how they can be combined to improve performance. Finally, it allows us to derive new linguistic hypotheses, as in many cases deep learning methods are great engineering tools but operate as black boxes from which it is difficult to extract linguistic insights.", "id": 2092, "question": "Do they differentiate insights where they are dealing with learned or engineered representations?", "title": "INFODENS: An Open-source Framework for Learning Text Representations" }, { "answers": [ "" ], "context": "The framework is designed in a modular and developer-friendly manner to encourage changes and extensions. The source code is accompanied by a user and a developer guide, and we give a brief overview of the architecture in this section, summarized in Figure FIGREF2.
The framework consists of the following frozen and hot spots:", "id": 2093, "question": "Do they show an example of usage for INFODENS?", "title": "INFODENS: An Open-source Framework for Learning Text Representations" }, { "answers": [ "" ], "context": "These are the modules of the framework that need not be changed for extending the functionality in typical use cases.", "id": 2094, "question": "What kind of representation exploration does INFODENS provide?", "title": "INFODENS: An Open-source Framework for Learning Text Representations" }, { "answers": [ "" ], "context": "Query-focused summarization BIBREF0 aims to create a brief, well-organized and fluent summary that answers the need of the query. It is useful in many scenarios such as news services and search engines. Nowadays, most summarization systems are under the extractive framework which directly selects existing sentences to form the summary. Basically, there are two major tasks in extractive query-focused summarization, i.e., to measure the saliency of a sentence and its relevance to a user's query.", "id": 2095, "question": "What models do they compare to?", "title": "AttSum: Joint Learning of Focusing and Summarization with Neural Attention" }, { "answers": [ "" ], "context": "In a world where traditional financial information is ubiquitous and the financial models are largely homogeneous, finding hidden information that has not been priced in from alternative data is critical. The recent development in Natural Language Processing provides such opportunities to look into text data in addition to numerical data. When the market sets the stock price, it is not uncommon that the expectation of company growth outweighs the company fundamentals. Twitter, an online news and social network where users post and interact with messages to express views about certain topics, contains valuable information on the public mood and sentiment. A collection of research BIBREF0 BIBREF1 has shown that there is a positive correlation between the \"public mood\" and the \"market mood\". Other research BIBREF2 also shows that a significant correlation exists between Twitter sentiment and the abnormal return during peaks of Twitter volume around major events.", "id": 2096, "question": "What is the optimal trading strategy based on reinforcement learning?", "title": "Trading the Twitter Sentiment with Reinforcement Learning" }, { "answers": [ "" ], "context": "There are two options for getting the Tweets. First, Twitter provides an API to download the Tweets. However, rate limits and history limits make it unsuitable for this paper. Second, scraping Tweets directly from the Twitter website. Using the second option, the daily Tweets for stocks of interest from January 2015 to June 2017 were downloaded.", "id": 2097, "question": "Do the authors give any examples of major events which draw the public's attention and the impact they have on stock price?", "title": "Trading the Twitter Sentiment with Reinforcement Learning" }, { "answers": [ "" ], "context": "To translate each tweet into a sentiment score, the Stanford CoreNLP software was used. Stanford CoreNLP is designed to make linguistic analysis accessible to the general public. It provides named entity recognition, co-reference resolution, basic dependencies, and many other text understanding applications. An example that illustrates the basic functionality of Stanford CoreNLP is shown in Figure FIGREF5.
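A minimal sketch of scoring one tweet with the CoreNLP sentiment annotator is given below. It assumes a CoreNLP server is already running locally on port 9000, and uses the pycorenlp client as one common choice; the paper does not name its client library, so treat both as assumptions.

```python
from pycorenlp import StanfordCoreNLP

# Assumes a Stanford CoreNLP server is running locally, e.g.:
#   java -mx5g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
nlp = StanfordCoreNLP("http://localhost:9000")

tweet = "$AAPL is on fire today, great earnings!"
result = nlp.annotate(tweet, properties={
    "annotators": "sentiment",
    "outputFormat": "json",
})

for sentence in result["sentences"]:
    # sentimentValue ranges from 0 (very negative) to 4 (very positive).
    print(sentence["sentiment"], sentence["sentimentValue"])
```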
", "id": 2098, "question": "Which tweets are used to output the daily sentiment signal?", "title": "Trading the Twitter Sentiment with Reinforcement Learning" }, { "answers": [ "" ], "context": "Feature engineering is the process of extracting meaningful information from the raw data in order to improve the performance of a machine learning model. Domain knowledge and intuition are often applied to keep the number of features reasonable relative to the training data size. Two categories of features are defined: technical features and sentiment features. The technical features include previous day's return and volume, price momentum and volatility. The sentiment features include number of Tweets, daily average sentiment score, cross-section sentiment volatility, sentiment momentum and reversal.", "id": 2099, "question": "What is the baseline machine learning prediction approach?", "title": "Trading the Twitter Sentiment with Reinforcement Learning" }, { "answers": [ "can be biased by dataset used and may generate categories which are suboptimal compared to human designed categories" ], "context": "Words are the smallest elements of a language with a practical meaning. Researchers from diverse fields including linguistics BIBREF0 , computer science BIBREF1 and statistics BIBREF2 have developed models that seek to capture “word meaning” so that these models can accomplish various NLP tasks such as parsing, word sense disambiguation and machine translation. Most of the effort in this field is based on the distributional hypothesis BIBREF3 which claims that a word is characterized by the company it keeps BIBREF4 . Building on this idea, several vector space models such as the well-known Latent Semantic Analysis (LSA) BIBREF5 and Latent Dirichlet Allocation (LDA) BIBREF6 that make use of word distribution statistics have been proposed in distributional semantics. Although these methods have been commonly used in NLP, more recent techniques that generate dense, continuous-valued vectors, called embeddings, have been receiving increasing interest in NLP research. Approaches that learn embeddings include neural network based predictive methods BIBREF1 , BIBREF7 and count-based matrix-factorization methods BIBREF8 . Word embeddings brought about significant performance improvements in many intrinsic NLP tasks such as analogy or semantic textual similarity tasks, as well as downstream NLP tasks such as part-of-speech (POS) tagging BIBREF9 , named entity recognition BIBREF10 , word sense disambiguation BIBREF11 , sentiment analysis BIBREF12 and cross-lingual studies BIBREF13 .", "id": 2100, "question": "What are the weaknesses of their proposed interpretability quantification method?", "title": "Semantic Structure and Interpretability of Word Embeddings" }, { "answers": [ "it is less expensive and quantifies interpretability using continuous values rather than binary evaluations" ], "context": "In the word embedding literature, the problem of interpretability has been approached via several different routes. For learning sparse, interpretable word representations from co-occurrence variant matrices, BIBREF21 suggested algorithms based on non-negative matrix factorization (NMF) and the resulting representations are called non-negative sparse embeddings (NNSE). To address memory and scale issues of the algorithms in BIBREF21 , BIBREF22 proposed an online method of learning interpretable word embeddings.
In both studies, interpretability was evaluated using a word intrusion test introduced in BIBREF20 . The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. As an alternative method to incorporate human judgement, BIBREF23 proposed joint non-negative sparse embedding (JNNSE), where the aim is to combine text-based similarity information among words with brain activity based similarity information to improve interpretability. Yet, this approach still requires labor-intensive collection of neuroimaging data from multiple subjects.", "id": 2101, "question": "What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?", "title": "Semantic Structure and Interpretability of Word Embeddings" }, { "answers": [ "" ], "context": "Natural language interfaces have been gaining significant popularity, enabling ordinary users to write and execute complex queries. One of the prominent paradigms for developing NL interfaces is semantic parsing, which is the mapping of NL phrases into a formal language. As Machine Learning techniques are standardly used in semantic parsing, a training set of question-answer pairs is provided alongside a target database BIBREF0 , BIBREF1 , BIBREF2 . The parser is a parameterized function that is trained by updating its parameters such that questions from the training set are translated into queries that yield the correct answers.", "id": 2102, "question": "How do they generate a graphic representation of a query?", "title": "Explaining Queries over Web Tables to Non-Experts" }, { "answers": [ "" ], "context": "We review our system architecture from Figure FIGREF7 and describe its general workflow.", "id": 2103, "question": "How do they gather data for the query explanation problem?", "title": "Explaining Queries over Web Tables to Non-Experts" }, { "answers": [ "" ], "context": "We begin by formally defining our task of querying tables. Afterwards, we discuss the formal query language and show how lambda DCS queries can be translated directly into SQL.", "id": 2104, "question": "Which query explanation method was preferred by the users in terms of correctness?", "title": "Explaining Queries over Web Tables to Non-Experts" }, { "answers": [ "" ], "context": "An NL interface for querying tables receives a question INLINEFORM0 on a table INLINEFORM1 and outputs a set of values INLINEFORM2 as the answer (where each value is either the content of a cell, or the result of an aggregate function on cells). As discussed in the introduction, we make the assumption that a query concerns a single table.", "id": 2105, "question": "Do they conduct a user study where they show an NL interface with and without their explanation?", "title": "Explaining Queries over Web Tables to Non-Experts" }, { "answers": [ "" ], "context": "Following the definition of our data model, we introduce our formal query language, lambda dependency-based compositional semantics (lambda DCS) BIBREF6 , BIBREF0 , a language inspired by lambda calculus that revolves around sets.
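To make the set-oriented flavour of lambda DCS and the direct SQL translation mentioned above concrete, a schematic example follows. The table, columns, and the surface syntax of the lambda DCS expression are invented for illustration and are not taken from the paper.

```python
# Schematic example (invented table and syntax). A lambda DCS query
# denotes a set of values; for a table City(name, population), a query
# of roughly the form
#
#     R[Name].Population.(> 1000000)
#
# denotes the set of names of rows whose population exceeds one million.
# A direct SQL rendering of the same set could be:
sql = """
SELECT name
FROM City
WHERE population > 1000000;
"""
print(sql)
```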
Lambda DCS was originally designed for building an NL interface over Freebase BIBREF9 .", "id": 2106, "question": "How do the users in the user studies evaluate reliability of an NL interface?", "title": "Explaining Queries over Web Tables to Non-Experts" }, { "answers": [ "" ], "context": "Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.", "id": 2107, "question": "What was the task given to workers?", "title": "Autocompletion interfaces make crowd workers slower, but their use promotes response diversity" }, { "answers": [ "By computing number of unique responses and number of responses divided by the number of unique responses to that question for each of the questions" ], "context": "An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.", "id": 2108, "question": "How was lexical diversity measured?", "title": "Autocompletion interfaces make crowd workers slower, but their use promotes response diversity" }, { "answers": [ "" ], "context": "Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.", "id": 2109, "question": "How many responses did they obtain?", "title": "Autocompletion interfaces make crowd workers slower, but their use promotes response diversity" }, { "answers": [ "" ], "context": "We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.", "id": 2110, "question": "What crowdsourcing platform was used?", "title": "Autocompletion interfaces make crowd workers slower, but their use promotes response diversity" }, { "answers": [ "" ], "context": "Abstractive text summarization is an important text generation task. With the application of the sequence-to-sequence model and the publication of large-scale datasets, the quality of automatically generated summaries has been greatly improved BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .
However, the semantic consistency of the automatically generated summaries is still far from satisfactory.", "id": 2111, "question": "Are results reported only for English data?", "title": "Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency" }, { "answers": [ "RNN-context, SRB, CopyNet, RNN-distract, DRGD" ], "context": "Based on the fact that the spurious correspondence is not stable and its realization in the model is prone to change, we propose to alleviate the issue heuristically by regularization. We use the cross-entropy with an annealed output distribution as the regularization term in the loss so that small fluctuations in the distribution are suppressed and a more robust and stable correspondence is learned. By correspondence, we mean the relation between (a) the current output, and (b) the source content and the partially generated output. Furthermore, we propose to use an additional output layer to generate the annealed output distribution. Due to the same fact, the two output layers will differ more in the words that superficially co-occur, so that the output distribution can be better regularized.", "id": 2112, "question": "Which existing models does this approach outperform?", "title": "Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency" }, { "answers": [ "comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant" ], "context": "Typically, in the training of the sequence-to-sequence model, only the one-hot hard target is used in the cross-entropy based loss function. For an example in the training set, the loss of an output vector is DISPLAYFORM0", "id": 2113, "question": "What human evaluation method is proposed?", "title": "Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency" }, { "answers": [ "" ], "context": "Open-domain response generation BIBREF0, BIBREF1 for single-round short text conversation BIBREF2 aims at generating a meaningful and interesting response given a query from human users. Neural generation models are of growing interest in this topic due to their potential to leverage massive conversational datasets on the web. These generation models, such as encoder-decoder models BIBREF3, BIBREF2, BIBREF4, directly build a mapping from the input query to its output response, which treats all query-response pairs uniformly and optimizes maximum likelihood estimation (MLE). However, when the models converge, they tend to output bland and generic responses BIBREF5, BIBREF6, BIBREF7.", "id": 2114, "question": "How is human evaluation performed, what were the criteria?", "title": "A Discrete CVAE for Response Generation on Short-Text Conversation" }, { "answers": [ "" ], "context": "In this section, we briefly review recent advancements in encoder-decoder models and CVAE-based models for response generation.", "id": 2115, "question": "What automatic metrics are used?", "title": "A Discrete CVAE for Response Generation on Short-Text Conversation" }, { "answers": [ "" ], "context": "Encoder-decoder models for short-text conversation BIBREF3, BIBREF2 maximize the likelihood of responses given queries. During testing, a decoder sequentially generates a response using search strategies such as beam search.
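A minimal sketch of such a beam-search decoder is given below. The `step_log_probs` function, which abstracts one decoding step of the trained model, and the special token ids are assumptions of this sketch.

```python
def beam_search(step_log_probs, beam_size=4, max_len=20, bos=0, eos=1):
    """Generic beam search over a step function.

    step_log_probs(prefix) must return a dict {token_id: log_prob} for the
    next token; it stands in for one decoder step of the trained model.
    """
    beams = [([bos], 0.0)]          # (token sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # Keep only the beam_size best partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:               # every hypothesis has emitted EOS
            break
    return max(finished + beams, key=lambda c: c[1])
```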
However, these models frequently generate bland and generic responses.", "id": 2116, "question": "What other kinds of generation models are used in experiments?", "title": "A Discrete CVAE for Response Generation on Short-Text Conversation" }, { "answers": [ "" ], "context": "A few works indicate that it is worth trying to apply to dialogue generation the CVAE, which was originally used in image generation BIBREF16, BIBREF17 and is optimized with the variational lower bound of the conditional log-likelihood. For task-oriented dialogues, BIBREF22 wen2017latent use the latent variable to model intentions in the framework of neural variational inference. For chit-chat multi-round conversations, BIBREF23 serban2017hierarchical model the generative process with multiple levels of variability based on a hierarchical sequence-to-sequence model with a continuous high-dimensional latent variable. BIBREF14 zhao2017learning make use of the CVAE, where the latent variable is used to capture discourse-level variations. BIBREF24 gu2018dialogwae propose to induce the latent variables by transforming context-dependent Gaussian noise. BIBREF15 shen2017conditional present a conditional variational framework for generating specific responses based on specific attributes. Yet, it is observed in other tasks such as image captioning BIBREF25 and question generation BIBREF26 that the CVAE-based generation models suffer from the low output diversity problem, i.e. multiple sampled variables point to the same generated sequences. In this work, we utilize a discrete latent variable with an interpretable meaning to alleviate this low output diversity problem on short-text conversation.", "id": 2117, "question": "How does the discrete latent variable have an explicit semantic meaning to improve the CVAE on short-text conversation?", "title": "A Discrete CVAE for Response Generation on Short-Text Conversation" }, { "answers": [ "" ], "context": "The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state-of-the-art performance despite having no task-specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short text classification.", "id": 2118, "question": "What news dataset was used?", "title": "Short-Text Classification Using Unsupervised Keyword Expansion" }, { "answers": [ "" ], "context": "Document expansion methods have typically focused on creating new features with the help of custom models. Word co-occurrence models BIBREF4, topic modeling BIBREF5, latent concept expansion BIBREF6, and word embedding clustering BIBREF7 are all examples of document expansion methods that must first be trained using either the original dataset or an external dataset from within the same domain. The expansion models may therefore only be used when there is a sufficiently large training set.", "id": 2119, "question": "How do they determine similarity between predicted word and topics?", "title": "Short-Text Classification Using Unsupervised Keyword Expansion" }, { "answers": [ "" ], "context": "The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used.
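The Kaggle distribution of this dataset ships as one JSON record per line; a minimal sketch of the kind of loading and subsampling described above is shown below. The local file name and the random seed are assumptions of this sketch.

```python
import json
import random

# One JSON record per line, with fields such as "category", "headline",
# and "short_description"; the file name below is an assumption.
with open("News_Category_Dataset_v2.json") as f:
    records = [json.loads(line) for line in f]

random.seed(0)
sample = random.sample(records, 33000)  # roughly 33k headlines, as above
categories = {r["category"] for r in sample}
print(len(sample), "headlines spanning", len(categories), "categories")
```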
Further analysis can be found in Table SECREF12 in the appendix.", "id": 2120, "question": "What is the language model pre-trained on?", "title": "Short-Text Classification Using Unsupervised Keyword Expansion" }, { "answers": [ "EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO" ], "context": "Language Identification (LID) is the Natural Language Processing (NLP) task of automatically recognizing the language that a document is written in. While this task was called \"solved\" by some authors over a decade ago, it has seen a resurgence in recent years thanks to the rise in popularity of social media BIBREF0, BIBREF1, and the corresponding daily creation of millions of new messages in dozens of different languages including rare ones that are not often included in language identification systems. Moreover, these messages are typically very short (Twitter messages were until recently limited to 140 characters) and very noisy (including an abundance of spelling mistakes, non-word tokens like URLs, emoticons, or hashtags, as well as foreign-language words in messages of another language), whereas LID had been considered solved for long and clean documents. Indeed, several studies have shown that LID systems trained to a high accuracy on traditional documents suffer significant drops in accuracy when applied to short social-media texts BIBREF2, BIBREF3.", "id": 2121, "question": "What languages are represented in the dataset?", "title": "Language Identification on Massive Datasets of Short Message using an Attention Mechanism CNN" }, { "answers": [ "" ], "context": "In this section, we will consider recent advances on the specific challenge of language identification in short text messages. Readers interested in a general overview of the area of LID, including older work and other challenges in the area, are encouraged to read the thorough survey of BIBREF0.", "id": 2122, "question": "Which existing language ID systems are tested?", "title": "Language Identification on Massive Datasets of Short Message using an Attention Mechanism CNN" }, { "answers": [ "" ], "context": "One of the first, if not the first, systems for LID specialized for short text messages is the graph-based method of BIBREF5. Their graph is composed of vertices, or character n-grams (n = 3) observed in messages in all languages, and of edges, or connections between successive n-grams weighted by the observed frequency of that connection in each language. Identifying the language of a new message is then done by identifying the most probable path in the graph that generates that message. Their method achieves an accuracy of 0.975 on their own Twitter corpus.", "id": 2123, "question": "How was the one year worth of data collected?", "title": "Language Identification on Massive Datasets of Short Message using an Attention Mechanism CNN" }, { "answers": [ "" ], "context": "All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at helping field linguists collect, transcribe, analyze, and annotate endangered language data (e.g.
BIBREF4).", "id": 2124, "question": "Which language family does Mboshi belong to?", "title": "Controlling Utterance Length in NMT-based Word Segmentation with Attention" }, { "answers": [ "" ], "context": "In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation.", "id": 2125, "question": "Does the paper report any alignment-only baseline?", "title": "Controlling Utterance Length in NMT-based Word Segmentation with Attention" }, { "answers": [ "" ], "context": "Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of an RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially.", "id": 2126, "question": "What is the dataset used in the paper?", "title": "Controlling Utterance Length in NMT-based Word Segmentation with Attention" }, { "answers": [ "" ], "context": "Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as:", "id": 2127, "question": "How is the word segmentation task evaluated?", "title": "Controlling Utterance Length in NMT-based Word Segmentation with Attention" }, { "answers": [ "" ], "context": "Dependency parsing predicts the existence and type of linguistic dependency relations between words (as shown in Figure FIGREF1), which is a critical step in accomplishing deep natural language processing. Dependency parsing has been well developed BIBREF0, BIBREF1, and it generally relies on two types of parsing models: transition-based models and graph-based models. The former BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF4 traditionally apply local and greedy transition-based algorithms, while the latter BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 apply globally optimized graph-based algorithms.", "id": 2128, "question": "What is the performance compared to former models?", "title": "Global Greedy Dependency Parsing" }, { "answers": [ "Proposed vs best baseline:\nDecoding: 8541 vs 8532 tokens/sec\nTraining: 8h vs 8h" ], "context": "The global greedy parser builds its dependency trees in a stepwise manner without backtracking, adopting a general greedy decoding algorithm as in easy-first parsers.", "id": 2129, "question": "How much faster are training and decoding compared to former models?", "title": "Global Greedy Dependency Parsing" }, { "answers": [ "" ], "context": "Neural Machine Translation (NMT) has made considerable progress in recent years BIBREF0 , BIBREF1 , BIBREF2 .
Traditional NMT has relied solely on parallel sentence pairs for training data, which can be an expensive and scarce resource. This motivates the use of monolingual data, usually more abundant BIBREF3 . Approaches using monolingual data for machine translation include language model fusion for both phrase-based BIBREF4 , BIBREF5 and neural MT BIBREF6 , BIBREF7 , back-translation BIBREF8 , BIBREF9 , unsupervised machine translation BIBREF10 , BIBREF11 , dual learning BIBREF12 , BIBREF13 , BIBREF14 , and multi-task learning BIBREF15 .", "id": 2130, "question": "What datasets was the method evaluated on?", "title": "Tagged Back-Translation" }, { "answers": [ "" ], "context": "Automatic dubbing can be regarded as an extension of the speech-to-speech translation (STST) task BIBREF0, which is generally seen as the combination of three sub-tasks: (i) transcribing speech to text in a source language (ASR), (ii) translating text from a source to a target language (MT) and (iii) generating speech from text in a target language (TTS). Independently of the implementation approach BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, the main goal of STST is producing an output that reflects the linguistic content of the original sentence. On the other hand, automatic dubbing aims to replace all speech contained in a video document with speech in a different language, so that the result sounds and looks as natural as the original. Hence, in addition to conveying the same content as the original utterance, dubbing should also match the original timbre, emotion, duration, prosody, background noise, and reverberation.", "id": 2131, "question": "Is the model evaluated against a baseline?", "title": "From Speech-to-Speech Translation to Automatic Dubbing" }, { "answers": [ "" ], "context": "With some approximation, we consider here automatic dubbing of the audio track of a video as the task of STST, i.e. ASR + MT + TTS, with the additional requirement that the output must be temporally, prosodically and acoustically close to the original audio. We investigate an architecture (see Figure 1) that enhances the STST pipeline with (i) enhanced MT able to generate translations of variable lengths, (ii) a prosodic alignment module that temporally aligns the MT output with the speech segments in the original audio, (iii) enhanced TTS to accurately control the duration of each produced utterance, and, finally, (iv) audio rendering that adds to the TTS output background noise and reverberation extracted from the original audio. In the following, we describe each component in detail, with the exception of ASR, for which we use an off-the-shelf online service BIBREF16 .", "id": 2132, "question": "How many people are employed for the subjective evaluation?", "title": "From Speech-to-Speech Translation to Automatic Dubbing" }, { "answers": [ "" ], "context": "The prominent model for representing the semantics of words is the distributional vector space model BIBREF2 , and the prevalent approach for constructing these models is the distributional one, which assumes that the semantics of a word can be predicted from its context, hence placing words with similar contexts in close proximity to each other in an imaginary high-dimensional vector space. 
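As a toy illustration of this distributional view (a sketch, not any of the quoted papers' implementations), the snippet below gathers symmetric-window co-occurrence counts, the raw statistics from which count-based word vectors are built; the corpus and window size are invented.

```python
from collections import Counter, defaultdict

def cooccurrence(sentences, window=2):
    # Symmetric-window co-occurrence counts: the raw statistics behind
    # count-based distributional word vectors.
    counts = defaultdict(Counter)
    for sent in sentences:
        toks = sent.lower().split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    counts[w][toks[j]] += 1
    return counts

corpus = ["the cat sat on the mat", "the dog sat on the rug"]  # invented corpus
counts = cooccurrence(corpus)
print(counts["cat"], counts["dog"])  # identical context counts -> nearby vectors
```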
Distributional techniques, either in their conventional form, which computes co-occurrence matrices BIBREF2 , BIBREF3 and learns high-dimensional vectors for words, or the recent neural-based paradigm, which directly learns latent low-dimensional vectors, usually referred to as embeddings BIBREF4 , rely on a multitude of occurrences for each individual word to enable accurate representations. As a result of this statistical nature, words that are infrequent or unseen during training, such as domain-specific words, will not have reliable embeddings. This is the case even if massive corpora are used for training, such as the 100B-word Google News dataset BIBREF5 .", "id": 2133, "question": "What other embedding models are tested?", "title": "Learning Rare Word Representations using Semantic Bridging" }, { "answers": [ "" ], "context": "We take an existing semantic space INLINEFORM0 and enrich it with rare and unseen words on the basis of the knowledge encoded for them in an external knowledge base (KB) INLINEFORM1 . The procedure has two main steps: we first embed INLINEFORM2 to transform it from a graph representation into a vector space representation (§ SECREF2 ), and then map this space to INLINEFORM3 (§ SECREF7 ). Our methodology is illustrated in Figure 1.", "id": 2134, "question": "How is performance measured?", "title": "Learning Rare Word Representations using Semantic Bridging" }, { "answers": [ "" ], "context": "Our coverage enhancement starts by transforming the knowledge base INLINEFORM0 into a vector space representation that is comparable to that of the corpus-based space INLINEFORM1 . To this end, we use two techniques for learning low-dimensional feature spaces from knowledge graphs: DeepWalk and node2vec. DeepWalk uses a stream of short random walks in order to extract local information for a node from the graph. By treating these walks as short sentences and phrases in a special language, the approach learns latent representations for each node. Similarly, node2vec learns a mapping of nodes to continuous vectors that maximizes the likelihood of preserving network neighborhoods of nodes. Thanks to a flexible objective that is not tied to a particular sampling strategy, node2vec reports improvements over DeepWalk on multiple classification and link prediction datasets. For both these systems we used the default parameters and set the dimensionality of the output representation to 100. Also, note that nodes in the semantic graph of WordNet represent synsets. Hence, a polysemous word would correspond to multiple nodes. In our experiments, we use the MaxSim assumption of BIBREF11 in order to map words to synsets.", "id": 2135, "question": "How are rare words defined?", "title": "Learning Rare Word Representations using Semantic Bridging" }, { "answers": [ "" ], "context": "There is growing interest in research on automated fake news detection and fact checking, as the need for it increases due to the dangerous speed at which fake news spreads on social media BIBREF0. With as many as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake news is a pressing need.", "id": 2136, "question": "What other datasets are used?", "title": "Localization of Fake News Detection via Multitask Transfer Learning" }, { "answers": [ "" ], "context": "We provide a baseline model as a comparison point, using a few-shot learning-based technique to benchmark transfer learning against methods designed with low-resource settings in mind. 
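To make the "embed the KB, then map it into the corpus space" recipe above concrete: one standard way to align two embedding spaces is a least-squares linear map fitted on words present in both. The NumPy sketch below is a hedged illustration on synthetic data; it is not the authors' code, and the sizes are invented.

```python
import numpy as np

def fit_linear_map(src, tgt):
    # Least-squares W minimizing ||src @ W - tgt||, fitted on anchor words
    # present in both spaces; rare words are then projected via src @ W.
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

rng = np.random.default_rng(1)
graph_vecs = rng.normal(size=(1000, 100))      # KB-graph embeddings (node2vec-like, toy)
true_map = rng.normal(size=(100, 300))
corpus_vecs = graph_vecs @ true_map            # synthetic corpus space for this demo
W = fit_linear_map(graph_vecs[:800], corpus_vecs[:800])   # fit on "anchor" vocabulary
projected = graph_vecs[800:] @ W               # rare/unseen words enter the corpus space
print(np.allclose(projected, corpus_vecs[800:]))          # True on this synthetic setup
```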
We then show three TL techniques that we studied and adapted to the task of fake news detection.", "id": 2137, "question": "What is the size of the dataset?", "title": "Localization of Fake News Detection via Multitask Transfer Learning" }, { "answers": [ "Online sites tagged as fake news sites by Verafiles and NUJP, and news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera" ], "context": "We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.", "id": 2138, "question": "What is the source of the dataset?", "title": "Localization of Fake News Detection via Multitask Transfer Learning" }, { "answers": [ "Siamese neural network consisting of an embedding layer, an LSTM layer and a feed-forward layer with ReLU activations" ], "context": "ULMFiT BIBREF5 was introduced as a TL method for Natural Language Processing (NLP) that works akin to ImageNet BIBREF13 pretraining in Computer Vision.", "id": 2139, "question": "What were the baselines?", "title": "Localization of Fake News Detection via Multitask Transfer Learning" }, { "answers": [ "" ], "context": "Autonomous robots, such as service robots, operating in human living environments have to be able to perform various tasks and communicate through language. To this end, robots are required to acquire novel concepts and vocabulary on the basis of the information obtained from their sensors, e.g., laser sensors, microphones, and cameras, and to recognize a variety of objects, places, and situations in an ambient environment. Above all, we consider it important for the robot to learn the names that humans associate with places in the environment and the spatial areas corresponding to these names; i.e., the robot has to be able to understand words related to places. Therefore, it is important to deal with considerable uncertainty, such as the robot's movement errors, sensor noise, and speech recognition errors.", "id": 2140, "question": "How do they show that acquiring names of places helps self-localization?", "title": "Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences" }, { "answers": [ "" ], "context": "Most studies on lexical acquisition typically focus on lexicons about objects BIBREF0 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Many of these studies have not been able to address the lexical acquisition of words other than those related to objects, e.g., words about places.", "id": 2141, "question": "How do they evaluate how their model acquired words?", "title": "Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences" }, { "answers": [ "" ], "context": "The following studies have addressed lexical acquisition related to places. 
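Returning to the siamese baseline described above: both inputs pass through one shared encoder, and the element-wise distance between the two encodings is turned into a match probability. The toy sketch below mean-pools embeddings instead of the LSTM encoder mentioned in the answer, purely for brevity; all names and sizes are invented.

```python
import numpy as np

def encode(token_ids, emb):
    # Shared encoder (here simply mean-pooled embeddings; the cited baseline
    # uses an LSTM, which this toy sketch replaces for brevity).
    return emb[token_ids].mean(axis=0)

def siamese_score(a_ids, b_ids, emb, w, bias):
    # Both inputs go through the SAME encoder; the element-wise distance of
    # the two encodings feeds one logistic unit: P(same class).
    d = np.abs(encode(a_ids, emb) - encode(b_ids, emb))
    return 1.0 / (1.0 + np.exp(-(d @ w + bias)))

rng = np.random.default_rng(2)
emb = rng.normal(size=(5000, 64))         # toy vocabulary and embedding size
w = rng.normal(size=64)                   # untrained scoring weights
print(siamese_score([10, 42, 7], [10, 42, 9], emb, w, 0.0))
```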
However, these studies could not utilize the learned language knowledge in other estimation tasks, such as the self-localization of a robot.", "id": 2142, "question": "Which method do they use for word segmentation?", "title": "Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences" }, { "answers": [ "" ], "context": "We propose a nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates a nonparametric morphological analyzer for the lattice BIBREF22 , i.e., latticelm, a spatial clustering method, and Monte Carlo localization (MCL) BIBREF23 .", "id": 2143, "question": "Does their model start with any prior knowledge of words?", "title": "Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences" }, { "answers": [ "" ], "context": "Misinformation and disinformation are two of the most pertinent and difficult challenges of the information age, exacerbated by the popularity of social media. In an effort to counter this, a significant amount of manual labour has been invested in fact checking claims, often collecting the results of these manual checks on fact checking portals or websites such as politifact.com or snopes.com. In a parallel development, researchers have recently started to view fact checking as a task that can be partially automated, using machine learning and NLP to automatically predict the veracity of claims. However, existing efforts either use small datasets consisting of naturally occurring claims (e.g. BIBREF0 , BIBREF1 ), or datasets consisting of artificially constructed claims such as FEVER BIBREF2 . While the latter offer valuable contributions to further automatic claim verification work, they cannot replace real-world datasets.", "id": 2144, "question": "What were the baselines?", "title": "MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims" }, { "answers": [ "besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish date and claim date" ], "context": "Over the past few years, a variety of mostly small datasets related to fact checking have been released. An overview of core datasets is given in Table TABREF4 , and a version of this table extended with the number of documents, source of annotations and SoA performances can be found in the appendix (Table TABREF1 ). The datasets can be grouped into four categories (I–IV). Category I contains datasets aimed at testing how well the veracity of a claim can be predicted using the claim alone, without context or evidence documents. Category II contains datasets bundled with documents related to each claim – either topically related to provide context, or serving as evidence. Those documents are, however, not annotated. Category III is for predicting veracity; they encourage retrieving evidence documents as part of their task description, but do not distribute them. Finally, category IV comprises datasets annotated for both veracity and stance. Thus, every document is annotated with a label indicating whether the document supports or denies the claim, or is unrelated to it. 
Additional labels can then be added to the datasets to better predict veracity, for instance by jointly training stance and veracity prediction models.", "id": 2145, "question": "What metadata is included?", "title": "MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims" }, { "answers": [ "" ], "context": "Fact checking methods partly depend on the type of dataset used. Methods only taking into account claims typically encode those with CNNs or RNNs BIBREF3 , BIBREF4 , and potentially encode metadata BIBREF3 in a similar way. Methods for small datasets often use hand-crafted features that are a mix of bag of word and other lexical features, e.g. LIWC, and then use those as input to a SVM or MLP BIBREF0 , BIBREF4 , BIBREF13 . Some use additional Twitter-specific features BIBREF26 . More involved methods taking into account evidence documents, often trained on larger datasets, consist of evidence identification and ranking following a neural model that measures the compatibility between claim and evidence BIBREF2 , BIBREF27 , BIBREF28 .", "id": 2146, "question": "How many expert journalists were there?", "title": "MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims" }, { "answers": [ "monolingual" ], "context": "Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings.", "id": 2147, "question": "Do the images have multilingual annotations or monolingual ones?", "title": "Learning Multilingual Word Embeddings Using Image-Text Data" }, { "answers": [ "" ], "context": "Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons BIBREF13 , BIBREF4 , BIBREF6 , BIBREF14 or parallel/aligned corpora BIBREF15 , BIBREF4 , BIBREF16 , BIBREF17 . These works are also largely bilingual due to either limitations of methods or the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in their applications to just one language pair.", "id": 2148, "question": "Could you learn such embedding simply from the image annotations and without using visual information?", "title": "Learning Multilingual Word Embeddings Using Image-Text Data" }, { "answers": [ "performance is significantly degraded without pixel data" ], "context": "We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. 
Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages.", "id": 2149, "question": "How important is the visual grounding in the learning of the multilingual representations?", "title": "Learning Multilingual Word Embeddings Using Image-Text Data" }, { "answers": [ "Comparing BLEU score of model with and without attention" ], "context": "The ability to determine entailment or contradiction between natural language texts is essential for improving performance in a wide range of natural language processing tasks. Recognizing Textual Entailment (RTE) is a task primarily designed to determine whether two natural language sentences are independent, contradictory or in an entailment relationship where the second sentence (the hypothesis) can be inferred from the first (the premise). Although systems that perform well in RTE could potentially be used to improve question answering, information extraction, text summarization and machine translation BIBREF0 , sentence pairs are actually available in only a few of these downstream NLP tasks. Usually, only a single source sentence (e.g. a question that needs to be answered or a source sentence that we want to translate) is present and models need to come up with their own hypotheses and commonsense knowledge inferences.", "id": 2150, "question": "How is the generative model evaluated?", "title": "Generating Natural Language Inference Chains" }, { "answers": [ "" ], "context": "Social media plays an important role in health informatics, and Twitter has been one of the most influential social media channels for mining population-level health insights BIBREF0 , BIBREF1 , BIBREF2 . These insights range from forecasting of influenza epidemics BIBREF3 to predicting adverse drug reactions BIBREF4 . A notable challenge due to the short length of Twitter messages is the categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering.", "id": 2151, "question": "How do they evaluate their method?", "title": "Deep Representation Learning for Clustering of Health Tweets" }, { "answers": [ "The health benefits of alcohol consumption are more limited than previously thought, researchers say" ], "context": "Devising efficient representations of tweets, i.e., features, for performing clustering has been studied extensively. The most frequently used features for representing the text in tweets as numerical vectors are bag-of-words (BoWs) and term frequency-inverse document frequency (tf-idf) features BIBREF17 , BIBREF9 , BIBREF10 , BIBREF18 , BIBREF19 . 
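A toy sketch of this standard tweet-clustering pipeline, sparse tf-idf features fed to k-means, follows; the tweets and cluster count are invented, and this is not the paper's code.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [                                   # invented stand-ins for health tweets
    "flu shots available at the clinic today",
    "get your flu vaccine before winter",
    "new study links alcohol to cancer risk",
    "alcohol consumption tied to higher cancer rates",
]
X = TfidfVectorizer().fit_transform(tweets)  # sparse document-term matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                                # flu tweets vs. alcohol tweets
```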
Both of these feature extraction methods are based on word occurrence counts and eventually result in a sparse (most elements being zero) document-term matrix. Proposed algorithms for clustering tweets into topics include variants of hierarchical, density-based and centroid-based clustering methods, with the k-means algorithm being the most frequently used one BIBREF9 , BIBREF19 , BIBREF20 .", "id": 2152, "question": "What is an example of a health-related tweet?", "title": "Deep Representation Learning for Clustering of Health Tweets" }, { "answers": [ "" ], "context": "Since Satoshi Nakamoto published the article \"Bitcoin: A Peer-to-Peer Electronic Cash System\" in 2008 BIBREF0 , and after the official launch of Bitcoin in 2009, technologies such as blockchain and cryptocurrency have attracted attention from academia and industry. At present, these technologies have been applied to many fields such as medical science, economics, and the Internet of Things BIBREF1 . Since the launch of Ethereum (Next Generation Encryption Platform) BIBREF2 , with its smart contract function proposed by Vitalik Buterin in 2015, much attention has been paid to its dedicated cryptocurrency Ether, smart contracts, blockchain and its decentralized Ethereum Virtual Machine (EVM). The main reason is that its design provides developers with the ability to develop Decentralized apps (Dapps), and thus reach a wider range of applications. A new application paradigm opens the door to many possibilities and opportunities.", "id": 2153, "question": "Was the introduced LSTM+CNN model trained on annotated data in a supervised fashion?", "title": "SOC: hunting the underground inside story of the ethereum Social-network Opinion and Comment" }, { "answers": [ "not researched as much as English" ], "context": "Offensive language in user-generated content on online platforms, and its implications, have been gaining attention over the last couple of years. This interest is sparked by the fact that many of the online social media platforms have come under scrutiny over how this type of content should be detected and dealt with. It is, however, far from trivial to deal with this type of language directly due to the gigantic amount of user-generated content created every day. For this reason, automatic methods are required, using natural language processing (NLP) and machine learning techniques.", "id": 2154, "question": "What is the challenge for languages other than English?", "title": "Offensive Language and Hate Speech Detection for Danish" }, { "answers": [ "3" ], "context": "Offensive language varies greatly, ranging from simple profanity to much more severe types of language. One of the more troublesome types of language is hate speech, and the presence of hate speech on social media platforms has been shown to correlate with hate crimes in real-life settings BIBREF1 . It can be quite hard to distinguish between generally offensive language and hate speech, as few universal definitions exist BIBREF2 . There does, however, seem to be a general consensus that hate speech can be defined as language that targets a group with the intent to be harmful or to cause social chaos. This targeting is usually done on the basis of characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . In section \"Background\" , hate speech is defined in more detail. Offensive language, on the other hand, is a more general category containing any type of profanity or insult. 
Hate speech can, therefore, be classified as a subset of offensive language. BIBREF0 propose guidelines for classifying offensive language as well as the type and the target of offensive language. These guidelines capture the characteristics of generally offensive language, hate speech and other types of targeted offensive language such as cyberbullying. However, although offensive language detection is a burgeoning field, no dataset yet exists for Danish BIBREF4 , despite the phenomenon being present BIBREF5 .", "id": 2155, "question": "How many categories of offensive language were there?", "title": "Offensive Language and Hate Speech Detection for Danish" }, { "answers": [ "" ], "context": "In this section we give a comprehensive overview of the structure of the task and describe the dataset provided in BIBREF0 . Our work adopts this framing of the offensive language phenomenon.", "id": 2156, "question": "How large was the dataset of Danish comments?", "title": "Offensive Language and Hate Speech Detection for Danish" }, { "answers": [ "" ], "context": "Offensive content is broken into three sub-tasks to be able to effectively identify both the type and the target of the offensive posts. These three sub-tasks are chosen with the objective of being able to capture different types of offensive language, such as hate speech and cyberbullying (section \"Background\" ).", "id": 2157, "question": "Who were the annotators?", "title": "Offensive Language and Hate Speech Detection for Danish" }, { "answers": [ "" ], "context": "In 2019, Freedom in the World, a yearly survey produced by Freedom House that measures the degree of civil liberties and political rights in every nation, recorded the 13th consecutive year of decline in global freedom. This decline spans long-standing democracies such as the USA as well as authoritarian regimes such as China and Russia. “Democracy is in retreat. The offensive against freedom of expression is being supercharged by a new and more effective form of digital authoritarianism.\" According to the report, China is now exporting its model of comprehensive internet censorship and surveillance around the world, offering training, seminars, and even study trips as well as advanced equipment.", "id": 2158, "question": "Is it known whether Sina Weibo posts are censored by humans or some automatic classifier?", "title": "Linguistic Fingerprints of Internet Censorship: the Case of SinaWeibo" }, { "answers": [ "Matching features from matching sentences from various perspectives." ], "context": "Natural Language Inference (NLI) is a crucial subtopic in Natural Language Processing (NLP). Most studies treat NLI as a classification problem, aiming at recognizing the relation types of hypothesis-premise sentence pairs, usually including “Entailment”, “Contradiction” and “Neutral”.", "id": 2159, "question": "Which matching features do they employ?", "title": "Multi-turn Inference Matching Network for Natural Language Inference" }, { "answers": [ "" ], "context": "Like other areas of linguistics, the study of new words has benefited from the development of natural language processing techniques over the last twenty years. In this research area, the digital revolution has significantly changed the way the data necessary for empirical research are collected BIBREF0 , BIBREF1 : the traditional practice of collecting new words by reading texts (newspapers, literature, scientific and technical texts, etc.) has been supplemented by automation of the collection process. 
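The simplest form of such automation is an exclusion-dictionary filter: flag every token not found in a reference lexicon as a neologism candidate for a human to review. A toy sketch follows (the lexicon and example sentence are invented, and real collectors add morphological and frequency filters on top of this).

```python
import re

def candidate_neologisms(text, lexicon):
    # The classic "exclusion dictionary" heuristic: any token missing from a
    # reference lexicon is a candidate new word for a human to review.
    tokens = re.findall(r"\w+", text.lower())
    return sorted({t for t in tokens if t not in lexicon})

lexicon = {"the", "minister", "defended", "her", "policy", "on", "television"}
print(candidate_neologisms("The minister defended her webinarisation policy on television.", lexicon))
# -> ['webinarisation']
```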
In fact, today there are many computer tools capable of searching through large amounts of text (often newspapers) in order to automatically detect newly created words in various languages (German, Catalan, Spanish, English, French, etc.).", "id": 2160, "question": "How often are the newspaper websites crawled daily?", "title": "The Logoscope: a Semi-Automatic Tool for Detecting and Documenting French New Words" }, { "answers": [ "" ], "context": "Recurrent neural network (RNN) based techniques such as language models are the most popular approaches for text generation. These RNN-based text generators rely on maximum likelihood estimation (MLE) solutions such as teacher forcing BIBREF0 (i.e. the model is trained to predict the next item given all previous observations); however, it is well known in the literature that MLE is a simplistic objective for this complex NLP task BIBREF1 . MLE-based methods suffer from exposure bias BIBREF2 , which means that at training time the model is exposed to gold data only, but at test time it observes its own predictions.", "id": 2161, "question": "How much better in terms of JSD measure did their model perform?", "title": "TextKD-GAN: Text Generation using KnowledgeDistillation and Generative Adversarial Networks" }, { "answers": [ "" ], "context": "Generative adversarial networks include two separate deep networks: a generator and a discriminator. The generator takes in a random variable INLINEFORM0 following a distribution INLINEFORM1 and attempts to map it to the data distribution INLINEFORM2 . The output distribution of the generator is expected to converge to the data distribution during training. On the other hand, the discriminator is expected to discern real samples from generated ones by outputting zeros and ones, respectively. During training, the generator and discriminator generate samples and classify them, respectively, adversarially affecting each other's performance. In this regard, an adversarial loss function is employed for training BIBREF16 : DISPLAYFORM0 ", "id": 2162, "question": "What does the Jensen-Shannon distance measure?", "title": "TextKD-GAN: Text Generation using KnowledgeDistillation and Generative Adversarial Networks" }, { "answers": [ "" ], "context": "Modern media generate a large amount of content at an ever increasing rate. Keeping an unbiased view of what media report requires understanding the political bias of texts. In many cases it is obvious which political bias an author has. In other cases some expertise is required to judge the political bias of a text. When dealing with large amounts of text, however, there are simply not enough experts to examine all possible sources and publications. Assistive technology can help in this context to obtain a more unbiased sample of information.", "id": 2163, "question": "Which countries and languages do the political speeches and manifestos come from?", "title": "Automating Political Bias Prediction" }, { "answers": [ "" ], "context": "In recent years, automated content analyses of political texts have been conducted on a variety of text data sources (parliament data, blogs, tweets, news articles, party manifestos) with a variety of methods, including sentiment analysis, stylistic analyses, standard bag-of-word (BOW) text feature classifiers and more advanced natural language processing tools. 
While a complete overview is beyond the scope of this work, the following paragraphs list similarities and differences between this study and previous work. For a more complete overview we refer the reader to BIBREF2 , BIBREF3 .", "id": 2164, "question": "Do changes in policies of the political actors account for all of the mistakes the model made?", "title": "Automating Political Bias Prediction" }, { "answers": [ "" ], "context": "All experiments were run on publicly available data sets of German political texts and standard libraries for processing the text. The following sections describe the details of data acquisition and feature extraction.", "id": 2165, "question": "What model are the text features used in to provide predictions?", "title": "Automating Political Bias Prediction" }, { "answers": [ "Their average improvement in Character Error Rate over the best MHA model was 0.33 percentage points." ], "context": "Automatic speech recognition (ASR) is the task of converting a continuous speech signal into a sequence of discrete characters, and it is a key technology for realizing interaction between humans and machines. ASR has great potential for various applications such as voice search and voice input, making our lives richer. Typical ASR systems BIBREF0 consist of many modules such as an acoustic model, a lexicon model, and a language model. Factorizing the ASR system into these modules makes it possible to deal with each module as a separate problem. Over the past decades, this factorization has been the basis of ASR systems; however, it makes the system much more complex.", "id": 2166, "question": "By how much does their method outperform the multi-head attention model?", "title": "Multi-Head Decoder for End-to-End Speech Recognition" }, { "answers": [ "449050" ], "context": "An overview of the attention-based network architecture is shown in Fig. FIGREF1 .", "id": 2167, "question": "How large is the corpus they use?", "title": "Multi-Head Decoder for End-to-End Speech Recognition" }, { "answers": [ "" ], "context": "An overview of our proposed multi-head decoder (MHD) architecture is shown in Fig. FIGREF19 . In the MHD architecture, multiple attentions are calculated in the same manner as in the conventional multi-head attention (MHA) BIBREF12 . We first describe the conventional MHA, and then extend it to our proposed multi-head decoder (MHD).", "id": 2168, "question": "Does each attention head in the decoder calculate the same output?", "title": "Multi-Head Decoder for End-to-End Speech Recognition" }, { "answers": [ "" ], "context": "Hierarchical relationships play a central role in knowledge representation and reasoning. Hypernym detection, i.e., the modeling of word-level hierarchies, has long been an important task in natural language processing. Starting with BIBREF0 , pattern-based methods have been one of the most influential approaches to this problem. Their key idea is to exploit certain lexico-syntactic patterns to detect is-a relations in text. For instance, patterns like “ INLINEFORM0 such as INLINEFORM1 ”, or “ INLINEFORM2 and other INLINEFORM3 ” often indicate hypernymy relations of the form INLINEFORM4 is-a INLINEFORM5 . Such patterns may be predefined, or they may be learned automatically BIBREF1 , BIBREF2 . 
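A toy sketch of such pattern-based extraction, together with the count-based extraction probability discussed in the paragraphs that follow, is given below; the regexes are deliberately naive and the corpus invented (real systems match over parsed text and handle multi-word terms).

```python
import re
from collections import Counter

PATTERNS = [
    (re.compile(r"(\w+) such as (\w+)"), "hyper_first"),   # "X such as Y"
    (re.compile(r"(\w+) and other (\w+)"), "hypo_first"),  # "Y and other X"
]

def extract_pairs(corpus):
    pairs = Counter()                          # (hyponym, hypernym) -> count
    for sent in corpus:
        s = sent.lower()
        for pat, order in PATTERNS:
            if m := pat.search(s):
                x, y = m.group(1), m.group(2)
                pairs[(y, x) if order == "hyper_first" else (x, y)] += 1
    return pairs

pairs = extract_pairs(["Colors such as red are popular.", "Red and other colors faded."])
total = sum(pairs.values())
for (hypo, hyper), w in pairs.items():
    print(f"p({hypo} is-a {hyper}) = {w / total:.2f}")     # count-based extraction probability
```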
However, a well-known problem of Hearst-like patterns is their extreme sparsity: words must co-occur in exactly the right configuration, or else no relation can be detected.", "id": 2169, "question": "Which distributional methods did they consider?", "title": "Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora" }, { "answers": [ "" ], "context": "In the following, we discuss pattern-based and distributional methods to detect hypernymy relations. We explicitly consider only relatively simple pattern-based approaches that allow us to directly compare their performance to DIH-based methods.", "id": 2170, "question": "Which benchmark datasets are used?", "title": "Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora" }, { "answers": [ "" ], "context": "First, let INLINEFORM0 denote the set of hypernymy relations that have been extracted via Hearst patterns from a text corpus INLINEFORM1 . Furthermore, let INLINEFORM2 denote the count of how often INLINEFORM3 has been extracted and let INLINEFORM4 denote the total number of extractions. In the first, most direct application of Hearst patterns, we then simply use the counts INLINEFORM5 or, equivalently, the extraction probability DISPLAYFORM0 ", "id": 2171, "question": "What hypernymy tasks do they study?", "title": "Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora" }, { "answers": [ "" ], "context": "Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.", "id": 2172, "question": "Do they report results only on English data?", "title": "Deeper Task-Specificity Improves Joint Entity and Relation Extraction" }, { "answers": [ "" ], "context": "We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.", "id": 2173, "question": "What were the variables in the ablation study?", "title": "Deeper Task-Specificity Improves Joint Entity and Relation Extraction" }, { "answers": [ "1" ], "context": "The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model, as sketched below. 
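A hedged PyTorch sketch of this hard-parameter-sharing layout follows: shared BiLSTM layers feed two task-specific BiLSTM stacks, with the layer counts exposed as hyperparameters. The dimensions and tag counts are invented, and this is not the paper's model.

```python
import torch
import torch.nn as nn

class SharedPrivateTagger(nn.Module):
    # Hard parameter sharing: tokens pass through shared BiLSTM layers, then
    # through separate task-specific BiLSTM stacks and per-token scoring layers.
    def __init__(self, d_in, d_h, n_shared, n_task, n_ner_tags, n_rel_types):
        super().__init__()
        self.shared = nn.LSTM(d_in, d_h, n_shared, bidirectional=True, batch_first=True)
        self.ner = nn.LSTM(2 * d_h, d_h, n_task, bidirectional=True, batch_first=True)
        self.rel = nn.LSTM(2 * d_h, d_h, n_task, bidirectional=True, batch_first=True)
        self.ner_out = nn.Linear(2 * d_h, n_ner_tags)
        self.rel_out = nn.Linear(2 * d_h, n_rel_types)

    def forward(self, x):                      # x: (batch, seq, d_in)
        h, _ = self.shared(x)
        ner_h, _ = self.ner(h)
        rel_h, _ = self.rel(h)
        return self.ner_out(ner_h), self.rel_out(rel_h)

model = SharedPrivateTagger(d_in=100, d_h=64, n_shared=1, n_task=2, n_ner_tags=9, n_rel_types=5)
ner_scores, rel_scores = model(torch.randn(2, 12, 100))
print(ner_scores.shape, rel_scores.shape)      # per-token scores for each task
```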
Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers.", "id": 2174, "question": "How many shared layers are in the system?", "title": "Deeper Task-Specificity Improves Joint Entity and Relation Extraction" }, { "answers": [ "2 for the ADE dataset and 3 for the CoNLL04 dataset" ], "context": "We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token in the input text $t_i$, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\\mathbf {t}^{glove}_{i}$, a character-level word embedding $\\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15 and a one-hot encoded casing vector $\\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\\mathbf {v}_i$ (where $\\circ $ denotes concatenation):", "id": 2175, "question": "How many additional task-specific layers are introduced?", "title": "Deeper Task-Specificity Improves Joint Entity and Relation Extraction" }, { "answers": [ "" ], "context": "With growing diversity in personal food preference and regional cuisine style, personalized information systems that can transform a recipe into any selected regional cuisine style that a user might prefer would help food companies and professional chefs create new recipes.", "id": 2176, "question": "What is barycentric Newton diagram?", "title": "A neural network system for transformation of regional cuisine style" }, { "answers": [ "" ], "context": "Recent work in the word embeddings literature has shown that embeddings encode gender and racial biases, BIBREF0, BIBREF1, BIBREF2. These biases can have harmful effects in downstream tasks including coreference resolution, BIBREF3 and machine translation, BIBREF4, leading to the development of a range of methods to try to mitigate such biases, BIBREF0, BIBREF5. In an adjacent literature, learning embeddings of knowledge graph (KG) entities and relations is becoming an increasingly common first step in utilizing KGs for a range of tasks, from missing link prediction, BIBREF6, BIBREF7, to more recent methods integrating learned embeddings into language models, BIBREF8, BIBREF9, BIBREF10.", "id": 2177, "question": "Do they propose any solution to debias the embeddings?", "title": "Measuring Social Bias in Knowledge Graph Embeddings" }, { "answers": [ "" ], "context": "Graph embeddings are a vector representation of dimension $d$ of all entities and relations in a KG. To learn these representations, we define a score function $g(.)$ which takes as input the embeddings of a fact in triple form and outputs a score, denoting how likely this triple is to be correct.", "id": 2178, "question": "How are these biases found?", "title": "Measuring Social Bias in Knowledge Graph Embeddings" }, { "answers": [ "1, 4, 8, 16, 32, 64" ], "context": "Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. 
The development of such systems falls into the Conversational AI domain, the science of developing agents that are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers.", "id": 2179, "question": "How many layers of self-attention does the model have?", "title": "Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems" }, { "answers": [ "" ], "context": "End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.", "id": 2180, "question": "Is human evaluation performed?", "title": "Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems" }, { "answers": [ "" ], "context": "Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attention-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in the case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15 and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17, which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words its context size, and thus model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling.", "id": 2181, "question": "What are the three datasets used?", "title": "Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems" }, { "answers": [ "" ], "context": "When a group of people communicate in a common channel there are often multiple conversations occurring concurrently. Often there is no explicit structure identifying separate conversations or their internal structure, such as in Internet Relay Chat (IRC), Google Hangouts, and comment sections on websites. 
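For contrast with the recurrent and convolutional families just discussed, here is a minimal NumPy sketch of single-head, unmasked scaled dot-product self-attention, the building block of self-attention-based sequence models; the shapes are toy assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: every position attends to every
    # other position in one step, so no recurrence or stacked convolutions
    # are needed to connect distant tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(3)
T, d = 6, 16                                   # toy sequence length and model dim
X = rng.normal(size=(T, d))
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                               # (6, 16): one updated vector per token
```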
Even when structure is provided it often has limited depth, such as threads in Slack, which provide one layer of branching. In all of these cases, conversations are entangled: all messages appear together, with no indication of separate conversations. Automatic disentanglement could be used to provide more interpretable results when searching over chat logs, and to help users understand what is happening when they join a channel. Over a decade of research has considered conversation disentanglement BIBREF0 , but using datasets that are either small BIBREF1 or not released BIBREF2 .", "id": 2182, "question": "Did they experiment with the corpus?", "title": "A Large-Scale Corpus for Conversation Disentanglement" }, { "answers": [ "" ], "context": "Neural networks for language processing have advanced rapidly in recent years. A key breakthrough was the introduction of transformer architectures BIBREF0 . One recent system based on this idea, BERT BIBREF1 , has proven to be extremely flexible: a single pretrained model can be fine-tuned to achieve state-of-the-art performance on a wide variety of NLP applications. This suggests the model is extracting a set of generally useful features from raw text. It is natural to ask, which features are extracted? And how is this information represented internally?", "id": 2183, "question": "How were the feature representations evaluated?", "title": "Visualizing and Measuring the Geometry of BERT" }, { "answers": [ "" ], "context": "Our object of study is the BERT model introduced in BIBREF1 . To set context and terminology, we briefly describe the model's architecture. The input to BERT is based on a sequence of tokens (words or pieces of words). The output is a sequence of vectors, one for each input token. We will often refer to these vectors as context embeddings because they include information about a token's context.", "id": 2184, "question": "What linguistic features were probed for?", "title": "Visualizing and Measuring the Geometry of BERT" }, { "answers": [ "" ], "context": "Reference to objects is one of the most basic and prevalent uses of language. In order to refer, speakers must choose from among a wealth of referring expressions they have at their disposal. How does a speaker choose whether to refer to an object as the animal, the dog, the dalmatian, or the big mostly white dalmatian? The context within which the object occurs (other non-dogs, other dogs, other dalmatians) plays a large part in determining which features the speaker chooses to include in their utterance – speakers aim to be sufficiently informative to establish unique reference to the intended object. However, speakers' utterances often exhibit what has been claimed to be overinformativeness: referring expressions are often more specific than necessary for establishing unique reference, and they are more specific in systematic ways. For instance, speakers are likely to produce referring expressions like the small blue pin instead of the small pin in contexts like Figure 1 , even though the color modifier provides no additional information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Similar use of redundant size modifiers, in contrast, is rare. 
Providing a unified theory for speakers' systematic patterns of overinformativeness has so far proven elusive.", "id": 2185, "question": "Does the paper describe experiments with real humans?", "title": "When redundancy is rational: A Bayesian approach to 'overinformative' referring expressions" }, { "answers": [ "" ], "context": "Social media has become a popular medium for individuals to express opinions and concerns on issues impacting their lives BIBREF0 , BIBREF1 , BIBREF2 . In countries without adequate internet infrastructure, like Uganda, communities often use phone-in talk shows on local radio stations for the same purpose. In an ongoing project by the United Nations (UN), radio-browsing systems have been developed to monitor such radio shows BIBREF3 , BIBREF4 . These systems are actively and successfully supporting UN relief and developmental programmes. The development of such systems, however, remains dependent on the availability of transcribed speech in the target languages. This dependence has proved to be a key impediment to the rapid deployment of radio-browsing systems in new languages, since skilled annotators proficient in the target languages are hard to find, especially in crisis conditions.", "id": 2186, "question": "What are bottleneck features?", "title": "ASR-free CNN-DTW keyword spotting using multilingual bottleneck features for almost zero-resource languages" }, { "answers": [ "" ], "context": "The first radio browsing systems implemented as part of the UN's humanitarian monitoring programmes rely on ASR systems BIBREF3 . Human analysts filter speech segments identified by the system and add these to a searchable database to support decision making. To develop the ASR system, at least a small amount of annotated speech in the target language is required BIBREF4 . However, the collection of even a small fully transcribed corpus has proven difficult or impossible in some settings. In recent work, we have therefore proposed an ASR-free keyword spotting system based on CNNs BIBREF18 . CNN classifiers typically require a large number of training examples, which are not available in our setting. Instead, we use a small set of recorded isolated keywords, which are then matched against a large collection of untranscribed speech drawn from the target domain using a DTW-based approach. The resulting DTW scores are then used as targets for a CNN. The key is that it is not necessary to know whether or not the keywords do in fact occur in this untranscribed corpus; the CNN is trained simply to emulate the behaviour of the DTW. Since the CNN does not perform any alignment, it is computationally much more efficient than DTW. The resulting CNN-DTW model can therefore be used to efficiently detect the presence of keywords in new input speech. Figure FIGREF2 shows the structure of this CNN-DTW radio browsing system.", "id": 2187, "question": "What languages are considered?", "title": "ASR-free CNN-DTW keyword spotting using multilingual bottleneck features for almost zero-resource languages" }, { "answers": [ "" ], "context": "Code-switching (CS) speech is defined as the alternation of languages in an utterance; it is a pervasive communicative phenomenon in multilingual communities. 
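The DTW component of the CNN-DTW keyword spotter described above can be illustrated with the classic dynamic-programming recurrence below (NumPy, with toy features standing in for speech frames; this is a sketch, not the system's implementation).

```python
import numpy as np

def dtw_cost(a, b):
    # Classic dynamic-programming alignment between two feature sequences,
    # e.g. a spoken keyword template and a stretch of search audio.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)              # length-normalized alignment cost

rng = np.random.default_rng(4)
template = rng.normal(size=(20, 13))      # toy stand-in for MFCC frames of a keyword
near_match = template + 0.05 * rng.normal(size=template.shape)
unrelated = rng.normal(size=(25, 13))
print(dtw_cost(template, near_match))     # small cost: the keyword is "present"
print(dtw_cost(template, unrelated))      # larger cost: no match
```

Costs like these, computed between each keyword template and each utterance, are exactly the kind of targets a CNN can then be trained to predict without performing any alignment itself.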
Therefore, developing a CS speech recognition (CSSR) system is of great interest.", "id": 2188, "question": "Do they compare the speed performance of their model to the ones using the LID model?", "title": "Rnn-transducer with language bias for end-to-end Mandarin-English code-switching speech recognition" }, { "answers": [ "" ], "context": "Although CTC has been applied successfully in the context of speech recognition, it assumes that outputs at each step are independent of the previous predictions BIBREF6. RNN-T is an improved model based on CTC; it adds a prediction network that is explicitly conditioned on the previous outputs BIBREF10, as illustrated in Fig. 2(a).", "id": 2189, "question": "How do they obtain language identities?", "title": "Rnn-transducer with language bias for end-to-end Mandarin-English code-switching speech recognition" }, { "answers": [ "" ], "context": "Knowledge bases (KB) are an essential part of many computational systems with applications in search, structured data management, recommendations, question answering, and information retrieval. However, KBs often suffer from incompleteness, noise in their entries, and inefficient inference under uncertainty. To address these issues, learning relational knowledge representations has been a focus of active research BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . These approaches represent relational triples, which consist of a subject entity, a relation, and an object entity, by learning fixed, low-dimensional representations for each entity and relation from observations, encoding the uncertainty and inferring missing facts accurately and efficiently. The subject and the object entities come from a fixed, enumerable set of entities that appear in the knowledge base. Knowledge bases in the real world, however, contain a wide variety of data types beyond these direct links. Apart from relations to a fixed set of entities, KBs often include not only numerical attributes (such as ages, dates, financial, and geoinformation), but also textual attributes (such as names, descriptions, and titles/designations) and images (profile photos, flags, posters, etc.). These different types of data can play a crucial role as extra pieces of evidence for knowledge base completion. For example, the textual descriptions and images might provide evidence for a person's age, profession, and designation. In the multimodal KB shown in Figure 1, for example, the image can be helpful in predicting Carles Puyol's occupation, while the description contains his nationality. Incorporating this information into existing approaches as entities, unfortunately, is challenging, as these approaches assign each entity a distinct vector and predict missing links (or attributes) by enumerating over the possible values, both of which are only possible if the entities come from a small, enumerable set. There is thus a crucial need for relational modeling that goes beyond just the link-based view of KB completion, by not only utilizing multimodal information for better link prediction between existing entities, but also being able to generate missing multimodal values.", "id": 2190, "question": "What other multimodal knowledge base embedding methods are there?", "title": "Embedding Multimodal Relational Data for Knowledge Base Completion" }, { "answers": [ "" ], "context": "Machine Reading Comprehension (MRC) has gained growing interest in the research community BIBREF0 , BIBREF1 . 
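The triple score function $g(.)$ over (subject, relation, object) embeddings mentioned above can take many forms; one common, simple choice (TransE-style, used here purely as an illustrative assumption, not as the quoted papers' model) treats relations as translations in the embedding space.

```python
import numpy as np

def transe_score(e_s, e_r, e_o):
    # TransE-style g(s, r, o): relations act as translations, so a plausible
    # triple has a small ||e_s + e_r - e_o|| and hence a high (less negative) score.
    return -np.linalg.norm(e_s + e_r - e_o)

rng = np.random.default_rng(5)
ent = {name: rng.normal(size=50) for name in ["puyol", "spain", "france"]}
rel = {"nationality": rng.normal(size=50)}
ent["spain"] = ent["puyol"] + rel["nationality"]   # plant one true fact in the toy KB
print(transe_score(ent["puyol"], rel["nationality"], ent["spain"]))   # ~0.0
print(transe_score(ent["puyol"], rel["nationality"], ent["france"]))  # strongly negative
```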
In an MRC task, the machine reads a text passage and a question, and generates (or selects) an answer based on the passage. This requires the machine to possess strong comprehension, inference and reasoning capabilities. Over the past few years, there has been much progress in building end-to-end neural network models BIBREF2 for MRC. However, most public MRC datasets (e.g., SQuAD, MS MARCO, TriviaQA) are typically small (less than 100K) compared to the model size (such as SAN BIBREF3 , BIBREF4 with around 10M parameters). To prevent over-fitting, there have recently been some studies on using pre-trained word embeddings BIBREF5 and contextual embeddings in MRC model training, as well as back-translation approaches BIBREF1 for data augmentation.", "id": 2191, "question": "What is the data selection paper in machine translation?", "title": "Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "Speaker Recognition is an essential task with applications in biometric authentication, identification, and security, among others BIBREF0 . The field is divided into two main subtasks: Speaker Identification and Speaker Verification. In Speaker Identification, given an audio sample, the model tries to identify to which one in a list of predetermined speakers the locution belongs. In Speaker Verification, the model verifies whether a sampled audio belongs to a given speaker or not. Most techniques in the literature for tackling this problem are based on INLINEFORM0 -vector methods BIBREF1 , which extract features from the audio samples and classify the features using methods such as PLDA BIBREF2 , heavy-tailed PLDA BIBREF3 , and Gaussian PLDA BIBREF4 .", "id": 2192, "question": "Do they compare computational time of AM-softmax versus Softmax?", "title": "Additive Margin SincNet for Speaker Recognition" }, { "answers": [ "" ], "context": "For some time, INLINEFORM0 -vectors BIBREF1 have been used as the state-of-the-art feature extraction method for speaker recognition tasks. Usually, the extracted features are classified using PLDA BIBREF2 or other similar techniques, such as heavy-tailed PLDA BIBREF3 and Gauss-PLDA BIBREF4 . The intuition behind these traditional methods and how they work can be better seen in BIBREF18 . Although they have been giving us some reasonable results, it is clear that there is still room for improvement BIBREF18 .", "id": 2193, "question": "Do they visualize the difference between AM-Softmax and regular softmax?", "title": "Additive Margin SincNet for Speaker Recognition" }, { "answers": [ "" ], "context": "With the development of digital media technology and the popularity of the Mobile Internet, online visual content has increased rapidly in the last couple of years. Subsequently, visual content analysis for retrieval BIBREF0 , BIBREF1 and understanding has become a fundamental problem in the area of multimedia research, which has motivated researchers worldwide to develop advanced techniques. Most previous works, however, have focused on classification tasks, such as annotating an image BIBREF2 , BIBREF3 or video BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 with given fixed label sets. With the proposal of some pioneering methods BIBREF8 , BIBREF9 tackling the challenge of describing images with natural language, visual content understanding has attracted more and more attention. 
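The additive-margin (AM) softmax referenced in the questions above modifies the usual softmax by working on cosine similarities and subtracting a margin from the target class before scaling. Below is a NumPy sketch of the loss for a single example; the scale s and margin m are illustrative values, not the paper's settings.

```python
import numpy as np

def am_softmax_loss(x, W, y, s=30.0, m=0.35):
    # Additive-margin softmax: cosine logits with a margin m subtracted from
    # the target class, then scaled by s before cross-entropy.
    x = x / np.linalg.norm(x)
    W = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos = x @ W                        # cosine similarity to each class weight
    cos[y] -= m                        # enforce the additive margin on the target
    logits = s * cos
    logp = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
    return -logp[y]

rng = np.random.default_rng(6)
x, W = rng.normal(size=128), rng.normal(size=(128, 10))   # embedding and class weights
print(am_softmax_loss(x, W, y=3))
```

The margin forces the target cosine to beat the others by a fixed gap, which tightens each speaker's cluster of embeddings and widens the gaps between speakers.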
State-of-the-art techniques for image captioning have been surpassed by new advanced approaches in succession BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent research BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 has focused on describing videos with more comprehensive sentences instead of simple keywords. Different from images, video is sequential data with temporal structure, which may pose a significant challenge to video captioning. Most of the existing works in video description employed max or mean pooling across video frames to obtain a video-level representation, which failed to capture temporal knowledge. To address this problem, Yao et al. proposed to use 3-D Convolutional Neural Networks to explore local temporal information in video clips, where the most relevant temporal fragments were automatically chosen for generating natural language descriptions with an attention mechanism BIBREF17 . In BIBREF19 , Venugopalan et al. implemented a Long-Short Term Memory (LSTM) network, a variant of Recurrent Neural Networks (RNNs), to model the global temporal structure in the whole video snippet. However, these methods failed to exploit bidirectional global temporal structure, which could benefit from not only previous video frames, but also information in future frames. Also, existing video captioning schemes cannot adaptively learn dense video representations and generate sparse semantic sentences.", "id": 2194, "question": "what metrics were used for evaluation?", "title": "Bidirectional Long-Short Term Memory for Video Description" }, { "answers": [ "S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al." ], "context": "In this section, we elaborate on the proposed video captioning framework, including an introduction of the overall flowchart (as illustrated in Figure FIGREF1 ), a brief review of the LSTM-based Sequential Model, the joint visual modelling with bidirectional LSTM and CNNs, as well as the sentence generation process.", "id": 2195, "question": "what are the state of the art methods?", "title": "Bidirectional Long-Short Term Memory for Video Description" }, { "answers": [ "" ], "context": "We implement the deep neural network model described in BIBREF5 . This model is a combination of Bi-directional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Network (CNN), and Conditional Random Field (CRF). In particular, this model takes as input a sequence of the concatenation of word embeddings pre-trained by the word2vec tool and character-level word features trained by a CNN. That sequence is then passed to a Bi-LSTM, and then a CRF layer takes as input the output of the Bi-LSTM to predict the best named entity output sequence. Figure FIGREF9 and Figure FIGREF10 describe the architectures of the Bi-LSTM-CRF layers and the CNN layer, respectively.", "id": 2196, "question": "What datasets do they use for the tasks?", "title": "NNVLP: A Neural Network-Based Vietnamese Language Processing Toolkit" }, { "answers": [ "" ], "context": "[block] 1 5mm *", "id": 2197, "question": "What evaluation metrics do they use?", "title": "A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation" }, { "answers": [ "" ], "context": " Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009.
BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients, as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.", "id": 2198, "question": "What performance is achieved?", "title": "A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation" }, { "answers": [ "" ], "context": "The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem, which has been extensively studied in the machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:", "id": 2199, "question": "Do they use BERT?", "title": "A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation" }, { "answers": [ "" ], "context": "Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use of any labeled data. Hence, they are considered unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regard to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents, and rule-based systems often cannot correctly handle these types of inaccuracies. Critically, these systems cannot handle context, which could render a medical text unreadable. For example, a diagnosis of “Lou Gehrig disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehrig” with randomized names, rendering the text meaningless if enough of these tokens were replaced.", "id": 2200, "question": "What is their baseline?", "title": "A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation" }, { "answers": [ "" ], "context": "The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.", "id": 2201, "question": "Which two datasets is the system tested on?", "title": "A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation" }, { "answers": [ "German, English, Italian, Chinese" ], "context": "Speech conveys human emotions most naturally.
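To make the rule-based failure mode described above concrete, here is a toy sketch of dictionary-and-regex matching; the gazetteer and patterns are hypothetical stand-ins, not those of any published de-identification system. Note how the context-blind match redacts "Lou" inside the diagnosis, which is exactly the error discussed.

    # Toy rule-based de-identification: a name gazetteer plus a date regex.
    # Both resources are invented for illustration only.
    import re

    NAME_GAZETTEER = {"john", "mary", "lou"}                    # hypothetical dictionary
    DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")   # e.g. 03/14/2009

    def redact(text):
        # Replace dates first, then any token found in the name dictionary.
        text = DATE_PATTERN.sub("[DATE]", text)
        tokens = ["[NAME]" if t.lower() in NAME_GAZETTEER else t
                  for t in text.split()]
        return " ".join(tokens)

    print(redact("John seen on 03/14/2009 for Lou Gehrig disease"))
    # -> "[NAME] seen on [DATE] for [NAME] Gehrig disease"
    # The diagnosis token "Lou" is wrongly treated as a patient name.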
In recent years there has been an increased research interest in speech emotion recognition domain. The first step in a typical SER system is extracting linguistic and acoustic features from speech signal. Some para-linguistic studies find Low-Level Descriptor (LLD) features of the speech signal to be most relevant to studying emotions in speech. These features include frequency related parameters like pitch and jitter, energy parameters like shimmer and loudness, spectral parameters like alpha ratio and other parameters that convey cepstral and dynamic information. Feature extraction is followed with a classification task to predict the emotions of the speaker.", "id": 2202, "question": "Which four languages do they experiment with?", "title": "Cross Lingual Cross Corpus Speech Emotion Recognition" }, { "answers": [ "About the same performance" ], "context": "Sequence-to-sequence models that use an attention mechanism to align the input and output sequences BIBREF0, BIBREF1 are currently the predominant paradigm in end-to-end TTS. Approaches based on the seminal Tacotron system BIBREF2 have demonstrated naturalness that rivals that of human speech for certain domains BIBREF3. Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances BIBREF4, BIBREF5, BIBREF6.", "id": 2203, "question": "Does DCA or GMM-based attention perform better in experiments?", "title": "Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis" }, { "answers": [ "" ], "context": "The system that we use in this paper is based on the original Tacotron system BIBREF2 with architectural modifications from the baseline model detailed in the appendix of BIBREF11. We use the CBHG encoder from BIBREF2 to produce a sequence of encoder outputs, $\\lbrace j\\rbrace _{j=1}^L$, from a length-$L$ input sequence of target phonemes, $\\lbrace \\mathbf {x}_j\\rbrace _{j=1}^L$. Then an attention RNN, (DISPLAY_FORM2), produces a sequence of states, $\\lbrace \\mathbf {s}_i\\rbrace _{i=1}^T$, that the attention mechanism uses to compute $\\mathbf {\\alpha }_i$, the alignment at decoder step $i$. Additional arguments to the attention function in () depend on the specific attention mechanism (e.g., whether it is content-based, location-based, or both). The context vector, $\\mathbf {c}_i$, that is fed to the decoder RNN is computed using the alignment, $\\mathbf {\\alpha }_i$, to produce a weighted average of encoder states. The decoder is fed both the context vector and the current attention RNN state, and an output function produces the decoder output, $\\mathbf {y}_i$, from the decoder RNN state, $\\mathbf {d}_i$.", "id": 2204, "question": "How they compare varioius mechanisms in terms of naturalness?", "title": "Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis" }, { "answers": [ "F1 and Weighted-F1" ], "context": "The detection of offensive language has become an important topic as the online community has grown, as so too have the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. 
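To ground the attention computation sketched in the speech-synthesis passage above, a minimal numpy example of turning alignment weights into a context vector; dot-product scoring is a stand-in for the actual (content- or location-based) scoring functions compared in that work.

    # Compute alignment alpha_i over encoder states and the resulting
    # context vector c_i as their weighted average.
    import numpy as np

    def attend(query, encoder_states):
        """query: [dim] attention-RNN state; encoder_states: [L, dim]."""
        scores = encoder_states @ query          # dot-product energies (a stand-in)
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                     # softmax -> alignment alpha_i
        context = alpha @ encoder_states         # weighted average of encoder states
        return alpha, context

    enc = np.random.randn(5, 4)                  # five encoder outputs, hidden size 4
    alpha, c = attend(np.random.randn(4), enc)
    print(alpha.round(2), c.shape)               # alignment over L=5, context of dim 4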
Such actions are often motivated by the enjoyment the bad actor derives from causing harm, despite the negative consequences to others BIBREF6. As such, some bad actors go to great lengths both to avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.", "id": 2205, "question": "What evaluation metric is used?", "title": "Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack" }, { "answers": [ "" ], "context": "The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.", "id": 2206, "question": "What datasets are used?", "title": "Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack" }, { "answers": [ "" ], "context": "The recent adoption of deep learning methods in natural language generation (NLG) for dialogue systems has resulted in an explosion of neural data-to-text generation models, which depend on large amounts of training data. These are typically trained on one of the few parallel corpora publicly available, in particular the E2E BIBREF0 and the WebNLG BIBREF1 datasets. Crowdsourcing large NLG datasets tends to be a costly and time-consuming process, making it impractical outside of task-oriented dialogue systems. At the same time, current neural NLG models struggle to replicate the high language diversity of the training sentences present in these large datasets, and instead they learn to produce the same generic type of sentences as with considerably less training data BIBREF2, BIBREF3, BIBREF4.", "id": 2207, "question": "Is the origin of the dialogues in corpus some video game and what game is that?", "title": "ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation" }, { "answers": [ "Yes, Transformer based seq2seq is evaluated with average BLEU 0.519, METEOR 0.388, ROUGE 0.631, CIDEr 2.531 and SER 2.55%." ], "context": "ViGGO features more than 100 different video game titles, whose attributes were harvested using free API access to two of the largest online video game databases: IGDB and GiantBomb. Using these attributes, we generated a set of 2,300 structured MRs. The human reference utterances for the generated MRs were then crowdsourced using vetted workers on the Amazon Mechanical Turk (MTurk) platform BIBREF18, resulting in 6,900 MR-utterance pairs altogether. With the goal of creating a clean, high-quality dataset, we strived to obtain reference utterances with correct mentions of all slots in the corresponding MR through post-processing.", "id": 2208, "question": "Is any data-to-text generation model trained on this new corpus, what are the results?", "title": "ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation" }, { "answers": [ "" ], "context": "The MRs in the ViGGO dataset range from 1 to 8 slot-value pairs, and the slots come from a set of 14 different video game attributes.
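For concreteness, a hypothetical MR-utterance pair in the style described above; the dialogue act, slots, game title, and reference sentence are invented for illustration and are not taken from the actual corpus.

    # A sketch of one structured MR paired with a crowdsourced reference.
    mr = {
        "da": "inform",                       # one of the corpus's dialogue acts
        "slots": {
            "name": "Example Quest",          # hypothetical game title
            "genre": "role-playing",
            "platforms": "PC",
            "rating": "excellent",
        },
    }
    reference = ("Example Quest is an excellent role-playing game "
                 "available on PC.")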
Table TABREF6 details how these slots may be distributed across the 9 different DAs. The inform DA, represented by 3,000 samples, is the most prevalent one, as the average number of slots it contains is significantly higher than that of all the other DAs. Figure FIGREF7 visualizes the MR length distribution across the entire dataset.", "id": 2209, "question": "How did the authors make sure that the corpus is clean despite being crowdsourced?", "title": "ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation" }, { "answers": [ "" ], "context": "The main motivation here is to equip Sign Language (SL) with software and foster its implementation, as the tools available for SL are paradoxically limited in these digital times. For example, translation assisting software would help respond to the high demand for accessible content and information. But equivalent text-to-text software relies on source and target written forms to work, whereas similar SL support seems impossible without major revision of the typical user interface.", "id": 2210, "question": "Do they build a generative probabilistic language model for sign language?", "title": "A human-editable Sign Language representation for software editing---and a writing system?" }, { "answers": [ "" ], "context": "Documents have sequential structure at different hierarchical levels of abstraction: a document is typically composed of a sequence of sections that have a sequence of paragraphs, a paragraph is essentially a sequence of sentences, each sentence has sequences of phrases that are composed of sequences of words, etc. Capturing this hierarchical sequential structure in a language model (LM) BIBREF0 can potentially give the model more predictive accuracy, as we have seen in previous work BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 .", "id": 2211, "question": "Does CLSTM have any benefits over BERT?", "title": "Contextual LSTM (CLSTM) models for Large scale NLP tasks" }, { "answers": [ "" ], "context": "Humans in general find it relatively easy to have chat-like conversations that are both coherent and engaging at the same time. While not all human chat is engaging, it is arguably coherent BIBREF0 , and it can cover large vocabularies across a wide range of conversational topics. In addition, each contribution by a partner conversant may comprise multiple sentences, such as greeting+question or acknowledgement+statement+question. The topics raised in a conversation may go back and forth without losing coherence. All of these phenomena represent major challenges for current data-driven chatbots.", "id": 2212, "question": "How do they obtain human generated policies?", "title": "Ensemble-Based Deep Reinforcement Learning for Chatbots" }, { "answers": [ "" ], "context": "A reinforcement learning agent induces its behaviour from interacting with an environment through trial and error, where situations (representations of sentences in a dialogue history) are mapped to actions (follow-up sentences) by maximising a long-term reward signal.
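Before the formal characterisation that follows, a minimal sketch of this trial-and-error loop with the conventional discounted-return objective; the toy environment and policy are invented for illustration and are not the paper's dialogue setup.

    # Accumulate the discounted reward sum_k gamma^k * r_k over one episode.
    GAMMA = 0.9  # discount factor

    def rollout(policy, env_step, state, horizon=20):
        ret = 0.0
        for k in range(horizon):
            action = policy(state)
            state, reward = env_step(state, action)
            ret += (GAMMA ** k) * reward
        return ret

    # Toy environment: reward 1 when the action matches the state's parity.
    env = lambda s, a: (s + 1, 1.0 if a == s % 2 else 0.0)
    print(rollout(lambda s: s % 2, env, state=0))   # ~8.78 for all-correct actions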
Such an agent is typically characterised by: (i) a finite set of states INLINEFORM0 that describe all possible situations in the environment; (ii) a finite set of actions INLINEFORM1 to change in the environment from one situation to another; (iii) a state transition function INLINEFORM2 that specifies the next state INLINEFORM3 for having taken action INLINEFORM4 in the current state INLINEFORM5 ; (iv) a reward function INLINEFORM6 that specifies a numerical value given to the agent for taking action INLINEFORM7 in state INLINEFORM8 and transitioning to state INLINEFORM9 ; and (v) a policy INLINEFORM10 that defines a mapping from states to actions BIBREF1 , BIBREF29 . The goal of a reinforcement learning agent is to find an optimal policy by maximising its cumulative discounted reward defined as DISPLAYFORM0 ", "id": 2213, "question": "How many agents do they ensemble over?", "title": "Ensemble-Based Deep Reinforcement Learning for Chatbots" }, { "answers": [ "" ], "context": "Coreference resolution systems group noun phrases (mentions) that refer to the same entity into the same chain. Mentions can be full names (e.g., John Miller), pronouns (e.g., he), demonstratives (e.g., this), comparatives (e.g., the first) or descriptions of the entity (e.g. the 40-year-old) BIBREF0 . Although coreference resolution has been a research focus for several years, systems are still far away from being perfect. Nevertheless, there are many tasks in natural language processing (NLP) which would benefit from coreference information, such as information extraction, question answering or summarization BIBREF1 . In BIBREF2 , for example, we showed that coreference information can also be incorporated into word embedding training. In general, coreference resolution systems can be used as a pre-processing step or as a part of a pipeline of different modules.", "id": 2214, "question": "What is the task of slot filling?", "title": "Impact of Coreference Resolution on Slot Filling" }, { "answers": [ "" ], "context": "Recent years have seen the proliferation of deceptive information online. With the increasing necessity to validate the information from the Internet, automatic fact checking has emerged as an important research topic. It is at the core of multiple applications, e.g., discovery of fake news, rumor detection in social media, information verification in question answering systems, detection of information manipulation agents, and assistive technologies for investigative journalism. At the same time, it touches many aspects, such as credibility of users and sources, information veracity, information verification, and linguistic aspects of deceptive language.", "id": 2215, "question": "Do they report results only on English data?", "title": "Fully Automated Fact Checking Using External Sources" }, { "answers": [ "" ], "context": "Given a claim, our system searches for support information on the Web in order to verify whether the claim is likely to be true. 
The three steps in this process are (i) external support retrieval, (ii) text representation, and (iii) veracity prediction.", "id": 2216, "question": "Does this system improve on the SOTA?", "title": "Fully Automated Fact Checking Using External Sources" }, { "answers": [ " Generate a query out of the claim and querying a search engine, rank the words by means of TF-IDF, use IBM's AlchemyAPI to identify named entities, generate queries of 5–10 tokens, which execute against a search engine, and collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable." ], "context": "This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 .", "id": 2217, "question": "How are the potentially relevant text fragments identified?", "title": "Fully Automated Fact Checking Using External Sources" }, { "answers": [ "" ], "context": "Next, we build the representation of a claim and the corresponding snippets and Web pages. First, we calculate three similarities (a) between the claim and a snippet, or (b) between the claim and a Web page: (i) cosine with tf-idf, (ii) cosine over embeddings, and (iii) containment BIBREF1 . We calculate the embedding of a text as the average of the embeddings of its words; for this, we use pre-trained embeddings from GloVe BIBREF2 . Moreover, as a Web page can be long, we first split it into a set of rolling sentence triplets, then we calculate the similarities between the claim and each triplet, and we take the highest scoring triplet. Finally, as we have up to ten hits from the search engine, we take the maximum and also the average of the three similarities over the snippets and over the Web pages.", "id": 2218, "question": "What algorithm and embedding dimensions are used to build the task-specific embeddings?", "title": "Fully Automated Fact Checking Using External Sources" }, { "answers": [ "" ], "context": "Next, we build classifiers: neural network (NN), support vector machines (SVM), and a combination thereof (SVM+NN).", "id": 2219, "question": "What data is used to build the task-specific embeddings?", "title": "Fully Automated Fact Checking Using External Sources" }, { "answers": [ "" ], "context": "Accurate and efficient semantic parsing is a long-standing goal in natural language processing. There are countless applications for methods that provide deep semantic analyses of sentences. Leveraging semantic information in text may provide improved algorithms for many problems in NLP, such as named entity recognition BIBREF0 , BIBREF1 , BIBREF2 , word sense disambiguation BIBREF3 , BIBREF4 , semantic role labeling BIBREF5 , co-reference resolution BIBREF6 , BIBREF7 , etc. A sufficiently expressive semantic parser may directly provide the solutions to many of these problems. 
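Returning to the fact-checking pipeline above, a sketch of the three claim-snippet similarities it describes (tf-idf cosine, embedding cosine, and containment); the embedding table is a stand-in for pre-trained GloVe vectors, and the sketch assumes at least one word of each text appears in it.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def tfidf_cosine(claim, snippet):
        # Cosine similarity over tf-idf vectors fitted on the pair.
        m = TfidfVectorizer().fit_transform([claim, snippet])
        return cosine_similarity(m[0], m[1])[0, 0]

    def embedding_cosine(claim, snippet, emb):
        # Text embedding = average of its word embeddings (per the description).
        avg = lambda s: np.mean([emb[w] for w in s.split() if w in emb], axis=0)
        a, b = avg(claim), avg(snippet)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def containment(claim, snippet, n=1):
        # Fraction of the claim's n-grams that also occur in the snippet.
        grams = lambda s: set(zip(*[s.split()[i:] for i in range(n)]))
        a, b = grams(claim), grams(snippet)
        return len(a & b) / max(len(a), 1)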
Lower-level language processing tasks, such as those mentioned, may even benefit from incorporating semantic information, especially if the task can be solved jointly during semantic parsing.", "id": 2220, "question": "Do they evaluate the syntactic parses?", "title": "A Probabilistic Generative Grammar for Semantic Parsing" }, { "answers": [ "" ], "context": "Our model is an extension of context-free grammars (CFGs) BIBREF18 that couples syntax and semantics. To generate a sentence in our framework, the semantic statement is first drawn from a prior. A grammar then recursively constructs a syntax tree top-down, randomly selecting production rules from distributions that depend on the semantic statement. We present a particular incarnation of a grammar in this framework, where hierarchical Dirichlet processes (HDPs) BIBREF19 are used to select production rules randomly. The application of HDPs in our setting is novel, requiring a new inference technique.", "id": 2221, "question": "What knowledge bases do they use?", "title": "A Probabilistic Generative Grammar for Semantic Parsing" }, { "answers": [ "" ], "context": "An important property of human communication is that listeners can infer information beyond the literal meaning of an utterance. One well-studied type of inference is scalar inference BIBREF0, BIBREF1, whereby a listener who hears an utterance with a scalar item like some infers the negation of a stronger alternative with all:", "id": 2222, "question": "Which dataset do they use?", "title": "Harnessing the richness of the linguistic signal in predicting pragmatic inferences" }, { "answers": [ "" ], "context": "Pre-trained Language Models (PLMs) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on.", "id": 2223, "question": "What pre-trained models did they compare to?", "title": "Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention" }, { "answers": [ "" ], "context": "The primary goal of this work is to inject word segmentation knowledge into character-level Chinese PLMs and enhance the original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\mathbf {H}$ from the last layer of the encoder are treated as the enriched contextual representations.", "id": 2224, "question": "How does the fusion method work?", "title": "Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention" }, { "answers": [ "weibo-100k, Ontonotes, LCQMC and XNLI" ], "context": "Although the character-level PLM can capture language knowledge from text well, it neglects the semantic information expressed at the word level. Therefore, we apply a word-aligned layer on top of the encoder to integrate word boundary information into the character representations with an attention aggregation mechanism.", "id": 2225, "question": "What dataset did they use?", "title": "Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention" }, { "answers": [ "" ], "context": "As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\pi $.
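A minimal sketch of the word-aligned aggregation just described, assuming character spans produced by a CWS tool; mean pooling stands in here for the paper's attention aggregation mechanism, which is not reproduced exactly.

    # Pool character-level encoder outputs H into one vector per segmented word.
    import numpy as np

    def word_align(H, word_spans):
        """H: [num_chars, dim] character states from the PLM encoder;
        word_spans: [(start, end), ...] from a CWS tool pi."""
        return np.stack([H[s:e].mean(axis=0) for s, e in word_spans])

    H = np.random.randn(6, 8)             # six characters, hidden size 8
    spans = [(0, 2), (2, 3), (3, 6)]      # a hypothetical segmentation pi(S)
    print(word_align(H, spans).shape)     # -> (3, 8): one vector per word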
Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and unsatisfactory model performance. In practice, the ambiguous distinction between morphemes and compound words leads to cognitive divergence over word concepts; thus, different $\pi $ may provide diverse $\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it is natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assuming that $M$ popular CWS tools are employed, we can obtain $M$ different representations $\overline{\textbf {H}}^1, ..., \overline{\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows:", "id": 2226, "question": "What benchmarks did they experiment on?", "title": "Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention" }, { "answers": [ "" ], "context": "In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue Systems (SDS), namely task-oriented dialogue systems and multi-domain dialogue systems. We briefly review the recent literature related to these topics as follows.", "id": 2227, "question": "What were the evaluation metrics used?", "title": "Towards Task-Oriented Dialogue in Mixed Domains" }, { "answers": [ "3029" ], "context": "In this section, we briefly present the two methods that we use in our experiments, which were mentioned in the previous section. The first method is the Sequicity framework and the second one is the state-of-the-art multi-domain dialogue state tracking approach.", "id": 2228, "question": "What is the size of the dataset?", "title": "Towards Task-Oriented Dialogue in Mixed Domains" }, { "answers": [ "" ], "context": "Figure FIGREF1 shows the architecture of the Sequicity framework as described in BIBREF8. In essence, in each turn, the Sequicity model first takes a bspan ($B_1$) and a response ($R_1$), which were determined in the previous step, and the current human question ($U_2$) to generate the current bspan. This bspan is then used together with a knowledge base to generate the corresponding machine answer ($R_2$), as shown in the right part of Figure FIGREF1.", "id": 2229, "question": "What multi-domain dataset is used?", "title": "Towards Task-Oriented Dialogue in Mixed Domains" }, { "answers": [ "" ], "context": "Figure FIGREF8 shows the architecture of the multi-domain belief tracking with knowledge sharing as described in BIBREF9. This is the state-of-the-art belief tracker for multi-domain dialogue.", "id": 2230, "question": "Which domains did they explore?", "title": "Towards Task-Oriented Dialogue in Mixed Domains" }, { "answers": [ "" ], "context": "Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors.
AV, on the other hand, focuses on the question of whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in scenarios where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed.", "id": 2231, "question": "Do they report results only on English data?", "title": "Assessing the Applicability of Authorship Verification Methods" }, { "answers": [ "" ], "context": "Over the years, researchers in the field of authorship analysis have identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions of whether the writing styles of authors of short texts change over time and how this affects AA. To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors despite large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes.", "id": 2232, "question": "Which is the best performing method?", "title": "Assessing the Applicability of Authorship Verification Methods" }, { "answers": [ "" ], "context": "Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. In 2009, Stamatatos BIBREF15 coined the terms profile- and instance-based approaches, which were initially used in the field of AA but later also found their way into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods.
However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to drawing the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, redefine previous definitions, and propose new properties that enable a better comparison between AV methods.", "id": 2233, "question": "What size are the corpora?", "title": "Assessing the Applicability of Authorship Verification Methods" }, { "answers": [ "" ], "context": "Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since the screened literature offers neither a consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification, we decided to reuse a definition from applied statistics and adapt it carefully to AV.", "id": 2234, "question": "What is a self-compiled corpus?", "title": "Assessing the Applicability of Authorship Verification Methods" }, { "answers": [ "MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD" ], "context": "Another important property of an AV method is optimizability. We define an AV method as optimizable if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network, or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category.", "id": 2235, "question": "What are the 12 AV approaches which are examined?", "title": "Assessing the Applicability of Authorship Verification Methods" }, { "answers": [ "Annotation was done with the help of annotators from Amazon Mechanical Turk on snippets of conversations" ], "context": "In contrast to traditional content distribution channels like television, radio and newspapers, the Internet opened the door for direct interaction between the content creator and the audience. One of these forms of interaction is the presence of comments sections found on many websites. The comments section allows visitors, authenticated in some cases and unauthenticated in others, to leave a message for others to read. This is a type of multi-party asynchronous conversation that offers interesting insights: one can learn what the commenting community thinks about the topic being discussed, as well as their sentiment and recommendations, among many other things.
There are some comment sections in which commentators are allowed to directly respond to others, creating a comment hierarchy. These kinds of written conversations are interesting because they shed light on the types of interaction between participants under minimal supervision. This lack of supervision, and in some forums anonymity, gives rise to interactions that may not necessarily be related to the original topic being discussed, and, as in regular conversations, there are participants without the best intentions. Such participants are called trolls in some communities.", "id": 2236, "question": "how was annotation done?", "title": "A Trolling Hierarchy in Social Media and A Conditional Random Field For Trolling Detection" }, { "answers": [ "" ], "context": "Based on the previous definitions we identify four aspects that uniquely define a trolling event-response pair: 1) Intention: the purpose of the author of the comment in consideration, a) trolling, the comment is malicious in nature, aims to disrupt, annoy, offend, harm or spread purposely false information, b) playing, the comment is playful, joking, teasing others without the malicious intentions as in a), or c) none, the comment has no malicious intentions nor is playful, it is a simple comment. 2) Intention Disclosure: this aspect is meant to indicate whether a trolling comment is trying to deceive its readers; the possible values for this aspect are a) the comment's author is a troll and is trying to hide their real intentions, pretending to convey a different meaning, at least temporarily, b) the comment's author is a troll but is clearly exposing their malicious intentions, and c) the comment's author is not a troll, therefore there are no hidden or exposed malicious or playful intentions. There are two aspects defined on the comments that directly address the comment in consideration: 3) Intentions Interpretation: this aspect refers to the responder's understanding of the parent comment's intentions. The possible interpretations are the same as for the intentions aspect: trolling, playing or none. The last element is the 4) Response strategy employed by the commentators directly replying to a comment, which can be a trolling event. The response strategy is influenced directly by the responder's interpretation of the parent comment's intention. We identify 14 possible response strategies. Some of these strategies are tied to combinations of the three other aspects. We briefly define each of them in the appendix.", "id": 2237, "question": "what is the source of the new dataset?", "title": "A Trolling Hierarchy in Social Media and A Conditional Random Field For Trolling Detection" }, { "answers": [ "" ], "context": "Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways spanning routine tasks, media and news reports, detrimental physical and mental health, and also routine conversations. A similar footprint has been left by the global Coronavirus pandemic, particularly since February 2020. The outbreak has not only created havoc in the economic conditions, physical health, working conditions, and manufacturing sector, to name a few, but has also created a niche in the minds of the people worldwide.
It has had serious repercussions on the psychological state of humans, which is most evident now.", "id": 2238, "question": "Do the authors give examples of positive and negative sentiment with regard to the virus?", "title": "Word frequency and sentiment analysis of twitter messages during Coronavirus pandemic" }, { "answers": [ "" ], "context": "Several researchers have devised statistical and mathematical techniques to analyze literary artifacts. A particularly significant approach among these is inferring the pattern of frequency distributions of the words in the text BIBREF2. Zipf's law is mostly immanent in word frequency distributions BIBREF3, BIBREF4. The law essentially proclaims that for the words' vector $x$, the word frequency distribution $\nu $ varies as an inverse power of $x$. Some other distributions that are prevalent include Zipf–Mandelbrot BIBREF5, lognormal BIBREF6, BIBREF7, and Gauss–Poisson BIBREF6. Such studies have been conducted for several languages such as Chinese BIBREF8, Japanese BIBREF9, Hindi BIBREF10 and many others BIBREF2. Not only single word frequencies, but also multi-word frequencies have been exhaustively explored. One of the examples is BIBREF11, wherein bigram and trigram frequencies and versatilities were analyzed and 577 different bigrams and 6,140 different trigrams were reported.", "id": 2239, "question": "Which word frequencies reflect on the psychology of the twitter users, according to the authors?", "title": "Word frequency and sentiment analysis of twitter messages during Coronavirus pandemic" }, { "answers": [ "" ], "context": "In this section, we present the details of the analysis performed on data pertaining to Twitter messages from January 2020 up to now, that is, the period since news of the Coronavirus outbreak in China spread across nations. The word frequency data corresponding to the Twitter messages has been taken from BIBREF16. The data source indicates that from March 11th to March 30th there were over 4 million tweets a day as awareness surged. Also, the data prominently captures tweets in English, Spanish, and French. A total of four datasets have been used to carry out the study.", "id": 2240, "question": "Do they specify which countries they collected twitter data from?", "title": "Word frequency and sentiment analysis of twitter messages during Coronavirus pandemic" }, { "answers": [ "" ], "context": "First, we consider the data corresponding to the number of Twitter IDs tweeting about coronavirus at a particular time. Fig. FIGREF1 depicts the pattern of the Twitter ID evolution. A couple of peaks can be observed in its evolution in the months of February and March.", "id": 2241, "question": "Do they collect only English data?", "title": "Word frequency and sentiment analysis of twitter messages during Coronavirus pandemic" }, { "answers": [ "They look at the performance accuracy of explanation and the prediction performance" ], "context": "In recent years deep neural network models have been successfully applied in a variety of applications such as machine translation BIBREF0 , object recognition BIBREF1 , BIBREF2 , game playing BIBREF3 , dialog BIBREF4 and more. However, their lack of interpretability makes them a less attractive choice when stakeholders must be able to understand and validate the inference process. Examples include medical diagnosis, business decision-making and reasoning, legal and safety compliance, etc.
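Returning to the rank-frequency analysis in the Zipf's-law passage above, a small sketch of how such a power-law pattern can be checked on a token stream; a fitted slope near -1 in log-log space is the classic Zipf signature.

    from collections import Counter
    import numpy as np

    def zipf_slope(tokens):
        # Frequencies sorted by rank, then a least-squares fit in log-log space.
        freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
        ranks = np.arange(1, len(freqs) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
        return slope

    print(zipf_slope("the cat saw the dog and the cat ran".split()))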
This opacity also presents a challenge simply for debugging and improving model performance. For neural systems to move into realms where more transparent, symbolic models are currently employed, we must find mechanisms to ground neural computation in meaningful human concepts, inferences, and explanations. One approach to this problem is to treat the explanation problem itself as a learning problem and train a network to explain the results of a neural computation. This can be done either with a single network learning jointly to explain its own predictions or with separate networks for prediction and explanation. Regardless, the availability of sufficient labelled training data is a key impediment. In previous work BIBREF5 we developed a synthetic conversational reasoning dataset in which the User presents the Agent with a simple, ambiguous story and a challenge question about that story. Ambiguities arise because some of the entities in the story have been replaced by variables, some of which may need to be known to answer the challenge question. A successful Agent must reason about what the answers might be, given the ambiguity, and, if there is more than one possible answer, ask for the value of a relevant variable to reduce the possible answer set. In this paper we present a new dataset e-QRAQ constructed by augmenting the QRAQ simulator with the ability to provide detailed explanations about whether the Agent's response was correct and why. Using this dataset we perform some preliminary experiments, training an extended End-to-End Memory Network architecture BIBREF6 to jointly predict a response and a partial explanation of its reasoning. We consider two types of partial explanation in these experiments: the set of relevant variables, which the Agent must know to ask a relevant, reasoned question; and the set of possible answers, which the Agent must know to answer correctly. We demonstrate a strong correlation between the qualities of the prediction and explanation.", "id": 2242, "question": "How do they measure correlation between the prediction and explanation quality?", "title": "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations" }, { "answers": [ "" ], "context": "Current interpretable machine learning algorithms for deep learning can be divided into two approaches: one approach aims to explain black box models in a model-agnostic fashion BIBREF7 , BIBREF8 ; another studies learning models, in particular deep neural networks, by visualizing for example the activations or gradients inside the networks BIBREF9 , BIBREF10 , BIBREF11 . Other work has studied the interpretability of traditional machine learning algorithms, such as decision trees BIBREF12 , graphical models BIBREF13 , and learned rule-based systems BIBREF14 . Notably, none of these algorithms produces natural language explanations, although the rule-based system is close to a human-understandable form if the features are interpretable. 
We believe one of the major impediments to getting NL explanations is the lack of datasets containing supervised explanations.", "id": 2243, "question": "Does the Agent ask for a value of a variable using natural language generated text?", "title": "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations" }, { "answers": [ "" ], "context": "Before introducing the KB completion task in detail, let us return to the classic Word2Vec example of a “royal” relationship between “ $\mathsf {king}$ ” and “ $\mathsf {man}$ ”, and between “ $\mathsf {queen}$ ” and “ $\mathsf {woman}$ .” As illustrated in this example: $v_{king} - v_{man} \approx v_{queen} - v_{woman}$ , word vectors learned from a large corpus can model relational similarities or linguistic regularities between pairs of words as translations in the projected vector space BIBREF0 , BIBREF1 . Figure 1 shows another example of a relational similarity between word pairs of countries and capital cities: $v_{Japan} - v_{Tokyo} \approx v_{Germany} - v_{Berlin}$, $v_{Germany} - v_{Berlin} \approx v_{Italy} - v_{Rome}$, and $v_{Italy} - v_{Rome} \approx v_{Portugal} - v_{Lisbon}$.", "id": 2244, "question": "What models does this overview cover?", "title": "An overview of embedding models of entities and relationships for knowledge base completion" }, { "answers": [ "They used a dataset from Taobao which contained a collection of conversation records between customers and customer service staff. It contains over five kinds of conversations,\nincluding chit-chat, product and discount consultation, querying delivery progress and after-sales feedback. " ], "context": " $\dagger $ Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), National Natural Science Foundation of China (No. 61672343 and No. 61733011), Key Project of National Society Science Foundation of China (No. 15-ZDA041), The Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04). This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/.", "id": 2245, "question": "What datasets are used to evaluate the introduced method?", "title": "Lingke: A Fine-grained Multi-turn Chatbot for Customer Service" }, { "answers": [ "Their model resulted in values of 0.476, 0.672 and 0.893 for recall at positions 1, 2 and 5, respectively, among 10 candidates." ], "context": "This section presents the architecture of Lingke, whose overall structure is shown in Figure 1 .", "id": 2246, "question": "What are the results achieved from the introduced method?", "title": "Lingke: A Fine-grained Multi-turn Chatbot for Customer Service" }, { "answers": [ "by converting human advice to first-order logic format and using it as an input to calculate gradients" ], "context": "The problem of knowledge base population (KBP) – constructing a knowledge base (KB) of facts gleaned from a large corpus of unstructured data – poses several challenges for the NLP community.
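As an illustration of the translation regularities quoted in the Word2Vec passage above, a toy vector-offset analogy; the 3-d embeddings are invented, and a real setup would instead load pre-trained vectors (e.g., word2vec or GloVe).

    import numpy as np

    emb = {  # hypothetical 3-d embeddings, for illustration only
        "king": np.array([0.9, 0.8, 0.1]), "man": np.array([0.5, 0.1, 0.0]),
        "queen": np.array([0.8, 0.9, 0.6]), "woman": np.array([0.4, 0.2, 0.5]),
    }

    def analogy(a, b, c):
        """Return the vocabulary word closest to v_b - v_a + v_c."""
        target = emb[b] - emb[a] + emb[c]
        cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return max((w for w in emb if w not in {a, b, c}),
                   key=lambda w: cos(emb[w], target))

    print(analogy("man", "king", "woman"))  # expected: "queen"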
Commonly, this relation extraction task is decomposed into two subtasks – entity linking, in which entities are linked to already identified identities within the document or to entities in the existing KB, and slot filling, which identifies certain attributes about a target entity.", "id": 2247, "question": "How do they incorporate human advice?", "title": "Learning Relational Dependency Networks for Relation Extraction" }, { "answers": [ "" ], "context": "We present the different aspects of our pipeline, depicted in Figure FIGREF1 . We will first describe our approach to generating features and training examples from the KBP corpus, before describing the core of our framework – the RDN Boost algorithm.", "id": 2248, "question": "What do they learn jointly?", "title": "Learning Relational Dependency Networks for Relation Extraction" }, { "answers": [ "" ], "context": "Nowadays, people increasingly tend to use social media like Facebook and Twitter as their primary source of information and news consumption. There are several reasons behind this tendency, such as the simplicity of gathering and sharing the news and the possibility of staying abreast of the latest news and updates faster than with traditional media. An important factor is also that people can be engaged in conversations on the latest breaking news with their contacts by using these platforms. Pew Research Center's newest report shows that two-thirds of U.S. adults gather their news from social media, where Twitter is the most used platform. However, the absence of a systematic approach to performing some form of fact and veracity checking may also encourage the spread of rumourous stories and misinformation BIBREF0 . Indeed, in social media, unverified information can spread very quickly and easily become viral, enabling the diffusion of false rumours and fake information.", "id": 2249, "question": "Is this an English-language dataset?", "title": "Stance Classification for Rumour Analysis in Twitter: Exploiting Affective Information and Conversation Structure" }, { "answers": [ "affective features provided by different emotion models such as Emolex, EmoSenticNet, Dictionary of Affect in Language, Affective Norms for English Words and Linguistics Inquiry and Word Count" ], "context": "The SemEval-2017 Task 8 Task A BIBREF2 has as its main objective to determine the stance of the users in a Twitter thread towards a given rumour, in terms of supporting, denying, querying or commenting (SDQC) on the original rumour. A rumour is defined as a “circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth” BIBREF7 . The task was very timely due to the growing importance of rumour resolution in breaking news and to the urgency of preventing the spread of misinformation.", "id": 2250, "question": "What affective-based features are used?", "title": "Stance Classification for Rumour Analysis in Twitter: Exploiting Affective Information and Conversation Structure" }, { "answers": [ "" ], "context": "We developed a new model by exploiting several stylistic and structural features characterizing Twitter language. In addition, we propose to utilize conversation-based features by exploiting the peculiar tree structure of the dataset.
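A sketch of simple conversation-based features that can be derived from such a reply tree; the feature names and the toy thread are illustrative guesses, not the paper's actual feature set.

    # Derive tree-structural features for one tweet in a conversation thread.
    def tree_features(tweet_id, parent_of, children_of):
        depth, node = 0, tweet_id
        while parent_of.get(node) is not None:   # distance from the source tweet
            node, depth = parent_of[node], depth + 1
        return {
            "depth_in_thread": depth,
            "num_direct_replies": len(children_of.get(tweet_id, [])),
            "is_source_tweet": int(depth == 0),
        }

    parent_of = {"t2": "t1", "t3": "t1", "t4": "t2"}   # toy thread rooted at t1
    children_of = {"t1": ["t2", "t3"], "t2": ["t4"]}
    print(tree_features("t4", parent_of, children_of))
    # -> {'depth_in_thread': 2, 'num_direct_replies': 0, 'is_source_tweet': 0}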
We also explored the use of affective-based features by extracting information from several affective resources, including dialogue-act-inspired features.", "id": 2251, "question": "What conversation-based features are used?", "title": "Stance Classification for Rumour Analysis in Twitter: Exploiting Affective Information and Conversation Structure" }, { "answers": [ "" ], "context": "Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . It is of interest to several applications including personalized machine translation, forensics, and marketing BIBREF7 , BIBREF8 .", "id": 2252, "question": "What are the evaluation metrics used?", "title": "Bleaching Text: Abstract Features for Cross-lingual Gender Prediction" }, { "answers": [ "" ], "context": "Depression is a leading contributor to the burden of disability worldwide BIBREF0, BIBREF1, with some evidence that disability attributed to depression is rising, particularly among youth BIBREF2, BIBREF3. A key challenge in reducing the prevalence of depression has been that it is often under-recognized BIBREF4 as well as under-treated BIBREF5. Cognitive-behavioral therapy (CBT) is the most widely researched psychotherapy for depression. It is equivalent to antidepressant medications in its short-term efficacy and evidences superior outcomes in the long term BIBREF6, BIBREF7. The cognitive theory underlying CBT argues that the ways in which individuals process and interpret information about themselves and their world are directly related to the onset, maintenance, and recurrence of their depression BIBREF8, BIBREF9. This model is consistent with information processing accounts of mood regulation BIBREF10 and its dynamics BIBREF11, as well as basic research that supports the role of cognitive reappraisal and language in emotion regulation BIBREF12, BIBREF13, BIBREF14, BIBREF15.", "id": 2253, "question": "Do they report results only on English datasets?", "title": "Depressed individuals express more distorted thinking on social media" }, { "answers": [ "" ], "context": "In NLP, neural language model pre-training has been shown to be effective for improving many tasks BIBREF0 , BIBREF1 . The Transformer BIBREF2 is based solely on the attention mechanism, dispensing with recurrence and convolutions entirely. At present, this model has received extensive attention and plays a key role in many neural language models, such as BERT BIBREF0 , GPT BIBREF3 and Universal Transformer BIBREF4 . However, the large number of parameters in Transformer-based models may cause problems in training and deployment in a limited-resource setting. Thus, the compression of large pre-trained neural language models has been an essential problem in NLP research.", "id": 2254, "question": "What datasets or tasks do they conduct experiments on?", "title": "A Tensorized Transformer for Language Modeling" }, { "answers": [ "Data augmentation (es) improved Adv es by 20% compared to baseline \nData augmentation (cs) improved Adv cs by 16.5% compared to baseline\nData augmentation (cs+es) improved both Adv cs and Adv es by at least 10% compared to baseline \nAll models show improvements over adversarial sets \n" ], "context": "In computer vision, it is well known that otherwise competitive models can be \"fooled\" by adding intentional noise to the input images BIBREF0, BIBREF1.
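A minimal numpy sketch of this kind of gradient-sign perturbation (an FGSM-style attack on a toy linear model; the cited works are not reproduced exactly).

    import numpy as np

    w, b = np.array([1.5, -2.0]), 0.1          # toy linear classifier
    x = np.array([0.3, 0.2])                   # original input, score > 0

    def fgsm(x, eps):
        # For a linear score w.x + b, the input gradient is just w; stepping
        # eps against sign(w) pushes the score towards the opposite class
        # while changing each feature by at most eps.
        return x - eps * np.sign(w)

    score = lambda v: float(w @ v + b)
    print(score(x), score(fgsm(x, eps=0.3)))   # 0.15 -> -0.9: decision flips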
Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them.", "id": 2255, "question": "How big is the performance improvement when the proposed methods are used?", "title": "Improving Robustness of Task Oriented Dialog Systems" }, { "answers": [ "" ], "context": "block = [text width=15em, text centered]", "id": 2256, "question": "How do the authors create an adversarial test set to measure model robustness?", "title": "Improving Robustness of Task Oriented Dialog Systems" }, { "answers": [ "" ], "context": "Few-shot learning (FSL) BIBREF0 , BIBREF1 , BIBREF2 aims to learn classifiers from few examples per class. Recently, deep learning has been successfully exploited for FSL via learning meta-models from a large number of meta-training tasks. These meta-models can then be used for rapid adaptation to the target/meta-testing tasks that have only a few training examples. Examples of such meta-models include: (1) metric-/similarity-based models, which learn contextual and task-specific similarity measures BIBREF3 , BIBREF4 , BIBREF5 ; and (2) optimization-based models, which receive gradients from an FSL task as input and predict either model parameters or parameter updates BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .", "id": 2257, "question": "Do they compare with the MAML algorithm?", "title": "Diverse Few-Shot Text Classification with Multiple Metrics" }, { "answers": [ "In task 1 best transfer learning strategy improves F1 score by 4.4% and accuracy score by 3.3%, in task 2 best transfer learning strategy improves F1 score by 2.9% and accuracy score by 1.7%" ], "context": "User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content.", "id": 2258, "question": "By how much does transfer learning improve performance on this task?", "title": "Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter" }, { "answers": [ "SVM" ], "context": "Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' were published as early as the 1990s BIBREF4 .
An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for the detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning has recently gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea of pre-training a neural network model for the actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies.", "id": 2259, "question": "What baseline is used?", "title": "Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter" }, { "answers": [ "Clusters of Twitter user ids from accounts of American or German political actors, musicians, media websites or sports clubs" ], "context": "Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'.", "id": 2260, "question": "What topic clusters are identified by LDA?", "title": "Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter" }, { "answers": [ "" ], "context": "Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. For this, we use the following labeled as well as unlabeled datasets.", "id": 2261, "question": "What are the near-offensive language categories?", "title": "Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter" }, { "answers": [ "On subtask 3, the best proposed model achieves an F1 score of 92.18 compared to the best previous F1 score of 88.58.\nOn subtask 4, the best proposed model achieves 85.9, 89.9 and 95.6 compared to the best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity."
], "context": "Sentiment analysis (SA) is an important task in natural language processing. It solves the computational processing of opinions, emotions, and subjectivity - sentiment is collected, analyzed and summarized. It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services. The underlying assumption of this task is that the entire text has an overall polarity.", "id": 2262, "question": "How much do they outperform previous state-of-the-art?", "title": "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence" }, { "answers": [ "" ], "context": "In this section, we describe our method in detail.", "id": 2263, "question": "How do they generate the auxiliary sentence?", "title": "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence" }, { "answers": [ "" ], "context": "Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harminng those who are targeted, toxifying public discourse, exacerbating social tensions and could lead to the exclusion of some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics.", "id": 2264, "question": "Have any baseline model been trained on this abusive language dataset?", "title": "Directions in Abusive Language Training Data: Garbage In, Garbage Out" }, { "answers": [ "" ], "context": "The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6.", "id": 2265, "question": "How big are this dataset and catalogue?", "title": "Directions in Abusive Language Training Data: Garbage In, Garbage Out" }, { "answers": [ "" ], "context": "Relevant publications have been identified from four sources to identify training datasets for abusive content detection:", "id": 2266, "question": "What is open website for cataloguing abusive language data?", "title": "Directions in Abusive Language Training Data: Garbage In, Garbage Out" }, { "answers": [ "" ], "context": "The rise of Artificial Intelligence (AI) brings many potential benefits to society, as well as significant risks. 
These risks take a variety of forms, from autonomous weapons and sophisticated cyber-attacks, to the more subtle techniques of societal manipulation. In particular, the threat this technology poses to maintaining peace and political stability is especially relevant to the United Nations (UN) and other international organisations. In terms of applications of AI, several major risks to peace and political stability have been identified, including: the use of automated surveillance platforms to suppress dissent; fake news reports with realistic fabricated video and audio; and the manipulation of information availability BIBREF0 .", "id": 2267, "question": "How many speeches are in the dataset?", "title": "Automated Speech Generation from UN General Assembly Statements: Mapping Risks in AI Generated Texts" }, { "answers": [ "1448 sentences more than the dataset from Bhat et al., 2017" ], "context": "Code-switching (henceforth CS) is the juxtaposition, within the same speech utterance, of grammatical units such as words, phrases, and clauses belonging to two or more different languages BIBREF0 . The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors BIBREF1 . Moreover, code-switching is most prominent in colloquial language use in daily conversations, both online and offline.", "id": 2268, "question": "How big is the provided treebank?", "title": "Universal Dependency Parsing for Hindi-English Code-switching" }, { "answers": [ "" ], "context": "As preliminary steps before parsing CS data, we need to identify the language of tokens and normalize and/or back-transliterate them to enhance the parsing performance. These steps are indispensable for processing CS data, and without them the performance drops drastically, as we will see in the Results section. We need normalization of non-standard word forms and back-transliteration of Romanized Hindi words for addressing the out-of-vocabulary problem, as well as the lexical and syntactic ambiguity introduced by contracted word forms. As we will train separate normalization and back-transliteration models for Hindi and English, we need language identification for selecting which model to use for inference for each word form separately. Moreover, we also need language information for decoding the best word sequences.", "id": 2269, "question": "What is the LAS metric?", "title": "Universal Dependency Parsing for Hindi-English Code-switching" }, { "answers": [ "" ], "context": "Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support.", "id": 2270, "question": "Is the dataset balanced across the four languages?", "title": "ALL-IN-1: Short Text Classification with One Model for All Languages" }, { "answers": [ "" ], "context": "Motivated by the goal of evaluating how well a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that requires neither language-specific processing (beyond tokenization) nor any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective.
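To make the "one simple model for all languages" idea concrete, here is a sketch of such a language-agnostic baseline. The character-n-gram TF-IDF features, logistic regression classifier, and toy data are illustrative assumptions, not necessarily the exact components of the ALL-IN-1 model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One model for all languages: character n-grams require no
# language-specific processing and no parallel data.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)

# Hypothetical feedback messages pooled across languages
texts = ["the app crashes on startup", "l'application est trop lente"]
labels = ["bug", "complaint"]
clf.fit(texts, labels)
print(clf.predict(["die App stuerzt ab"]))
```

Because the same pipeline is fit on the pooled multilingual data, a single set of parameters serves every language at prediction time.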
Our model is depicted in Figure FIGREF7 .", "id": 2271, "question": "What evaluation metrics were used?", "title": "ALL-IN-1: Short Text Classification with One Model for All Languages" }, { "answers": [ "The dataset from a joint ADAPT-Microsoft project" ], "context": "In this section we first describe the IJCNLP 2017 shared task 4, including the data, the features, the model and the evaluation metrics.", "id": 2272, "question": "What dataset was used?", "title": "ALL-IN-1: Short Text Classification with One Model for All Languages" }, { "answers": [ "Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset." ], "context": "Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work BIBREF0 , BIBREF1 . They are also typically used as the main measure for assessing the impact of scientific publications, venues, and researchers BIBREF2 . The nature of citations can differ. Some citations indicate direct use of a method, while others merely acknowledge prior work. Therefore, identifying the intent of citations (Figure 1 ) is critical in improving automated analysis of academic literature and scientific impact measurement BIBREF1 , BIBREF3 . Other applications of citation intent classification are enhanced research experience BIBREF4 , information retrieval BIBREF5 , summarization BIBREF6 , and studying the evolution of scientific fields BIBREF7 .", "id": 2273, "question": "What are the citation intent labels in the datasets?", "title": "Structural Scaffolds for Citation Intent Classification in Scientific Publications" }, { "answers": [ "" ], "context": "We propose a neural multitask learning framework for classification of citation intents. In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers. The auxiliary tasks may not be of interest by themselves but are used to inform the main task. Our model uses a large auxiliary dataset to incorporate the structural information available in scientific documents into the citation intents. The overview of our model is illustrated in Figure 2 .", "id": 2274, "question": "What is the size of the ACL-ARC dataset?", "title": "Structural Scaffolds for Citation Intent Classification in Scientific Publications" }, { "answers": [ "" ], "context": "Affect refers to the experience of a feeling or emotion BIBREF0 , BIBREF1 . This definition includes emotions, sentiments, personality, and moods. The importance of affect analysis in human communication and interactions has been discussed by Picard Picard1997. Historically, affective computing has focused on studying human communication and reactions through multi-modal data gathered through various sensors. The study of human affect from text and other published content is an important topic in language understanding. Word correlation with social and psychological processes is discussed by Pennebaker Pennebaker2011. Preotiuc-Pietro et al. perspara17nlpcss studied personality and psycho-demographic preferences through Facebook and Twitter content. Sentiment analysis in Twitter with a detailed discussion on human affect BIBREF2 and affect analysis in poetry BIBREF3 have also been explored.
Human communication not only contains semantic and syntactic information but also reflects psychological and emotional states. Examples include the use of opinion and emotion words BIBREF4 . The analysis of affect in interpersonal communication such as emails, chats, and longer written articles is necessary for various applications including the study of consumer behavior and psychology, understanding audiences and opinions in computational social science, and more recently for dialogue systems and conversational agents. This is an open research space today.", "id": 2275, "question": "Is the affect of a word affected by context?", "title": "Aff2Vec: Affect--Enriched Distributional Word Representations" }, { "answers": [ "" ], "context": "The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering.", "id": 2276, "question": "asdfasdaf", "title": "Unsupervised Cross-lingual Representation Learning at Scale" }, { "answers": [ "" ], "context": "From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer-based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages.", "id": 2277, "question": "asdfasdf", "title": "Unsupervised Cross-lingual Representation Learning at Scale" }, { "answers": [ "" ], "context": "In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale.", "id": 2278, "question": "asdfasd", "title": "Unsupervised Cross-lingual Representation Learning at Scale" }, { "answers": [ "" ], "context": "We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using SentencePiece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper.", "id": 2279, "question": "asdf", "title": "Unsupervised Cross-lingual Representation Learning at Scale" }, { "answers": [ "Annotators went through various phases to make sure their annotations did not deviate from the mean."
], "context": "Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5.", "id": 2280, "question": "How is quality of annotation measured?", "title": "GoodNewsEveryone: A Corpus of News Headlines Annotated with Emotions, Semantic Roles, and Reader Perception" }, { "answers": [ "" ], "context": "Reasoning over multi-relational data is a key concept in Artificial Intelligence and knowledge graphs have appeared at the forefront as an effective tool to model such multi-relational data. Knowledge graphs have found increasing importance due to its wider range of important applications such as information retrieval BIBREF0 , natural language processing BIBREF1 , recommender systems BIBREF2 , question-answering BIBREF3 and many more. This has led to the increased efforts in constructing numerous large-scale Knowledge Bases (e.g. Freebase BIBREF4 , DBpedia BIBREF5 , Google's Knowledge graph BIBREF6 , Yago BIBREF7 and NELL BIBREF8 ), that can cater to these applications, by representing information available on the web in relational format.", "id": 2281, "question": "On what data is the model evaluated?", "title": "LinkNBed: Multi-Graph Representation Learning with Entity Linkage" }, { "answers": [ "the best performing model obtained an accuracy of 0.86" ], "context": "Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging.", "id": 2282, "question": "What accuracy score do they obtain?", "title": "A Multi-cascaded Deep Model for Bilingual SMS Classification" }, { "answers": [ "" ], "context": "The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. 
Preprocessing of the data is kept minimal. All records having only a single word in the SMS were removed as a cleaning step. To construct the “gold standard\", $313,813$ samples were manually annotated into 12 predefined categories by two annotators under the supervision of a domain expert. The domain expert was involved to ensure the practicality and quality of the “gold standard\". Finally, a stratified sampling method was used to split the data into train and test partitions with an $80-20$ ratio (i.e., $80\\%$ of records for training and $20\\%$ of records for testing). This way, the training split has $251,050$ records while the testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data, along with the train and test splits, are made available. Note that the department names and the services availed by the citizens are mapped to integer identifiers for anonymity.", "id": 2283, "question": "What is their baseline model?", "title": "A Multi-cascaded Deep Model for Bilingual SMS Classification" }, { "answers": [ "" ], "context": "The proposed model, named McM, is mainly inspired by the findings of Reimers and Gurevych (2017), who concluded that deeper models have minimal effect on the predictive performance of a model BIBREF7. McM instead manifests a wider model, which employs three feature learners (cascades) that are trained for classification independently (in parallel).", "id": 2284, "question": "What is the size of the dataset?", "title": "A Multi-cascaded Deep Model for Bilingual SMS Classification" }, { "answers": [ "Appreciation, Satisfied, Peripheral complaint, Demanded inquiry, Corruption, Lagged response, Unresponsive, Medicine payment, Adverse behavior, Grievance ascribed and Obnoxious/irrelevant" ], "context": "A CNN learner is employed to learn $n$-gram features for identifying relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \\in \\mathbb {R}^{k \\times d}$ for the convolution function. The word vectors from position $j$ to position $j + k - 1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as:", "id": 2285, "question": "What are the 12 classes of the bilingual text?", "title": "A Multi-cascaded Deep Model for Bilingual SMS Classification" }, { "answers": [ "" ], "context": "2144 of all 7111 (30.15%) living languages today are African languages BIBREF1. But only a small portion of linguistic resources for NLP research are built for African languages. As a result, there are only a few NLP publications: in all ACL conferences in 2019, only 5 out of 2695 (0.19%) author affiliations were based in Africa BIBREF2. This stark contrast of linguistic richness versus poor representation of African languages in NLP is caused by multiple factors.", "id": 2286, "question": "Which languages do they focus on?", "title": "Masakhane -- Machine Translation For Africa" }, { "answers": [ "The sequence model architectures which this method is transferred to are: LSTM and Transformer-based models" ], "context": "Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text.
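To make the sliding-window convolution described for the McM CNN learner above concrete, here is a small NumPy sketch of how a filter $W \in \mathbb{R}^{k \times d}$ processes the word vectors from position $j$ to $j + k - 1$. The tanh activation, bias, and random data are assumptions for the example, not the paper's exact configuration.

```python
import numpy as np

d, k, seq_len = 8, 3, 10          # embedding dim, kernel size, sentence length
X = np.random.randn(seq_len, d)   # word vectors of one sentence
W = np.random.randn(k, d)         # one 1-d convolution filter
b = 0.0                           # bias term (assumed)

# Each window h_j stacks the k word vectors x_j ... x_{j+k-1};
# the filter produces one feature per window position.
features = np.array([
    np.tanh(np.sum(W * X[j:j + k]) + b)   # elementwise product, then sum
    for j in range(seq_len - k + 1)
])
print(features.shape)  # (seq_len - k + 1,)
```

A real CNN learner would use many such filters and pool over the resulting feature maps; the loop above only exposes the window arithmetic that the passage describes.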
In languages where words are naturally separated (e.g., English), NER has conventionally been formulated as a sequence labeling problem, and state-of-the-art results have been achieved by neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4.", "id": 2287, "question": "Which are the sequence model architectures this method can be transferred across?", "title": "Simplify the Usage of Lexicon in Chinese NER" }, { "answers": [ "Across 4 datasets, the best performing proposed model (CNN) achieved an average of 363% improvement over the state-of-the-art method (LR-CNN)" ], "context": "In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below.", "id": 2288, "question": "What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods?", "title": "Simplify the Usage of Lexicon in Chinese NER" }, { "answers": [ "error rate in a minimal pair ABX discrimination task" ], "context": "Current speech and language technologies based on Deep Neural Networks (DNNs) BIBREF0 require large quantities of transcribed data and additional linguistic resources (phonetic dictionary, transcribed data). Yet, for many languages in the world, such resources are not available and gathering them would be very difficult due to a lack of stable and widespread orthography BIBREF1 .", "id": 2289, "question": "What is the metric that is measured in this paper?", "title": "Sampling strategies in Siamese Networks for unsupervised speech representation learning" }, { "answers": [ "" ], "context": "Attentional sequence-to-sequence models have become the new standard for machine translation over the last two years, and with the unprecedented improvements in translation accuracy comes a new set of technical challenges. One of the biggest challenges is the high training and decoding costs of these neural machine translation (NMT) systems, which are often at least an order of magnitude higher than for a phrase-based system trained on the same data. For instance, phrasal MT systems were able to achieve single-threaded decoding speeds of 100-500 words/sec on decade-old CPUs BIBREF0 , while BIBREF1 reported single-threaded decoding speeds of 8-10 words/sec on a shallow NMT system. BIBREF2 was able to reach CPU decoding speeds of 100 words/sec for a deep model, but used 44 CPU cores to do so. There has been recent work on speeding up decoding by reducing the search space BIBREF3 , but little on computational improvements.", "id": 2290, "question": "Do they only test on one dataset?", "title": "Sharp Models on Dull Hardware: Fast and Accurate Neural Machine Translation Decoding on the CPU" }, { "answers": [ "" ], "context": "The data set we evaluate on in this work is WMT English-French NewsTest2014, which has 380M words of parallel training data and a 3003-sentence test set. The NewsTest2013 set is used for validation.
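The three stacked layers of the generic character-based NER model described above can be sketched as follows. The BiLSTM sequence layer and the plain linear output layer are illustrative stand-ins, since the passage deliberately leaves the concrete layer choices open.

```python
import torch
import torch.nn as nn

class CharNER(nn.Module):
    """Generic character-based NER: representation -> sequence -> labels."""
    def __init__(self, n_chars, n_labels, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb_dim)   # character representation layer
        self.seq = nn.LSTM(emb_dim, hidden,
                           bidirectional=True,
                           batch_first=True)          # sequence modeling layer
        self.out = nn.Linear(2 * hidden, n_labels)    # label inference layer

    def forward(self, char_ids):                # (batch, seq_len)
        h, _ = self.seq(self.embed(char_ids))   # (batch, seq_len, 2 * hidden)
        return self.out(h)                      # per-character label scores

scores = CharNER(n_chars=5000, n_labels=9)(torch.randint(0, 5000, (2, 20)))
print(scores.shape)  # torch.Size([2, 20, 9])
```

A CRF is often used in place of the per-character softmax for the inference layer, but the stacked structure stays the same.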
In order to compare our architecture to past work, we train a word-based system without any data augmentation techniques. The network architecture is very similar to BIBREF4 , and specific details of layer size/depth are provided in subsequent sections. We use an 80k source/target vocab and perform standard unk-replacement BIBREF1 on out-of-vocabulary words. Training is performed using an in-house toolkit.", "id": 2291, "question": "What baseline decoder do they use?", "title": "Sharp Models on Dull Hardware: Fast and Accurate Neural Machine Translation Decoding on the CPU" }, { "answers": [ "BLEU scores" ], "context": "The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h of instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition (ASR) system and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpus as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.", "id": 2292, "question": "What evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?", "title": "Transformer-based Cascaded Multimodal Speech Translation" }, { "answers": [ "" ], "context": "In this section, we briefly describe the proposed multimodal speech translation system and its components.", "id": 2293, "question": "What dataset was used in this work?", "title": "Transformer-based Cascaded Multimodal Speech Translation" }, { "answers": [ "" ], "context": "Learning sentence representations from unlabelled data is becoming increasingly prevalent in both the machine learning and natural language processing research communities, as it efficiently and cheaply allows knowledge extraction that can successfully transfer to downstream tasks. Methods built upon the distributional hypothesis BIBREF0 and distributional similarity BIBREF1 can be roughly categorised into two types:", "id": 2294, "question": "How do they evaluate the sentence representations?", "title": "Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning" }, { "answers": [ "a linear projection and a bijective function with continuous transformation through the ‘affine coupling layer’ of (Dinh et al., 2016). " ], "context": "Learning vector representations for words with a word embedding matrix as the encoder and a context word embedding matrix as the decoder BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 can be considered as a word-level example of our approach, as the models learn to predict the surrounding words in the context given the current word, and the context word embeddings can also be utilised to augment the word embeddings BIBREF14 , BIBREF16 .
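A sketch of the augmentation idea just mentioned, combining the word (encoder) and context (decoder) embedding matrices of a trained word-embedding model. Averaging is one common option; it is not necessarily the exact variant used in BIBREF14 or BIBREF16, and the random matrices stand in for trained parameters.

```python
import numpy as np

vocab, dim = 1000, 100
U = np.random.randn(vocab, dim)  # word (encoder) embedding matrix
V = np.random.randn(vocab, dim)  # context (decoder) embedding matrix

# Augment by averaging the two views of each word; concatenating
# [U ; V] along the feature axis is another common variant.
augmented = (U + V) / 2.0
print(augmented.shape)  # (1000, 100)
```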
We are thus motivated to explore the use of sentence decoders after learning, instead of ignoring them as most sentence encoder-decoder models do.", "id": 2295, "question": "What are the two decoding functions?", "title": "Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning" }, { "answers": [ "" ], "context": "Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is based more on local pronunciations and emerging conventions than on a unique grammar.", "id": 2296, "question": "How is language modelling evaluated?", "title": "Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German" }, { "answers": [ "" ], "context": "In recent years, Twitter, a social media platform with hundreds of millions of users, has become a major source of news for people BIBREF0 . This is especially true for breaking news about real-world events BIBREF1 . The 2011 Japanese earthquake, the 2013 Boston marathon bombings, and the 2015 Paris shootings are just three examples of events where Twitter played a major role in the dissemination of information. However, given the great volume of tweets generated during these events, it becomes extremely difficult to make sense of all the information that is being shared. In this paper, we present a semi-automatic tool that combines state-of-the-art natural language processing and clustering algorithms in a novel way, enabling users to efficiently and accurately identify and track stories that spread on Twitter about particular events. The output of our system can also be used by rumor verification systems to substantiate the veracity of rumors on Twitter BIBREF2 .", "id": 2297, "question": "Why is there only a user study to evaluate the model?", "title": "A Semi-automatic Method for Efficient Detection of Stories on Social Media" }, { "answers": [ "" ], "context": "While machine learning methods conventionally model functions given sample inputs and outputs, a subset of statistical relational learning (SRL) BIBREF0 , BIBREF1 approaches specifically aim to model “things” (entities) and relations between them. These methods usually model human knowledge which is structured in the form of multi-relational Knowledge Graphs (KGs). KGs allow semantically rich queries in search engines, natural language processing (NLP) and question answering. However, they usually miss a substantial portion of true relations, i.e. they are incomplete. Therefore, the prediction of missing links/relations in KGs is a crucial challenge for SRL approaches.", "id": 2298, "question": "What datasets are used to evaluate the model?", "title": "MDE: Multi Distance Embeddings for Link Prediction in Knowledge Graphs" }, { "answers": [ "" ], "context": "Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web.
The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, have resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications.", "id": 2299, "question": "How did they gather the data?", "title": "Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset" }, { "answers": [ "Alarm\nBank\nBus\nCalendar\nEvent\nFlight\nHome\nHotel\nMedia\nMovie\nMusic\nRentalCar\nRestaurant\nRideShare\nService\nTravel\nWeather" ], "context": "Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories:", "id": 2300, "question": "What are the domains covered in the dataset?", "title": "Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset" }, { "answers": [ "" ], "context": "India is one of the most linguistically diverse countries in the world. People of different regions use their own regional languages, giving India the world's second highest number of languages. The languages spoken in India belong to several language families, the two main ones being the Indo-Aryan languages, spoken by 78.05 percent of Indians BIBREF0, and the Dravidian languages, spoken by 19.64 percent BIBREF0. Hindi and Gujarati are among the constitutional languages of India, with nearly 601,688,479 speakers BIBREF0, almost 59 percent of the total country population BIBREF0. The Constitution of India, under Article 343, offers English as a second additional official language, with only 226,449 Indian speakers, nearly 0.02 percent of the total country population BIBREF0. Communication and information exchange among people is necessary for sharing knowledge, feelings, opinions, facts, and thoughts. A variant of English is used globally for human communication, and the content available on the Internet is overwhelmingly dominated by English. Only 20 percent of the world population speaks English, while in India it is only 0.02 percent. It is not possible to rely on human translators in a country with this much language diversity. In order to bridge this vast language gap, we need effective and accurate computational approaches that require minimal human intervention.
This task can be done effectively using machine translation.", "id": 2301, "question": "What is their baseline?", "title": "Neural Machine Translation System of Indic Languages -- An Attention based Approach" }, { "answers": [ "" ], "context": "SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2 .", "id": 2302, "question": "Do they use the cased or uncased BERT model?", "title": "Harvey Mudd College at SemEval-2019 Task 4: The Clint Buchanan Hyperpartisan News Detector" }, { "answers": [ "They pre-train the models using 600000 articles as an unsupervised dataset and then fine-tune the models on the small training set." ], "context": "We build upon the Bidirectional Encoder Representations from Transformers (BERT) model. BERT is a deep bidirectional transformer that has been successfully tuned to a variety of tasks BIBREF0 . BERT functions as a language model over character sequences, with tokenization as described by BIBREF3 . The transformer architecture BIBREF4 relies on self-attention layers to encode a sequence. To allow the language model to be trained in a bidirectional manner instead of predicting tokens autoregressively, BERT was pre-trained to fill in the blanks for a piece of text, also known as the Cloze task BIBREF5 .", "id": 2303, "question": "How are the two different models trained?", "title": "Harvey Mudd College at SemEval-2019 Task 4: The Clint Buchanan Hyperpartisan News Detector" }, { "answers": [ "645, 600000" ], "context": "Next, we describe the variations of the BERT model used in our experiments, the data we used, and details of the setup of each of our experiments.", "id": 2304, "question": "How long is the dataset?", "title": "Harvey Mudd College at SemEval-2019 Task 4: The Clint Buchanan Hyperpartisan News Detector" }, { "answers": [ "The negative effects were insignificant." ], "context": "Multiple tasks may often benefit from others by leveraging more available data. For natural language tasks, a simple approach is to pre-train embeddings BIBREF0, BIBREF1 or a language model BIBREF2, BIBREF3 over a large corpus. The learnt representations may then be used for upstream tasks such as part-of-speech tagging or parsing, for which there is less annotated data. Alternatively, multiple tasks may be trained simultaneously with either a single model or by sharing some model components. In addition to potentially benefiting from multiple data sources, this approach also reduces memory use. However, multi-task models of similar size to single-task baselines often under-perform because of their limited capacity. The underlying multi-task model learns to improve on harder tasks, but may hit a plateau, while simpler (or data-poor) tasks can be over-trained (over-fitted). Regardless of data complexity, some tasks may be forgotten if the schedule is improper, a phenomenon known as catastrophic forgetting BIBREF4.", "id": 2305, "question": "How big are the negative effects of the proposed techniques on high-resource tasks?", "title": "Adaptive Scheduling for Multi-Task Learning" }
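Since BERT is pre-trained on the Cloze-style fill-in-the-blanks objective described above, a fill-mask query exercises that objective directly. A sketch using the Hugging Face transformers library follows; the public bert-base-uncased checkpoint and the example sentence are assumptions for illustration.

```python
from transformers import pipeline

# BERT was pre-trained to fill in masked tokens (the Cloze task),
# so a fill-mask pipeline queries that pre-training objective directly.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The article was flagged as [MASK] news."):
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction carries a probability, reflecting the bidirectional context the model learned during pre-training.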
Kiperwasser and Ballesteros BIBREF8 also propose variable schedules that increasingly favor some tasks over time. As all these schedules are pre-defined (as a function of the training step or amount of available training data), they offer limited control over the performance of all tasks. As such, we consider adaptive schedules that vary based on the validation performance of each task during training.", "id": 2306, "question": "What datasets are used for experiments?", "title": "Adaptive Scheduling for Multi-Task Learning" }, { "answers": [ "English to French and English to German" ], "context": "Explicit schedules may possibly be too restrictive in some circumstances, such as models trained on a very high number of tasks, or when one task is sampled much more often than others. Instead of explicitly varying task schedules, a similar impact may be achieved through learning rate or gradient manipulation. For example, the GradNorm BIBREF9 algorithm scales task gradients based on the magnitude of the gradients as well as on the training losses.", "id": 2307, "question": "Are these techniques used in training multilingual models, and on what languages?", "title": "Adaptive Scheduling for Multi-Task Learning" }, { "answers": [ "" ], "context": "Scaling either the gradients $g_t$ or the per-task learning rates $\\alpha $ is equivalent under standard stochastic gradient descent, but not under adaptive optimizers such as Adam BIBREF7, whose update rule is given in Eq. DISPLAY_FORM5.", "id": 2308, "question": "What non-adaptive baselines are used?", "title": "Adaptive Scheduling for Multi-Task Learning" }, { "answers": [ "" ], "context": "Networks are ubiquitous, with prominent examples including social networks (e.g., Facebook, Twitter) or citation networks of research papers (e.g., arXiv). When analyzing data from these real-world networks, traditional methods often represent vertices (nodes) as one-hot representations (containing the connectivity information of each vertex with respect to all other vertices), usually suffering from issues related to the inherent sparsity of large-scale networks. This results in models that are not able to fully capture the relationships between vertices of the network BIBREF0 , BIBREF1 . Alternatively, network embedding (i.e., network representation learning) has been considered, representing each vertex of a network with a low-dimensional vector that preserves information on its similarity relative to other vertices. This approach has attracted considerable attention in recent years BIBREF2 , BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .", "id": 2309, "question": "What text sequences are associated with each vertex?", "title": "Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment" }, { "answers": [ "" ], "context": "A network (graph) is defined as $G = \\lbrace V,E\\rbrace $ , where $V$ and $E$ denote the set of $N$ vertices (nodes) and edges, respectively, where elements of $E$ are two-element subsets of $V$ . Here we only consider undirected networks; however, our approach (introduced below) can be readily extended to the directed case. We also define $W$ , the symmetric $\\mathbb {R}^{N \\times N}$ matrix whose elements, $w_{ij}$ , denote the weights associated with edges in $E$ , as well as the set of text sequences assigned to each vertex. Edges and weights contain the structural information of the network, while the text can be used to characterize the semantic properties of each vertex.
Given a network $G$ , the network embedding seeks to encode each vertex into a low-dimensional vector (with dimension much smaller than $N$ ), while preserving the structural and semantic features of $G$ .", "id": 2310, "question": "How long does it take for the model to run?", "title": "Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment" }, { "answers": [ "" ], "context": "Improving unsupervised learning is of key importance for advancing machine learning methods, as it unlocks access to almost unlimited amounts of data to be used as training resources. The majority of recent success stories of deep learning do not fall into this category but instead rely on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.", "id": 2311, "question": "Do they report results only on English data?", "title": "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features" }, { "answers": [ "" ], "context": "Our model is inspired by simple matrix factorization models (bilinear models) such as those recently used very successfully in unsupervised learning of word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF5 as well as in supervised sentence classification BIBREF6 . More precisely, these models can all be formalized as an optimization problem of the form DISPLAYFORM0", "id": 2312, "question": "Which other unsupervised models are used for comparison?", "title": "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features" }, { "answers": [ "Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks" ], "context": "We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.", "id": 2313, "question": "What metric is used to measure performance?", "title": "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features" }, { "answers": [ "" ], "context": "In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training. Given a sentence $S$ and a trained model, computing the sentence representation $v_S$ only requires $|S| \\cdot d$ floating point operations (or, to be precise for the n-gram case, a number proportional to the number of n-grams in $S$ , see ( EQREF8 )), where $d$ is the embedding dimension.
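A sketch of why inference is so cheap for an additive model of this kind: the sentence vector is just an average of token embeddings, costing on the order of $|S| \cdot d$ operations. The random embedding table below is a stand-in for a trained model, and the unigram-only composition omits the n-gram entries of the full method.

```python
import numpy as np

d = 300
emb = {w: np.random.randn(d) for w in "the cat sat on the mat".split()}

def additive_sentence_vector(tokens, emb):
    """Additive composition: average the embeddings of all tokens
    (the full model would also include n-gram embeddings);
    cost is O(len(tokens) * d)."""
    return np.mean([emb[t] for t in tokens], axis=0)

v = additive_sentence_vector("the cat sat on the mat".split(), emb)
print(v.shape)  # (300,)
```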
Due to the simplicity of the model, parallel training is straightforward using parallelized or distributed SGD.", "id": 2314, "question": "How do the n-gram features incorporate compositionality?", "title": "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features" }, { "answers": [ "" ], "context": "Language models (LMs) are crucial components in many applications, such as speech recognition and machine translation. The aim of language models is to compute the probability of any given sentence $w_1, \\ldots , w_L$ , which can be calculated as $P(w_1, \\ldots , w_L) = \\prod _{t=1}^{L} P(w_t | w_1, \\ldots , w_{t-1})$", "id": 2315, "question": "Which dataset do they use?", "title": "Future Word Contexts in Neural Network Language Models" }, { "answers": [ "Zipf's law describes the rank-frequency distribution of words, while Heaps-Herdan describes how the number of different words grows with text size (it has been argued that Heaps-Herdan is a consequence of Zipf's law)" ], "context": "Statistical characterization of languages has been a field of study for decades BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptograms BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank-frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kinds of universal results have long piqued the interest of physicists and mathematicians, as well as linguists BIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to trying to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary (that is, the set of different words) grows with the size of a text; empirically, it has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence of Zipf's law BIBREF20, BIBREF21.", "id": 2316, "question": "How do Zipf and Herdan-Heap's laws differ?", "title": "Universal and non-universal text statistics: Clustering coefficient for language identification" }, { "answers": [ "" ], "context": "The goal of the text summarization task is to produce a summary from a set of documents. The summary should retain important information and be reasonably shorter than the original documents BIBREF0 . When the set of documents contains only a single document, the task is usually referred to as single-document summarization. There are two kinds of summarization, characterized by how the summary is produced: extractive and abstractive. Extractive summarization attempts to extract a few important sentences verbatim from the original document. In contrast, abstractive summarization tries to produce an abstract which may contain sentences that do not exist in or are paraphrased from the original document.", "id": 2317, "question": "What was the best performing baseline?", "title": "IndoSum: A New Benchmark Dataset for Indonesian Text Summarization" }, { "answers": [ "" ], "context": "Fachrurrozi et al. BIBREF3 proposed some scoring methods and used them with TF-IDF to rank and summarize news articles.
Another work BIBREF4 used latent Dirichlet allocation coupled with a genetic algorithm to produce summaries for online news articles. Simple methods like naive Bayes have also been used for Indonesian news summarization BIBREF2 , although for English, naive Bayes had been used almost two decades earlier BIBREF5 . A more recent work BIBREF6 employed a summarization algorithm called TextTeaser with some predefined features for news articles as well. Slamet et al. BIBREF7 used TF-IDF to convert sentences into vectors, and their similarities are then computed against another vector obtained from some keywords. They used these similarity scores to extract important sentences as the summary. Unfortunately, none of these works seem to have been evaluated using ROUGE, despite it being the standard metric for text summarization research.", "id": 2318, "question": "Which approaches did they use?", "title": "IndoSum: A New Benchmark Dataset for Indonesian Text Summarization" }, { "answers": [ "" ], "context": "We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 .", "id": 2319, "question": "What is the size of the dataset?", "title": "IndoSum: A New Benchmark Dataset for Indonesian Text Summarization" }, { "answers": [ "" ], "context": "For evaluation, we used ROUGE BIBREF1 , a standard metric for text summarization. We used the implementation provided by pythonrouge. Following BIBREF11 , we report the $F_1$ score of R-1, R-2, and R-L. Intuitively, R-1 and R-2 measure informativeness and R-L measures fluency BIBREF11 . We report the $F_1$ score instead of just the recall score because although we extract a fixed number of sentences as the summary, the number of words is not limited. So, reporting only recall benefits models which extract long sentences.", "id": 2320, "question": "Did they use a crowdsourcing platform for the summaries?", "title": "IndoSum: A New Benchmark Dataset for Indonesian Text Summarization" }, { "answers": [ "Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation, and random word dropout" ], "context": "In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7, dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.", "id": 2321, "question": "How are the synthetic examples generated?", "title": "BLEURT: Learning Robust Metrics for Text Generation" }, { "answers": [ "" ], "context": "Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency.
They syntactically analyze a sentence by greedily applying transitions, which read it from left to right and produce a dependency tree.", "id": 2322, "question": "Do they measure the number of created No-Arc long sequences?", "title": "Non-Projective Dependency Parsing with Non-Local Transitions" }, { "answers": [ "The proposed method achieves 94.5 UAS and 92.4 LAS, compared to 94.3 and 92.2 for the best state-of-the-art greedy-based parser. The best state-of-the-art parser overall achieves 95.8 UAS and 94.6 LAS." ], "context": "The original non-projective parser defined by covington01fundamental was modelled under the transition-based parsing framework by Nivre2008. We only sketch this transition system briefly for space reasons, and refer to BIBREF4 for details.", "id": 2323, "question": "By how much does the new parser outperform the current state-of-the-art?", "title": "Non-Projective Dependency Parsing with Non-Local Transitions" }, { "answers": [ "" ], "context": "A cryptocurrency is a digital currency designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets. Cryptocurrencies are based on decentralized systems built on blockchain technology, a distributed ledger enforced by a disparate network of computers BIBREF0. The first decentralized cryptocurrency, Bitcoin, was released as open-source software in 2009. Since this release, approximately 4000 altcoins (other cryptocurrencies) have been released. As of August 2019, the total market capitalization of cryptocurrencies is $258 billion, with Bitcoin alone having a market capitalization of $179 billion BIBREF1.", "id": 2324, "question": "Do they evaluate only on English datasets?", "title": "KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments" }, { "answers": [ "root mean square error between the actual and the predicted price of Bitcoin for every minute" ], "context": "In this section we present a brief review of the state of the art related to cryptocurrency price prediction. Related works can be divided into three main categories: (i) social media sentiments and financial markets (including cryptocurrency markets); (ii) machine learning for cryptocurrency price prediction; and (iii) big data platforms for financial market prediction.", "id": 2325, "question": "What experimental evaluation is used?", "title": "KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments" }, { "answers": [ "By using Apache Spark, which stores all executions in a lineage graph and recovers to the previous steady state from any fault" ], "context": "KryptoOracle is an engine that aims at predicting the trends of any cryptocurrency based on the sentiment of the crowd. It does so by learning the correlation between the sentiments of relevant tweets and the real-time price of the cryptocurrency. The engine bootstraps itself by first learning from the history given to it and starts predicting based on the learned correlation. KryptoOracle is also capable of reinforcing itself with the mistakes it makes, trying to improve its predictions. In addition, the engine supports trend visualization over time based on records of both incoming data and intermediate results.
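The fault tolerance attributed to Apache Spark in the answer above comes from its lineage graph: transformations are recorded rather than eagerly checkpointed, so a lost partition can be recomputed from its parents after a fault. A minimal PySpark sketch follows; the application name and toy price data are assumptions, not KryptoOracle's actual pipeline.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LineageSketch").getOrCreate()
sc = spark.sparkContext

# Each transformation below is recorded in the RDD lineage graph,
# which is what allows recomputation after a failure.
prices = sc.parallelize([("BTC", 10000.0), ("BTC", 10050.0)])
avg = (prices.mapValues(lambda p: (p, 1))
             .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
             .mapValues(lambda s: s[0] / s[1]))
print(avg.collect())  # [('BTC', 10025.0)]
spark.stop()
```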
This engine has been built keeping in mind the increasing data volume, velocity and variety that have become available, and it is therefore able to scale and manage high volumes of heterogeneous data.", "id": 2326, "question": "How is the architecture fault-tolerant?", "title": "KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments" }, { "answers": [ "handling large volumes of incoming data, sentiment analysis on tweets and predictive online learning" ], "context": "The growth of the volume of data inspired us to opt for a big data architecture which can not only handle the prediction algorithms but also the streaming and increasing volume of data in a fault-tolerant way.", "id": 2327, "question": "Which elements of the platform are modular?", "title": "KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments" }, { "answers": [ "" ], "context": "The spread of misinformation or hate messages through social media is a central societal challenge given the unprecedented broadcast potential of these tools. While there already exist some moderation mechanisms such as crowd-sourced abuse reports and dedicated human teams of moderators, the huge and growing scale of these networks requires some degree of automation for the task.", "id": 2328, "question": "What is the source of memes?", "title": "Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation" }, { "answers": [ "" ], "context": "Hate speech is a widely studied topic in the context of social science. This phenomenon has been monitored, tracked, measured and quantified on a number of occasions BIBREF4, BIBREF5, BIBREF6. It appears in media such as newspapers or TV news, but one of the main focuses of hate speech, with very diverse targets, has been social networks BIBREF7, BIBREF8, BIBREF9. Most works on hate speech detection have focused on language. The most common approach is to generate an embedding of some kind, using bag-of-words BIBREF8 or N-gram features BIBREF10, often using expert knowledge for keywords. After that, the embedding is fed to a binary classifier to predict hate speech. To our knowledge, there is no previous work on detecting hate speech that combines language with visual content, as in memes. Our technical solution is inspired by BIBREF11, in which gang violence on social media was predicted with a multimodal approach that fused images and text. Their model extracted features from both modalities using pretrained embeddings for language and vision, and later merged both vectors to feed the multimodal features into a classifier.", "id": 2329, "question": "Is the dataset multimodal?", "title": "Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation" }, { "answers": [ "" ], "context": "The overall system expects an Internet meme as input, and produces a hate score as output. Figure FIGREF1 shows a block diagram of the proposed solution.", "id": 2330, "question": "How is each instance of the dataset annotated?", "title": "Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation" }, { "answers": [ "" ], "context": "Variational Autoencoder (VAE) BIBREF1 is a powerful method for learning representations of high-dimensional data. However, recent attempts at applying VAEs to text modelling are still far less successful compared to their application to images and speech BIBREF2, BIBREF3, BIBREF4. 
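For reference, the KL regularisation term whose vanishing is at the heart of the collapse issue discussed below has a closed form for a diagonal Gaussian posterior against a standard normal prior; a minimal sketch (the framework choice and tensor shapes are ours, not the paper's):

```python
import torch

def gaussian_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)

# A batch of 2 posteriors over a 4-dimensional latent space: when the
# posterior equals the prior (mu=0, logvar=0), the KL term is exactly zero,
# which is what "KL loss vanishing" looks like during training.
print(gaussian_kl(torch.zeros(2, 4), torch.zeros(2, 4)))  # tensor([0., 0.])
```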
When applying VAEs for text modelling, recurrent neural networks (RNNs) are commonly used as the architecture for both encoder and decoder BIBREF0, BIBREF5, BIBREF6. While such a VAE-RNN based architecture allows effectively encoding and generating variable-length sentences (in the decoding phase), it is also vulnerable to an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks.", "id": 2331, "question": "Which dataset do they use for text modelling?", "title": "A Stable Variational Autoencoder for Text Modelling" }, { "answers": [ "" ], "context": "A variational autoencoder (VAE) is a deep generative model, which combines variational inference with deep learning. The VAE modifies the conventional autoencoder architecture by replacing the deterministic latent representation $\mathbf {z}$ of an input $\mathbf {x}$ with a posterior distribution $P(\mathbf {z}|\mathbf {x})$, and imposing a prior distribution on the posterior, such that the model allows sampling from any point of the latent space and yet is able to generate novel and plausible output. The prior is typically chosen to be a standard normal distribution, i.e., $P(\mathbf {z}) = \mathcal {N}(\mathbf {0},\mathbf {1})$, such that the KL divergence between posterior and prior can be computed in closed form BIBREF1.", "id": 2332, "question": "Do they compare against state of the art text generation?", "title": "A Stable Variational Autoencoder for Text Modelling" }, { "answers": [ "" ], "context": "In this section, we discuss the technical details of the proposed holistic regularisation VAE (HR-VAE) model, a general architecture which can effectively mitigate the KL vanishing phenomenon.", "id": 2333, "question": "How do they evaluate generated text quality?", "title": "A Stable Variational Autoencoder for Text Modelling" }, { "answers": [ "" ], "context": "In recent years, there has been a rapid growth in the usage of social media. People post their day-to-day happenings on a regular basis. BIBREF0 propose four tasks for detecting drug names, classifying medication intake, classifying adverse drug reactions and detecting vaccination behavior from tweets. We participated in Task2 and Task4.", "id": 2334, "question": "Was the system only evaluated over the second shared task?", "title": "Neural DrugNet" }, { "answers": [ "BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy" ], "context": "With the growing amount of biomedical information available in textual form, there have been significant advances in the development of pre-training language representations that can be applied to a range of different tasks in the biomedical domain, such as pre-trained word embeddings, sentence embeddings, and contextual representations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .", "id": 2335, "question": "Could you tell me more about the metrics used for performance evaluation?", "title": "Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets" }, { "answers": [ "" ], "context": "There is a long history of using shared language representations to capture text semantics in biomedical text and data mining research. 
Such research utilizes a technique, termed transfer learning, whereby the language representations are pre-trained on large corpora and fine-tuned in a variety of downstream tasks, such as named entity recognition and relation extraction.", "id": 2336, "question": "which tasks are used in BLUE benchmark?", "title": "Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets" }, { "answers": [ "bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery" ], "context": "Word embeddings are one of the most widely used resources in NLP, as they have proven to be of enormous importance for modeling linguistic phenomena in both supervised and unsupervised settings. In particular, the representation of words in cross-lingual vector spaces (henceforth, cross-lingual word embeddings) is quickly gaining in popularity. One of the main reasons is that they play a crucial role in transferring knowledge from one language to another, specifically in downstream tasks such as information retrieval BIBREF0 , entity linking BIBREF1 and text classification BIBREF2 , while at the same time providing improvements in multilingual NLP problems such as machine translation BIBREF3 .", "id": 2337, "question": "What are the tasks that this method has shown improvements?", "title": "Improving Cross-Lingual Word Embeddings by Meeting in the Middle" }, { "answers": [ "because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space" ], "context": "Bilingual word embeddings have been extensively studied in the literature in recent years. Their nature varies with respect to the supervision signals used for training BIBREF13 , BIBREF14 . Some common signals to learn bilingual embeddings come from parallel BIBREF15 , BIBREF16 , BIBREF17 or comparable corpora BIBREF18 , BIBREF19 , BIBREF20 , or lexical resources such as WordNet, ConceptNet or BabelNet BIBREF21 , BIBREF22 , BIBREF23 . However, these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs.", "id": 2338, "question": "Why does the model improve in monolingual spaces as well? ", "title": "Improving Cross-Lingual Word Embeddings by Meeting in the Middle" }, { "answers": [ "" ], "context": "Electronic Health Records (EHRs) are organized collections of information about individual patients. They are designed such that they can be shared across different settings for providing health care services. The Institute of Medicine committee on improving the patient record has recognized the importance of using EHRs to inform decision support systems and support data-driven quality measures BIBREF0 . One of the biggest challenges in achieving this goal is the difficulty of extracting information from large quantities of EHR data stored as unstructured free text. Clinicians often make use of narratives and first-person stories to document interactions, findings and analyses in patient cases BIBREF1 . 
As a result, finding information in these volumes of health care records typically requires the use of NLP techniques to automate the extraction process.", "id": 2339, "question": "What are the categories being extracted?", "title": "An Interactive Tool for Natural Language Processing on Clinical Text" }, { "answers": [ "" ], "context": "Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations BIBREF0 . An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them. Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs.", "id": 2340, "question": "Do the authors test their annotation projection techniques on tasks other than AMR?", "title": "Cross-lingual Abstract Meaning Representation Parsing" }, { "answers": [ "Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments." ], "context": "AMR is a semantic representation heavily biased towards English, where labels for nodes and edges are either English words or Propbank frames BIBREF5 . The goal of AMR is to abstract away from the syntactic realization of the original sentences while maintaining their underlying meaning. As a consequence, different phrasings of one sentence are expected to provide identical AMR representations. This canonicalization does not always hold across languages: two sentences that express the same meaning in two different languages are not guaranteed to produce identical AMR structures BIBREF6 , BIBREF7 . However, xue2014not show that in many cases the unlabeled AMRs are in fact shared across languages. We are encouraged by this finding and argue that it should be possible to develop algorithms that account for some of these differences when they arise. We therefore introduce a new problem, which we call cross-lingual AMR parsing: given a sentence in any language, the goal is to recover the AMR graph that was originally devised for its English translation. This task is harder than traditional AMR parsing as it requires recovering English labels as well as dealing with structural differences between languages, usually referred to as translation divergence. We propose two initial solutions to this problem: by annotation projection and by machine translation.", "id": 2341, "question": "How is annotation projection done when languages have different word order?", "title": "Cross-lingual Abstract Meaning Representation Parsing" }, { "answers": [ "" ], "context": "Ontology-based knowledge bases (KBs) like DBpedia BIBREF0 are playing an increasingly important role in domains such as knowledge management, data analysis and natural language understanding. Although they are very valuable resources, the usefulness and usability of such KBs are limited by various quality issues BIBREF1 , BIBREF2 , BIBREF3 . One such issue is the use of string literals (both explicitly typed and plain literals) instead of semantically typed entities; for example in the triple $\langle $ River_Thames, passesArea, “Port Meadow, Oxford" $\rangle $ . This weakens the KB as it does not capture the semantics of such literals. If, in contrast, the object of the triple were an entity, then this entity could, e.g., be typed as Wetland and Park, and its location given as Oxford. 
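A hedged sketch of how one might surface such literals for canonicalization (the endpoint is public DBpedia; the query is our illustration, not the paper's tooling):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Count the most frequent string literals used as objects of dbp:location --
# exactly the kind of un-typed values that canonicalization would replace.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbp: <http://dbpedia.org/property/>
    SELECT ?literal (COUNT(?s) AS ?n) WHERE {
        ?s dbp:location ?literal .
        FILTER (isLiteral(?literal))
    }
    GROUP BY ?literal ORDER BY DESC(?n) LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["literal"]["value"], row["n"]["value"])
```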
This problem is pervasive and hence results in a significant loss of information: according to statistics from Gunaratna et al. BIBREF4 in 2016, the DBpedia property dbp:location has over 105,000 unique string literals that could be matched with entities. Besides DBpedia, such literals can also be found in some other KBs from encyclopedias (e.g., zhishi.me BIBREF5 ), in RDF graphs transformed from tabular data (e.g., LinkedGeoData BIBREF6 ), in aligned or evolving KBs, etc. ", "id": 2342, "question": "What is the reasoning method that is used?", "title": "Canonicalizing Knowledge Base Literals" }, { "answers": [ "" ], "context": "In this study we consider a knowledge base (KB) that includes both ontological axioms that induce (at least) a hierarchy of semantic types (i.e., classes), and assertions that describe concrete entities (individuals). Each such assertion is assumed to be in the form of an RDF triple $\\langle s,p,o \\rangle $ , where $s$ is an entity, $p$ is a property and $o$ can be either an entity or a literal (i.e., a typed or untyped data value such as a string or integer).", "id": 2343, "question": "What KB is used in this work?", "title": "Canonicalizing Knowledge Base Literals" }, { "answers": [ "0.8320 on semantic typing, 0.7194 on entity matching" ], "context": "The technical framework for the classification problem is shown in Fig. 1 . It involves three main steps: (i) candidate class extraction; (ii) model training and prediction; and (iii) literal typing and canonicalization.", "id": 2344, "question": "What's the precision of the system?", "title": "Canonicalizing Knowledge Base Literals" }, { "answers": [ "" ], "context": "Modern speech-based assistants, such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri, enable users to complete daily tasks such as shopping, setting reminders, and playing games using voice commands. Such human-like interfaces create a rich experience for users by enabling them to complete many tasks hands- and eyes-free in a conversational manner. Furthermore, these services offer tools to enable developers and customers to create custom voice experiences (skills) and as a result extend the capabilities of the assistant. Amazon's Alexa Skills Kit BIBREF0, Google's Actions and Microsoft's Cortana Skills Kit are examples of such tools. As the number of skills (with potentially overlapping functionality) increases, it becomes more difficult for end users to find the skills that can address their request.", "id": 2345, "question": "How did they measure effectiveness?", "title": "Towards Personalized Dialog Policies for Conversational Skill Discovery" }, { "answers": [ "Answer with content missing: (Table 2) CONCAT ensemble" ], "context": "Imagine that you have a friend who claims to know a lot of trivia. During a quiz, you ask them about the native language of actor Jean Marais. They correctly answer French. For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (another French actor, but with an Italian-sounding name). This time your friend says “Italian, I guess.” If this were a Question Answering (QA) benchmark, your friend would have achieved a respectable accuracy of 50%. Yet, their performance does not indicate factual knowledge about the native languages of actors. 
Rather, it shows that they are able to reason about the likely origins of people's names (see Table TABREF1 for more examples).", "id": 2346, "question": "Which of the two ensembles yields the best performance?", "title": "BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA" }, { "answers": [ "" ], "context": "The LAMA (LAnguage Model Analysis) benchmark BIBREF1 is supposed to probe for “factual and commonsense knowledge” inherent in LMs. In this paper, we focus on LAMA-Google-RE and LAMA-T-REx BIBREF5, which are aimed at factual knowledge. Contrary to most previous works on QA, LAMA tests LMs as-is, without supervised finetuning.", "id": 2347, "question": "What are the two ways of ensembling BERT and E-BERT?", "title": "BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA" }, { "answers": [ "" ], "context": "It is often possible to guess properties of an entity from its name, with zero factual knowledge of the entity itself. This is because entities are often named according to implicit or explicit rules (e.g., the cultural norms involved in naming a child, copyright laws for industrial products, or simply a practical need for descriptive names). LAMA makes guessing even easier by its limited vocabulary, which may only contain a few candidates for a particular entity type.", "id": 2348, "question": "How is it determined that a fact is easy-to-guess?", "title": "BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA" }, { "answers": [ "" ], "context": "Constituent and dependency are two typical syntactic structure representation forms, as shown in Figure FIGREF1, which have been well studied from both linguistic and computational perspectives BIBREF0, BIBREF1. In earlier times, linguists and NLP researchers discussed how to encode lexical dependencies in phrase structures, as in Tree-adjoining grammar (TAG) BIBREF2 and head-driven phrase structure grammar (HPSG) BIBREF3.", "id": 2349, "question": "How is dependency parsing empirically verified?", "title": "Concurrent Parsing of Constituency and Dependency" }, { "answers": [ "" ], "context": "Using an encoder-decoder backbone, our model may be regarded as an extension of the constituent parsing model of BIBREF18, as shown in Figure FIGREF4. The difference is that in our model both constituent and dependency parsing share the same token representation and shared self-attention layers, while each has its own individual self-attention layers and subsequent processing layers. Our model includes four modules: token representation, self-attention encoder, and the constituent and dependency parsing decoders.", "id": 2350, "question": "How are different network components evaluated?", "title": "Concurrent Parsing of Constituency and Dependency" }, { "answers": [ "" ], "context": "In our model, the token representation $x_i$ is composed of character, word and part-of-speech (POS) embeddings. For the character-level representation, we explore two types of encoders, CharCNNs BIBREF19, BIBREF20 and CharLSTM BIBREF18, as the effectiveness of both types has been verified. For the word-level representation, we concatenate randomly initialized and pre-trained word embeddings. 
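A toy sketch of the two composition strategies formalized just below (summing versus concatenating the three embedding views; all dimensions are invented):

```python
import torch

d = 100
x_char = torch.randn(d)  # e.g., from a CharCNN or CharLSTM encoder
x_word = torch.randn(d)  # random-init plus pre-trained word embedding
x_pos  = torch.randn(d)  # POS-tag embedding

x_sum = x_char + x_word + x_pos             # summing: shape (100,)
x_cat = torch.cat([x_char, x_word, x_pos])  # concatenation: shape (300,)
```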
We consider two ways to compose the final token representation: summing, $x_i = x_{char} + x_{word} + x_{POS}$, and concatenation, $x_i = [x_{char}; x_{word}; x_{POS}]$.", "id": 2351, "question": "What are the performances obtained for PTB and CTB?", "title": "Concurrent Parsing of Constituency and Dependency" }, { "answers": [ "" ], "context": "The encoder in our model is adapted from BIBREF21 to factor explicit content and position information in the self-attention process BIBREF18. The input matrix $X = [x_1, x_2, \dots , x_n]$, in which each $x_i$ is concatenated with a position embedding, is transformed by a self-attention encoder. We factor the model between content and position information both in the self-attention sub-layer and the feed-forward network, whose setting details follow BIBREF18. We also try different numbers of shared self-attention layers in section SECREF15.", "id": 2352, "question": "What are the models used to perform constituency and dependency parsing?", "title": "Concurrent Parsing of Constituency and Dependency" }, { "answers": [ "" ], "context": "The capability of deep neural models to handle complex dependencies has benefited various artificial intelligence tasks, such as image recognition, where test error was reduced by scaling VGG nets BIBREF0 up to hundreds of convolutional layers BIBREF1. In NLP, deep self-attention networks have enabled large-scale pretrained language models such as BERT BIBREF2 and GPT BIBREF3 to boost state-of-the-art (SOTA) performance on downstream applications. By contrast, though neural machine translation (NMT) gained encouraging improvement when shifting from a shallow architecture BIBREF4 to deeper ones BIBREF5, BIBREF6, BIBREF7, BIBREF8, the Transformer BIBREF9, a currently SOTA architecture, achieves its best results with merely 6 encoder and decoder layers, and no gains were reported by BIBREF9 from further increasing its depth on standard datasets.", "id": 2353, "question": "Is the proposed layer smaller in parameters than a Transformer?", "title": "Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention" }, { "answers": [ "They initialize their word and entity embeddings with vectors pre-trained over a large corpus of unlabeled data." ], "context": "Named Entity Disambiguation (NED) is the task of linking mentions of entities in text to a given knowledge base, such as Freebase or Wikipedia. NED is a key component in Entity Linking (EL) systems, focusing on the disambiguation task itself, independently from the tasks of Named Entity Recognition (detecting mention bounds) and Candidate Generation (retrieving the set of potential candidate entities). NED has been recognized as an important component in NLP tasks such as semantic parsing BIBREF0 .", "id": 2354, "question": "What is the new initialization method proposed in this paper?", "title": "Named Entity Disambiguation for Noisy Text" }, { "answers": [ "The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia." ], "context": "We introduce WikilinksNED, a large-scale NED dataset based on text fragments from the web. Our dataset is derived from the Wikilinks corpus BIBREF14 , which was constructed by crawling the web and collecting hyperlinks (mentions) linking to Wikipedia concepts (entities) and their surrounding text (context). 
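A minimal sketch of how such hyperlinks yield (mention, entity, context) examples (the snippet, names, and regex are illustrative only, not the corpus' actual extraction pipeline):

```python
import re

# Toy snippet: anchor text = mention, Wikipedia target = entity,
# surrounding words = context.
html = ('... the director worked with <a href='
        '"https://en.wikipedia.org/wiki/Jean_Marais">Jean Marais</a> on set ...')

pattern = r'<a href="https://en\.wikipedia\.org/wiki/([^"]+)">([^<]+)</a>'
for m in re.finditer(pattern, html):
    entity, mention = m.group(1), m.group(2)
    context = re.sub(pattern, r"\2", html)  # strip markup, keep surface text
    print({"mention": mention, "entity": entity, "context": context})
```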
Wikilinks contains 40 million mentions covering 3 million entities, collected from over 10 million web pages.", "id": 2355, "question": "How was quality control performed so that the text is noisy but the annotations are accurate?", "title": "Named Entity Disambiguation for Noisy Text" }, { "answers": [ "No, it is a probabilistic model trained by finding feature weights through gradient ascent" ], "context": "In active machine learning, a learner is able to query an oracle in order to obtain information that is expected to improve performance. Theoretical and empirical results show that active learning can speed up acquisition for a variety of learning tasks BIBREF0 . Although impressive, most work on active machine learning has focused on relatively simple types of information requests (most often a request for a supervised label). In contrast, humans often learn by asking far richer questions which more directly target the critical parameters in a learning task. A human child might ask “Do all dogs have long tails?" or “What is the difference between cats and dogs?" BIBREF1 . A long-term goal of artificial intelligence (AI) is to develop algorithms with a similar capacity to learn by asking rich questions. Our premise is that we can make progress toward this goal by better understanding human question-asking abilities in computational terms BIBREF2 .", "id": 2356, "question": "Is it a neural model? How is it trained?", "title": "Question Asking as Program Generation" }, { "answers": [ "" ], "context": "Twitter is a social network that has been used worldwide as a means of news spreading. In fact, more than 85% of its users use Twitter to be updated with news, and do so on a daily basis BIBREF0. User behaviour on this social network has been found to be efficient in electronic word-of-mouth processes BIBREF1, which is a key component for the quick spreading of breaking news. This would lead one to think that news-related content occupies the majority of the tweet volume. However, on average, the proportion of news-related content to the total content of tweets is 1% worldwide, but increases dramatically (up to 15%) in countries in conflict BIBREF2. An extrapolation of these findings indicates that Colombia might have a high content of news-related tweets, since it is well known that Colombia is one of the most violent countries in the world, and has been for decades BIBREF3.", "id": 2357, "question": "How do people engage in Twitter threads on different types of news?", "title": "Event detection in Colombian security Twitter news using fine-grained latent topic analysis" }, { "answers": [ "" ], "context": "In this section, we describe the dataset used in our research, as well as the methods to perform fine-grained latent topic analysis to process all the data. The method is largely based on our previous work BIBREF17.", "id": 2358, "question": "How are the clusters related to security, violence and crime identified?", "title": "Event detection in Colombian security Twitter news using fine-grained latent topic analysis" }, { "answers": [ "" ], "context": "Recent advances in the visual language field, enabled by deep learning techniques, have succeeded in bridging the gap between vision and language in a variety of tasks, ranging from describing the image BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 to answering questions about the image BIBREF4 , BIBREF5 . 
Such achievements were possible under the premise that there exists a set of ground truth references that are universally applicable regardless of the target, scope, or context. In a real-world setting, however, image descriptions are prone to an infinitely wide range of variabilities, as different viewers may pay attention to different aspects of the image in different contexts, resulting in a variety of descriptions or interpretations. Due to its subjective nature, such diversity is difficult to obtain with conventional image description techniques.", "id": 2359, "question": "What are the features used to customize target user interaction?", "title": "Customized Image Narrative Generation via Interactive Visual Question Generation and Answering" }, { "answers": [ "Through human evaluation where they are asked to evaluate the generated output on a Likert scale." ], "context": "The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating a relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines BIBREF0, BIBREF1 and humans BIBREF2.", "id": 2360, "question": "How do they evaluate the quality of generated output?", "title": "Ask to Learn: A Study on Curiosity-driven Question Generation" }, { "answers": [ "" ], "context": "Deep learning models have been widely applied to text generation tasks such as machine translation BIBREF5, abstractive summarization BIBREF6 or dialog BIBREF7, providing significant gains in performance. The state-of-the-art approaches are based on sequence-to-sequence models BIBREF8, BIBREF9. In recent years, significant research efforts have been directed to the tasks of Machine Reading Comprehension (MRC) and Question Answering (QA) BIBREF0, BIBREF10. The data used for tackling these tasks are usually composed of $\lbrace context, question, answer\rbrace $ triplets: given a context and the question, a model is trained to predict the answer.", "id": 2361, "question": "What automated metrics do the authors investigate?", "title": "Ask to Learn: A Study on Curiosity-driven Question Generation" }, { "answers": [ "" ], "context": "NLP can be extremely useful for enabling scientific inquiry, helping us to quickly and efficiently understand large corpora, gather evidence, and test hypotheses BIBREF0 , BIBREF1 . One domain for which automated analysis is particularly useful is Internet security: researchers obtain large amounts of text data pertinent to active threats or ongoing cybercriminal activity, for which the ability to rapidly characterize that text and draw conclusions can reap major benefits BIBREF2 , BIBREF3 . However, conducting automatic analysis is difficult because this data is out-of-domain for conventional NLP models, which harms the performance of both discrete models BIBREF4 and deep models BIBREF5 . 
Not only that, we show that data from one cybercrime forum is even out of domain with respect to another cybercrime forum, making this data especially challenging.", "id": 2362, "question": "What supervised models are experimented with?", "title": "Identifying Products in Online Cybercrime Marketplaces: A Dataset for Fine-grained Domain Adaptation" }, { "answers": [ "" ], "context": "We consider several forums that vary in the nature of products being traded:", "id": 2363, "question": "Who annotated the data?", "title": "Identifying Products in Online Cybercrime Marketplaces: A Dataset for Fine-grained Domain Adaptation" }, { "answers": [ "Darkode, Hack Forums, Blackhat and Nulled." ], "context": "We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .", "id": 2364, "question": "What are the four forums the data comes from?", "title": "Identifying Products in Online Cybercrime Marketplaces: A Dataset for Fine-grained Domain Adaptation" }, { "answers": [ "" ], "context": "Neural machine translation (NMT) typically makes use of a recurrent neural network (RNN) -based encoder and decoder, along with an attention mechanism BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, it has been shown that RNNs require some supervision to learn syntax BIBREF4 , BIBREF5 , BIBREF6 . Therefore, explicitly incorporating syntactic information into NMT has the potential to improve performance. This is particularly true for source syntax, which can improve the model's representation of the source language.", "id": 2365, "question": "How do they obtain parsed source sentences?", "title": "Multi-Source Syntactic Neural Machine Translation" }, { "answers": [ "" ], "context": "Using linearized parse trees within sequential frameworks was first done in the context of neural parsing. vinyals2015grammar parsed using an attentional seq2seq model; they used linearized, unlexicalized parse trees on the target side and sentences on the source side. In addition, as in this work, they used an external parser to create synthetic parsed training data, resulting in improved parsing performance. choe2016parsing adopted a similar strategy, using linearized parses in an RNN language modeling framework.", "id": 2366, "question": "What kind of encoders are used for the parsed source sentence?", "title": "Multi-Source Syntactic Neural Machine Translation" }, { "answers": [ "" ], "context": "Among the first proposals for using source syntax in NMT was that of luong2015multi, who introduced a multi-task system in which the source data was parsed and translated using a shared encoder and two decoders. More radical changes to the standard NMT paradigm have also been proposed. eriguchi2016tree introduced tree-to-sequence NMT; this model took parse trees as input using a tree-LSTM BIBREF10 encoder. bastings2017graph used a graph convolutional encoder in order to take labeled dependency parses of the source sentences into account. 
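The linearized parse trees used as input by several of the works above can be produced by a simple recursive traversal; a toy sketch (the bracketing convention is illustrative, not the exact format of any cited system):

```python
from typing import Union

Tree = Union[str, tuple]  # a leaf word, or (label, child, child, ...)

def linearize(tree: Tree) -> str:
    """Flatten a parse tree into a bracketed token sequence that a
    sequential encoder can consume alongside the plain source sentence."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    return f"({label} " + " ".join(linearize(c) for c in children) + ")"

parse = ("S", ("NP", ("DT", "the"), ("NN", "cat")), ("VP", ("VBZ", "sleeps")))
print(linearize(parse))  # (S (NP (DT the) (NN cat)) (VP (VBZ sleeps)))
```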
hashimoto2017neural added a latent graph parser to the encoder, allowing it to learn soft dependency parses while simultaneously learning to translate.", "id": 2367, "question": "What is the performance drop of their model when there is no parsed input?", "title": "Multi-Source Syntactic Neural Machine Translation" }, { "answers": [ "" ], "context": "Machine Translation, a field of study within natural language processing, aims at translating natural language automatically using machines. Data-driven machine translation has become the dominant field of study due to the availability of large parallel corpora. The primary objective of data-driven machine translation is to translate unseen source-language text, given that the systems learn translation knowledge from sentence-aligned bilingual training data.", "id": 2368, "question": "How were their results compared to state-of-the-art?", "title": "Self-attention based end-to-end Hindi-English Neural Machine Translation" }, { "answers": [ "" ], "context": "Neural network based approaches have become popular frameworks in many machine learning research fields, showing their advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN).", "id": 2369, "question": "What supports the claim that injecting a CNN into recurrent units will enhance the model's ability to catch local context and reduce ambiguities?", "title": "Contextual Recurrent Units for Cloze-style Reading Comprehension" }, { "answers": [ "" ], "context": "The gated recurrent unit (GRU) was proposed in the context of neural machine translation BIBREF0. It has been shown that the GRU achieves performance comparable to the LSTM on some tasks. Another advantage of the GRU is that it has a simpler neural architecture than the LSTM, making its computation much more efficient.", "id": 2370, "question": "How is the CNN injected into recurrent units?", "title": "Contextual Recurrent Units for Cloze-style Reading Comprehension" }, { "answers": [ "" ], "context": "In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to the GRU BIBREF0 as a preliminary, and then three variants of our CRU model will be illustrated.", "id": 2371, "question": "Are there some results better than state of the art on these tasks?", "title": "Contextual Recurrent Units for Cloze-style Reading Comprehension" }, { "answers": [ "" ], "context": "The Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0; it is similar to the LSTM but much simpler and more computationally efficient. We will briefly introduce the formulation of the GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, the GRU processes the data as follows. For simplicity, the bias term is omitted in the following equations.", "id": 2372, "question": "Do experiment results show consistent significant improvement of the new approach over traditional CNN and RNN models?", "title": "Contextual Recurrent Units for Cloze-style Reading Comprehension" }, { "answers": [ "" ], "context": "Modeling only word-level representations has drawbacks in representing a word that has different meanings when the context varies. 
Here is an example that shows this problem.", "id": 2373, "question": "What datasets are used for testing sentiment classification and reading comprehension?", "title": "Contextual Recurrent Units for Cloze-style Reading Comprehension" }, { "answers": [ "" ], "context": "Encoder-decoder models BIBREF0 are effective in tasks such as machine translation ( BIBREF1 , BIBREF1 ; BIBREF2 , BIBREF2 ) and grammatical error correction BIBREF3 . Vocabulary in encoder-decoder models is generally selected from the training corpus in descending order of frequency, and low-frequency words are replaced with an unknown word token <unk>. The so-called out-of-vocabulary (OOV) words are replaced with <unk> so as not to increase the decoder's complexity and to reduce noise. However, naive frequency-based OOV replacement may lead to loss of information that is necessary for modeling context in the encoder.", "id": 2374, "question": "So we do not use pre-trained embeddings in this case?", "title": "Graph-based Filtering of Out-of-Vocabulary Words for Encoder-Decoder Models" }, { "answers": [ "BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer." ], "context": "In a long conversation, there is a tendency for semantically related words or phrases to recur across sentences; in other words, there is topical coherence. Existing speech recognition systems are built at the individual, isolated utterance level in order to make building systems computationally feasible. However, this may lose important conversational context information. There have been many studies that have attempted to inject longer context information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , but all of these models were developed on text data for the language modeling task.", "id": 2375, "question": "How are sentence embeddings incorporated into the speech recognition system?", "title": "Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion" }, { "answers": [ "the training dataset is large while the target dataset is usually much smaller" ], "context": "One of the most important characteristics of an intelligent system is to understand stories like humans do. A story is a sequence of sentences, and can be in the form of plain text BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 or spoken content BIBREF0 , where the latter usually requires the spoken content to be first transcribed into text by automatic speech recognition (ASR), and the model will subsequently process the ASR output. To evaluate the extent of the model's understanding of the story, it is asked to answer questions about the story. Such a task is referred to as question answering (QA), and has been a long-standing yet challenging problem in natural language processing (NLP).", "id": 2376, "question": "How different is the dataset size of source and target?", "title": "Supervised and Unsupervised Transfer Learning for Question Answering" }, { "answers": [ "" ], "context": "Knowledge about entities is essential for understanding human language. This knowledge can be attributional (e.g., canFly, isEdible), type-based (e.g., isFood, isPolitician, isDisease) or relational (e.g., marriedTo, bornIn). Knowledge bases (KBs) are designed to store this information in a structured way, so that it can be queried easily. 
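A toy illustration of the three kinds of entity knowledge as queryable triples (the mini KB and helper below are invented for the sketch):

```python
# Each kind of entity knowledge as a (subject, predicate, object) triple.
triples = [
    ("penguin", "canFly", "false"),      # attributional
    ("penguin", "isA", "Bird"),          # type-based
    ("Obama", "marriedTo", "Michelle"),  # relational
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the fields that are not None."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="penguin"))  # everything the toy KB asserts about penguins
```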
Examples of such KBs are Freebase BIBREF3 , Wikipedia, Google knowledge graph and YAGO BIBREF4 . For automatic updating and completing the entity knowledge, text resources such as news, user forums, textbooks or any other data in the form of text are important sources. Therefore, information extraction methods have been introduced to extract knowledge about entities from text. In this paper, we focus on the extraction of entity types, i.e., assigning types to – or typing – entities. Type information can help extraction of relations by applying constraints on relation arguments.", "id": 2377, "question": "How do you find the entity descriptions?", "title": "Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities" }, { "answers": [ "" ], "context": "Natural language based question answering (NLQA) not only involves linguistic understanding, but often involves reasoning with various kinds of knowledge. In recent years, many NLQA datasets and challenges have been proposed, for example, SQuAD BIBREF0 , TriviaQA BIBREF1 and MultiRC BIBREF2 , and each of them have their own focus, sometimes by design and other times by virtue of their development methodology. Many of these datasets and challenges try to mimic human question answering settings. One such setting is open book question answering where humans are asked to answer questions in a setup where they can refer to books and other materials related to their questions. In such a setting, the focus is not on memorization but, as mentioned in BIBREF3 , on “deeper understanding of the materials and its application to new situations BIBREF4 , BIBREF5 .” In BIBREF3 , they propose the OpenBookQA dataset mimicking this setting.", "id": 2378, "question": "How is OpenBookQA different from other natural language QA?", "title": "Careful Selection of Knowledge to solve Open Book Question Answering" }, { "answers": [ "" ], "context": "Business documents broadly characterize a large class of documents that are central to the operation of business. These include legal contracts, purchase orders, financial statements, regulatory filings, and more. Such documents have a number of characteristics that set them apart from the types of texts that most NLP techniques today are designed to process (Wikipedia articles, news stories, web pages, etc.): They are heterogeneous and frequently contain a mix of both free text as well as semi-structured elements (tables, headings, etc.). They are, by definition, domain specific, often with vocabulary, phrases, and linguistic structures (e.g., legal boilerplate and terms of art) that are rarely seen in general natural language corpora.", "id": 2379, "question": "At what text unit/level were documents processed?", "title": "Rapid Adaptation of BERT for Information Extraction on Domain-Specific Business Documents" }, { "answers": [ "" ], "context": "Within the broad space of business documents, we have decided to focus on two specific types: regulatory filings and property lease agreements. While our approach is not language specific, all our work is conducted on Chinese documents. In this section, we first describe these documents and our corpora, our sequence labeling model, and finally our evaluation approach.", "id": 2380, "question": "What evaluation metric were used for presenting results? ", "title": "Rapid Adaptation of BERT for Information Extraction on Domain-Specific Business Documents" }, { "answers": [ "" ], "context": "Regulatory Filings. 
We focused on a specific type of filing: disclosures of pledges by shareholders when their shares are offered up for collateral. These are publicly accessible and were gathered from the database of a stock exchange in China. We observe that most of these announcements are fairly formulaic, likely generated by templates. However, we treated them all as natural language text and did not exploit this observation; for example, we made no explicit attempt to induce template structure or apply clustering—although such techniques would likely improve extraction accuracy. In total, we collected and manually annotated 150 filings, which were divided into training, validation, and test sets with a 6:2:2 split. Our test corpus comprises 30 regulatory filings. Table TABREF6 enumerates the seven content elements that we extract.", "id": 2381, "question": "Was the structure of regulatory filings exploited when training the model? ", "title": "Rapid Adaptation of BERT for Information Extraction on Domain-Specific Business Documents" }, { "answers": [ "Variety of formats supported (PDF, Word...), user can define content elements of document" ], "context": "An obvious approach to content element extraction is to formulate the problem as a sequence labeling task. Prior to the advent of neural networks, Conditional Random Fields (CRFs) BIBREF4, BIBREF5 represented the most popular approach to this task. Starting from a few years ago, neural networks have become the dominant approach, starting with RNNs BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Most recently, deep transformer-based models such as BERT represent the state of the art in this task BIBREF1, BIBREF12, BIBREF13 . We adopt the sequence labeling approach of BIBREF1, based on annotations of our corpus using a standard BIO tagging scheme with respect to the content elements we are interested in.", "id": 2382, "question": "What type of documents are supported by the annotation platform?", "title": "Rapid Adaptation of BERT for Information Extraction on Domain-Specific Business Documents" }, { "answers": [ "" ], "context": "Disinformation presents a serious threat to society, as the proliferation of fake news can have a significant impact on an individual's perception of reality. Fake news is a claim or story that is fabricated, with the intention to deceive, often for a secondary motive such as economic or political gain BIBREF0. In the age of digital news and social media, fake news can spread rapidly, impacting large amounts of people in a short period of time BIBREF1. To mitigate the negative impact of fake news on society, various organizations now employ personnel to verify dubious claims through a manual fact-checking procedure, however, this process is very laborious. With a fast-paced modern news cycle, many journalists and fact-checkers are under increased stress to be more efficient in their daily work. To assist in this process, automated fact-checking has been proposed as a potential solution BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.", "id": 2383, "question": "What are the state-of-the-art models for the task?", "title": "Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection" }, { "answers": [ "" ], "context": "Semantic composition plays an important role in sentiment analysis of phrases and sentences. 
This includes detecting the scope and impact of negation in reversing a sentiment's polarity, as well as quantifying the influence of modifiers, such as degree adverbs and intensifiers, in rescaling the sentiment's intensity BIBREF0 .", "id": 2384, "question": "Which datasets are used for evaluation?", "title": "Explaining Recurrent Neural Network Predictions in Sentiment Analysis" }, { "answers": [ "optimize single task with no synthetic data" ], "context": "One of the main challenges in building a Natural Language Understanding (NLU) component for a specific task is the human effort necessary to encode the task's specific knowledge. In traditional NLU components, this was done by creating hand-written rules. In today's state-of-the-art NLU components, significant amounts of human effort have to be used for collecting the training data. For example, when building an NLU component for airplane travel information, there are many possibilities to express the situation that someone wants to book a flight from New York to Pittsburgh. In order to build a system, we need to have seen many of them in the training data. Although more and more data has been collected and datasets with this data have been published BIBREF0 , the datasets often consist of data from a domain other than the one needed for a certain NLU component.", "id": 2385, "question": "What are the strong baselines you have?", "title": "Multi-task learning to improve natural language understanding" }, { "answers": [ "networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans" ], "context": "In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\rightarrow $ “sickness”) of the causal attribution network.", "id": 2386, "question": "What are causal attribution networks?", "title": "Inferring the size of the causal universe: features and fusion of causal attribution networks" }, { "answers": [ "" ], "context": "Urban legends are a genre of modern folklore consisting of stories told as true – and plausible enough to be believed – about some rare and exceptional events that supposedly happened to a real person or in a real place.", "id": 2387, "question": "How accurate is their predictive model?", "title": "Why Do Urban Legends Go Viral?" }, { "answers": [ "" ], "context": "This research is motivated by the need to uncover presumed underlying linguistic evolutionary principles and to analyse correlations between the world's languages. For centuries people have been speculating about the origins of language; however, this subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages. It is based on the phonetic representation of certain key words and concepts. 
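One simple way to realize such a phonetic comparison is a normalized edit distance over transcriptions; a hedged sketch (the transcriptions are invented, and the actual similarity measure used may differ):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def phonetic_similarity(a: str, b: str) -> float:
    """1.0 for identical transcriptions, 0.0 for maximally different ones."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)

# Toy transcriptions of a cognate word in two related languages.
print(phonetic_similarity("mama", "mamma"))  # 0.8
```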
This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationship between languages.", "id": 2388, "question": "How large language sets are able to be explored using this approach?", "title": "An efficient automated data analytics approach to large scale computational comparative linguistics" }, { "answers": [ "if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture." ], "context": "1.1em", "id": 2389, "question": "how did they ask if a tweet was racist?", "title": "A Dictionary-based Approach to Racism Detection in Dutch Social Media" }, { "answers": [ "" ], "context": "Morphological analysis (hajivc1998tagging, oflazer1994tagging, inter alia) is the task of predicting fine-grained annotations about the syntactic properties of tokens in a language such as part-of-speech, case, or tense. For instance, in Figure FIGREF2 , the given Portuguese sentence is labeled with the respective morphological tags such as Gender and its label value Masculine.", "id": 2390, "question": "What other cross-lingual approaches is the model compared to?", "title": "Neural Factor Graph Models for Cross-lingual Morphological Tagging" }, { "answers": [ "" ], "context": "Formally, we define the problem of morphological analysis as the task of mapping a length- INLINEFORM0 string of tokens INLINEFORM1 into the target morphological tag sets for each token INLINEFORM2 . For the INLINEFORM3 th token, the target label INLINEFORM4 defines a set of tags (e.g. {Gender: Masc, Number: Sing, POS: Verb}). An annotation schema defines a set INLINEFORM5 of INLINEFORM6 possible tag types and with the INLINEFORM7 th type (e.g. Gender) defining its set of possible labels INLINEFORM8 (e.g. {Masc, Fem, Neu}) such that INLINEFORM9 . We must note that not all tags or attributes need to be specified for a token; usually, a subset of INLINEFORM10 is specified for a token and the remaining tags can be treated as mapping to a INLINEFORM11 value. Let INLINEFORM12 denote the set of all possible tag sets.", "id": 2391, "question": "What languages are explored?", "title": "Neural Factor Graph Models for Cross-lingual Morphological Tagging" }, { "answers": [ "" ], "context": "Recent years have seen a rapid increase of robotic deployment, beyond traditional applications in cordoned-off workcells in factories, into new, more collaborative use-cases. For example, social robotics and service robotics have targeted scenarios like rehabilitation, where a robot operates in close proximity to a human. While industrial applications envision full autonomy, these collaborative scenarios involve interaction between robots and humans and require effective communication. For instance, a robot that is not able to reach an object may ask for a pick-and-place to be executed in the context of collaborative assembly. Or, in the context of a robotic assistant, a robot may ask for confirmation of a pick-and-place requested by a person.", "id": 2392, "question": "How many human subjects were used in the study?", "title": "That and There: Judging the Intent of Pointing Actions with Robotic Arms" }, { "answers": [ "By treating logical forms as a latent variable and training a discriminative log-linear model over logical form y given x." ], "context": "Semantic parsing is the task of converting natural language utterances into machine-understandable meaning representations or logical forms. 
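As a concrete (if drastically simplified) illustration of the task, a single hand-written rule mapping one question pattern to a logical form; real parsers learn such mappings rather than hard-coding them:

```python
import re

def parse(utterance: str) -> str:
    """Map one toy question pattern to a logical form (illustrative only)."""
    m = re.match(r"what is the capital of (\w+)\?", utterance.lower())
    if m:
        return f"answer(capital({m.group(1)}))"
    raise ValueError("no parse")

print(parse("What is the capital of France?"))  # answer(capital(france))
```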
The task has attracted much attention in the literature due to a wide range of applications ranging from question answering BIBREF0 , BIBREF1 to relation extraction BIBREF2 , goal-oriented dialog BIBREF3 , and instruction understanding BIBREF4 , BIBREF5 , BIBREF6 .", "id": 2393, "question": "How does the model compute the likelihood of executing to the correction semantic denotation?", "title": "Weakly-supervised Neural Semantic Parsing with a Generative Ranker" }, { "answers": [ "" ], "context": "Neural Machine Translation (NMT) has achieved great successes on machine translation tasks recently BIBREF0 , BIBREF1 . Generally, it relies on a recurrent neural network under the Encode-Decode framework: it firstly encodes a source sentence into context vectors and then generates its translation token-by-token, selecting from the target vocabulary. Among different variants of NMT, attention based NMT, which is the focus of this paper, is attracting increasing interests in the community BIBREF0 , BIBREF2 . One of its advantages is that it is able to dynamically make use of the encoded context through an attention mechanism thereby allowing the use of fewer hidden layers while still maintaining high levels of translation performance.", "id": 2394, "question": "Which conventional alignment models do they use as guidance?", "title": "Neural Machine Translation with Supervised Attention" }, { "answers": [ "" ], "context": "Suppose INLINEFORM0 denotes a source sentence, INLINEFORM1 a target sentence. In addition, let INLINEFORM2 denote a prefix of INLINEFORM3 . Neural Machine Translation (NMT) directly maps a source sentence into a target under an encode-decode framework. In the encoding stage, it uses two bidirectional recurrent neural networks to encode INLINEFORM4 into a sequence of vectors INLINEFORM5 , with INLINEFORM6 representing the concatenation of two vectors for INLINEFORM7 source word from two directional RNNs. In the decoding stage, it generates the target translation from the conditional probability over the pair of sequences INLINEFORM8 and INLINEFORM9 via a recurrent neural network parametrized by INLINEFORM10 as follows: DISPLAYFORM0 ", "id": 2395, "question": "Which dataset do they use?", "title": "Neural Machine Translation with Supervised Attention" }, { "answers": [ "" ], "context": "Speaker diarization is the task of segmenting an audio recording in time, indexing each segment by speaker identity. In the standard version of the task BIBREF0, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, the task implies finding speaker boundaries and grouping segments that belong to the same speaker (including determining the number of distinct speakers). Often diarization is run, in parallel or in sequence, with speech recognition with the goal of achieving speaker-attributed speech-to-text transcription BIBREF1.", "id": 2396, "question": "On average, by how much do they reduce the diarization error?", "title": "Dover: A Method for Combining Diarization Outputs" }, { "answers": [ "" ], "context": "The reason that combining diarization outputs in a ROVER-like manner is not straightforward is the complex structure of the task: a diarization system has to perform segmentation (finding speaker boundaries) and decisions about identity of speakers across segments. Where those functions are performed by specialized classifiers inside the diarization algorithm, ensemble methods could easily be used. 
For example, multiple speaker change detectors could vote on a consensus, or a speaker clustering algorithm could combine multiple acoustic embeddings to evaluate cluster similarity BIBREF7.", "id": 2397, "question": "Do they compare their algorithm to voting without weights?", "title": "Dover: A Method for Combining Diarization Outputs" }, { "answers": [ "" ], "context": "Our algorithm maps the anonymous speaker labels from multiple diarization outputs into a common label space, and then performs a simple voting for each region of audio. A “region” for this purpose is a maximal segment delimited by any of the original speaker boundaries, from any of the input segmentations. The combined (or consensus) labeling is then obtained by stringing the majority labels for all regions together.", "id": 2398, "question": "How do they assign weights between votes in their DOVER algorithm?", "title": "Dover: A Method for Combining Diarization Outputs" }, { "answers": [ "ISOT dataset: LLVM\nLiar dataset: Hybrid CNN and LSTM with attention" ], "context": "Flexibility and ease of access to social media have resulted in the use of online channels for news access by a great number of people. For example, nearly two-thirds of American adults access news through online channels BIBREF0, BIBREF1. BIBREF2 also reported that social media and news consumption have significantly increased in Great Britain.", "id": 2399, "question": "What are the state-of-the-art methods the authors compare their work with?", "title": "Detecting Fake News with Capsule Neural Networks" }, { "answers": [ "" ], "context": "With more than one hundred thousand new scholarly articles being published each year, there is a rapid growth in the number of citations for the relevant scientific articles. In this context, we highlight the following interesting facts about the process of citing scientific articles: (i) the most commonly cited paper by Gerard Salton, titled “A Vector Space Model for Information Retrieval” (alleged to have been published in 1975) does not actually exist in reality BIBREF0 , (ii) the scientific authors read only 20% of the works they cite BIBREF1 , (iii) one third of the references in a paper are redundant and 40% are perfunctory BIBREF2 , (iv) 62.7% of the references could not be attributed a specific function (definition, tool etc.) BIBREF3 . Despite these facts, the existing bibliographic metrics consider that all citations are equally significant.", "id": 2400, "question": "What are the baseline models?", "title": "All Fingers are not Equal: Intensity of References in Scientific Articles" }, { "answers": [ "" ], "context": "Code-switching has received a lot of attention from the speech and computational linguistics communities, especially on how to automatically recognize text from speech and understand the structure within it. This phenomenon is very common in bilingual and multilingual communities. For decades, linguists have studied this phenomenon and found that speakers switch at certain points rather than randomly, obeying several constraints that point to the code-switched position in an utterance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . These hypotheses have been empirically proven by observing that bilinguals tend to code-switch intra-sententially at certain (morpho)-syntactic boundaries BIBREF5 .
BIBREF1 defined the well-known theory that constrains code-switching between a functional head and its complement, given the strong relationship between the two constituents, which corresponds to a hierarchical structure in terms of Part-of-Speech (POS) tags. BIBREF3 introduced the Matrix-Language Frame model for the intra-sentential case, where the primary language is called the Matrix Language and the second one is called the Embedded Language BIBREF2 . A language island was then introduced: a constituent composed entirely of morphemes from a single language. From the Matrix-Language Frame model, both matrix language (ML) islands and embedded language (EL) islands are well-formed in their respective grammars, and the EL islands are constrained under the ML grammar BIBREF6 . BIBREF7 studied determiner–noun switches in Spanish–English bilinguals.", "id": 2401, "question": "What is the architecture of the model?", "title": "Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning" }, { "answers": [ "" ], "context": "The earliest language modeling research on code-switching data applied linguistic theories to computational models, such as Inversion Constraints and Functional Head Constraints on Chinese-English code-switching data BIBREF9 , BIBREF10 . BIBREF11 built a bilingual language model which is trained by interpolating two monolingual language models with statistical machine translation (SMT) based text generation to generate artificial code-switching text. BIBREF12 , BIBREF13 introduced a class-based method using RNNLM for computing the posterior probability and added POS tags to the input. BIBREF14 explored the combination of Brown word clusters, open class words, and clusters of open class word embeddings as hand-crafted features for improving the factored language model. In addition, BIBREF15 proposed generative language modeling with explicit phrase structure. A method of tying input and output embeddings helped to reduce the number of parameters in the language model and improved perplexity BIBREF16 .", "id": 2402, "question": "What languages are explored in the work?", "title": "Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning" }, { "answers": [ "" ], "context": "Natural language processing (NLP) with neural networks has grown in importance over the last few years. Neural networks provide state-of-the-art models for tasks like coreference resolution, language modeling, and machine translation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . However, since these models are trained on human language texts, a natural question is whether they exhibit bias based on gender or other characteristics, and, if so, how this bias should be mitigated. This is the question that we address in this paper.", "id": 2403, "question": "What is the state-of-the-art neural coreference resolution model?", "title": "Gender Bias in Neural Natural Language Processing" }, { "answers": [ "Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak." ], "context": "Satirical news, which uses parody characterized in a conventional news style, has now become a form of entertainment on social media. While news satire is claimed to be purely comedic and meant for amusement, it makes statements on real events, often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsification.
Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature-based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or a lexical database, and other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural-net-based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.", "id": 2404, "question": "How much improvement do they get?", "title": "Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets" }, { "answers": [ "" ], "context": "Satirical news detection is an important yet challenging NLP task. Many feature-based models have been proposed. Burfoot et al. extracted headline, profanity, and slang features using word relations given by statistical metrics and a lexical database BIBREF1. Rubin et al. proposed an SVM-based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on a dictionary and writing-style features from part-of-speech tag distribution frequencies BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19. The detection of a certain type of sarcasm which contrasts positive sentiment with a negative situation by analyzing the sentence pattern with bootstrapped learning was also discussed BIBREF20. Although word-level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word-level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.", "id": 2405, "question": "How large is the dataset?", "title": "Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets" }, { "answers": [ "" ], "context": "In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train a game-theoretic rough set decision model.", "id": 2406, "question": "What features do they extract?", "title": "Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets" }, { "answers": [ "" ], "context": "A definition of meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”.
The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”) and by behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.", "id": 2407, "question": "What do they use as a metric for finding hot spots in meetings?", "title": "Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings" }, { "answers": [ "" ], "context": "The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.", "id": 2408, "question": "Is this approach compared to some baseline?", "title": "Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings" }, { "answers": [ "" ], "context": "As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ... 75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.", "id": 2409, "question": "How big is the ICSI meeting corpus?", "title": "Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings" }, { "answers": [ "" ], "context": "In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data.
Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.", "id": 2410, "question": "What annotations are available in the ICSI meeting corpus?", "title": "Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings" }, { "answers": [ "" ], "context": "Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts BIBREF0 , BIBREF1 . In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone). The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI. However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "id": 2411, "question": "Is such bias caused by bad annotation?", "title": "Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference" }, { "answers": [ "" ], "context": "This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data:", "id": 2412, "question": "How do they determine similar environments for fragments in their data augmentation scheme?", "title": "Good-Enough Compositional Data Augmentation" }, { "answers": [ "" ], "context": "Recent years have seen tremendous success at natural language transduction and generation tasks using black-box function approximators, especially recurrent BIBREF9 and attentional BIBREF10 neural models. With enough training data, these models are often more accurate than approaches built on traditional tools from the computational linguistics literature—formal models like regular transducers or context-free grammars BIBREF11 can be brittle and challenging to efficiently infer from large datasets.", "id": 2413, "question": "Do they experiment with language modeling on large datasets?", "title": "Good-Enough Compositional Data Augmentation" }, { "answers": [ "Answer with content missing: (Applications section) We use Wikipedia articles\nin five languages\n(Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams\net al. (2017).\nSelect:\nKinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English" ], "context": "Consider again the example in fig:teaser. Our data augmentation protocol aims to discover substitutable sentence fragments (highlighted), with the fact that a pair of fragments appears in some common sub-sentential environment (underlined) taken as evidence that the fragments belong to a common category.
To generate new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment.", "id": 2414, "question": "Which languages do they test on?", "title": "Good-Enough Compositional Data Augmentation" }, { "answers": [ "deciding publisher partisanship, risk of annotator bias because of the short description text provided to annotators" ], "context": "In a survey across 38 countries, the Pew Research Center reported that the global public opposed partisanship in news media BIBREF0 . It is, however, challenging to assess the partisanship of news articles on a large scale. We thus made an effort to create a dataset of articles annotated with political partisanship so that content analysis systems can benefit from it.", "id": 2415, "question": "What limitations are mentioned?", "title": "DpgMedia2019: A Dutch News Dataset for Partisanship Detection" }, { "answers": [ "" ], "context": "DpgMedia2019 is a Dutch dataset that was collected from the publications within DPG Media. We took 11 publishers in the Netherlands for the dataset. These publishers include 4 national publishers, Algemeen Dagblad (AD), de Volkskrant (VK), Trouw, and Het Parool, and 7 regional publishers, de Gelderlander, Tubantia, Brabants Dagblad, Eindhovens Dagblad, BN/De Stem, PZC, and de Stentor. The regional publishers are collectively called Algemeen Dagblad Regionaal (ADR). A summary of the dataset is shown in Table TABREF3 .", "id": 2416, "question": "What examples of applications are mentioned?", "title": "DpgMedia2019: A Dutch News Dataset for Partisanship Detection" }, { "answers": [ "" ], "context": "To collect the articles, we used an internal database that stores all articles written by journalists and ready to be published. From the database, we queried all articles that were published between 2017 and 2019. We filtered the articles to exclude advertisements. We also filtered on the main sections so that the articles were not published under the sports and entertainment sections, which we assumed to be less political. After collecting, we found that a lot of the articles were published by several publishers; in particular, a large overlap existed between AD and ADR. To deal with the problem without losing many articles, we decided that articles that appeared in both AD and its regional publications belonged to AD. Therefore, articles were processed in the following steps:", "id": 2417, "question": "Did they crowdsource the annotations?", "title": "DpgMedia2019: A Dutch News Dataset for Partisanship Detection" }, { "answers": [ "" ], "context": "Task-oriented language grounding refers to the process of extracting semantically meaningful representations of language by mapping it to visual elements and actions in the environment in order to perform the task specified by the instruction BIBREF0.", "id": 2418, "question": "Why do they conclude that the usage of Gated-Attention provides no competitive advantage over concatenation in this setting?", "title": "Task-Oriented Language Grounding for Language Input with Multiple Sub-Goals of Non-Linear Order" }, { "answers": [ "" ], "context": "Similar Case Matching (SCM) plays a major role in the legal system, especially in common law systems. The most similar cases in the past determine the judgment results of cases in common law systems. As a result, legal professionals often spend much time finding and judging similar cases to prove fairness in judgment.
As automatically finding similar cases can benefit the legal system, we select SCM as one of the tasks of CAIL2019.", "id": 2419, "question": "What was the best team's system?", "title": "CAIL2019-SCM: A Dataset of Similar Case Matching in Legal Domain" }, { "answers": [ "CNN, LSTM, BERT" ], "context": "We first define the task of CAIL2019-SCM here. The input of CAIL2019-SCM is a triplet $(A,B,C)$, where $A,B,C$ are fact descriptions of three cases. Here we define a function $sim$ which is used for measuring the similarity between two cases. Then the task of CAIL2019-SCM is to predict whether $sim(A,B)>sim(A,C)$ or $sim(A,C)>sim(A,B)$.", "id": 2420, "question": "What are the baselines?", "title": "CAIL2019-SCM: A Dataset of Similar Case Matching in Legal Domain" }, { "answers": [ "No feature is given, only a discussion that semantic features are used in practice and that it is yet to be discovered how to embed that knowledge into a statistical decision theory framework." ], "context": "Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models.", "id": 2421, "question": "What semantic features help in detecting whether a piece of text is genuine or generated?", "title": "Limits of Detecting Text Generated by Large-Scale Language Models" }, { "answers": [ "" ], "context": "Consider a language $L$ like English, which has tokens drawn from a finite alphabet $\mathcal {A}$; tokens can be letters, words, or other such symbols. A language model assigns probabilities to sequences of tokens $(a_1,a_2,\ldots ,a_m)$ so the more likely a sequence is in $L$, the greater its probability. Language models discussed in Sec. SECREF1 estimate this probability $Q$ as a product of each token's probability $q$ given its preceding tokens: $Q(a_1,a_2,\ldots ,a_m) = \prod _{i=1}^{m} q(a_i \mid a_1,\ldots ,a_{i-1})$.", "id": 2422, "question": "Which language models generate text that can be easier to classify as genuine or generated?", "title": "Limits of Detecting Text Generated by Large-Scale Language Models" }, { "answers": [ "It is not completely valid for natural languages because of the diversity of language - this is called the smoothing requirement." ], "context": "Recall that the distribution of authentic text is denoted $P$ and the distribution of text generated by the language model is $Q$.
Suppose we have access to $n$ tokens of generated text from the language model, which we call $Y_1, Y_2, Y_3, \ldots , Y_n$. We can then formalize a hypothesis test in which the observed tokens are attributed either to the authentic-text distribution $P$ or to the language-model distribution $Q$.", "id": 2423, "question": "Is the assumption that natural language is stationary and ergodic valid?", "title": "Limits of Detecting Text Generated by Large-Scale Language Models" }, { "answers": [ "DocQA, SAN, QANet, ASReader, LM, Random Guess" ], "context": "", "id": 2424, "question": "Which models do they try out?", "title": "ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension" }, { "answers": [ "" ], "context": "Speech processing enables natural communication with smartphones or smart home assistants, e.g., Amazon Echo, Google Home. However, continuously performing speech recognition is not energy-efficient and would drain the batteries of smart devices. Instead, most speech recognition systems passively listen for utterances of certain wake words such as “Ok Google\", “Hey Siri\", “Alexa\", etc. to trigger the continuous speech recognition system on demand. This task is referred to as keyword spotting (KWS). There are also uses of KWS where a few simple speech commands (e.g. “on\", “off\") are enough to interact with a device, such as a voice-controlled light bulb.", "id": 2425, "question": "Do they compare execution time of their model against other models?", "title": "Small-Footprint Keyword Spotting on Raw Audio Data with Sinc-Convolutions" }, { "answers": [ "" ], "context": "Recently, CNNs have been successfully applied to KWS BIBREF1, BIBREF2, BIBREF3. Zhang et al. evaluated different neural network architectures (such as CNNs, LSTMs, GRUs) in terms of accuracy, computational operations and memory footprint as well as their deployment on embedded hardware BIBREF1. They achieved their best results using a CNN with DSConvs. Tang et al. explored the use of Deep Residual Networks with dilated convolutions to achieve a high accuracy of $95.8\%$ BIBREF2, while keeping the number of parameters comparable to BIBREF1. Choi et al. build on this work as they also use a ResNet-inspired architecture. Instead of using 2D convolution over a time-frequency representation of the data, they convolve along the time dimension and treat the frequency dimension as channels BIBREF3.", "id": 2426, "question": "What is the memory footprint decrease of their model in comparison to other models?", "title": "Small-Footprint Keyword Spotting on Raw Audio Data with Sinc-Convolutions" }, { "answers": [ "" ], "context": "Any finite training set is consistent with multiple generalizations. Therefore, the way that a learner generalizes to unseen examples depends not only on the training data but also on properties of the learner. Suppose a learner is told that a blue triangle is an example of a blick. A learner preferring shape-based generalizations would conclude that blick means “triangle,” while a learner preferring color-based generalizations would conclude that blick means “blue object” BIBREF0. Factors that guide a learner to choose one generalization over another are called inductive biases.", "id": 2427, "question": "What architectural factors were investigated?", "title": "Does syntax need to grow on trees?
Sources of hierarchical inductive bias in sequence-to-sequence networks" }, { "answers": [ "" ], "context": "Recent work has shown evidence of substantial bias in machine learning systems, which is typically a result of bias in the training data. This includes both supervised BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 and unsupervised natural language processing systems BIBREF4 , BIBREF5 , BIBREF6 . Machine learning models are currently being deployed in the field to detect hate speech and abusive language on social media platforms including Facebook, Instagram, and YouTube. The aim of these models is to identify abusive language that directly targets certain individuals or groups, particularly people belonging to protected categories BIBREF7 . Bias may reduce the accuracy of these models, and at worst, will mean that the models actively discriminate against the same groups they are designed to protect.", "id": 2428, "question": "Can any other bias be detected?", "title": "Racial Bias in Hate Speech and Abusive Language Detection Datasets" }, { "answers": [ "" ], "context": "Representing the meanings of words is a fundamental task in Natural Language Processing (NLP). One popular approach to represent the meaning of a word is to embed it in some fixed-dimensional vector space. In contrast to sparse and high-dimensional counting-based distributional word representation methods that use co-occurring contexts of a word as its representation, dense and low-dimensional prediction-based distributed word representations have obtained impressive performances in numerous NLP tasks such as sentiment classification and machine translation. Several distributed word embedding learning methods based on different learning strategies have been proposed.", "id": 2429, "question": "What is the meta-embedding method introduced in this paper?", "title": "Think Globally, Embed Locally --- Locally Linear Meta-embedding of Words" }, { "answers": [ "" ], "context": "Our main goal is to develop a monaural conversation transcription system that can not only perform automatic speech recognition (ASR) of multiple talkers but also determine who spoke when, known as speaker diarization BIBREF0, BIBREF1. For both ASR and speaker diarization, the main difficulty comes from speaker overlaps. For example, a speaker-overlap ratio of about 15% was reported in real meeting recordings BIBREF2. For such overlapped speech, neither conventional ASR nor speaker diarization provides a result with sufficient accuracy. It is known that mixing two speech signals significantly degrades ASR accuracy BIBREF3, BIBREF4, BIBREF5. In addition, most conventional speaker diarization techniques, such as clustering of speech partitions (e.g. BIBREF0, BIBREF6, BIBREF7, BIBREF8, BIBREF9), assume no speaker overlaps and work only when that assumption holds. Due to these difficulties, it is still very challenging to perform ASR and speaker diarization for monaural recordings of conversations.", "id": 2430, "question": "How long are dialogue recordings used for evaluation?", "title": "Simultaneous Speech Recognition and Speaker Diarization for Monaural Dialogue Recordings with Target-Speaker Acoustic Models" }, { "answers": [ "" ], "context": "This paper combines grammar induction (Dunn, 2018a, 2018b, 2019) and text classification (Joachims, 1998) to model syntactic variation across national varieties of English.
This classification-based approach is situated within the task of dialect identification (Section 2) and evaluated against other baselines for the task (Sections 7 and 8). But the focus is modelling syntactic variation on a global scale using corpus data. On the one hand, the problem is to use a model of syntactic preferences to predict an author's dialect membership (Dunn, 2018c). On the other hand, the problem is to take a spatially-generic grammar of English that is itself learned from raw text (cf. Zeman et al., 2017; Zeman et al., 2018) and adapt that grammar using dialect identification as an optimization task: which constructions are more likely to occur in a specific regional variety?", "id": 2431, "question": "What do the models that they compare predict?", "title": "Modeling Global Syntactic Variation in English Using Dialect Classification" }, { "answers": [ "" ], "context": "The availability of cross-language parallel corpora is one of the bases of current Statistical and Neural Machine Translation systems (e.g. SMT and NMT). Acquiring a high-quality parallel corpus that is large enough to train MT systems, especially NMT ones, is not a trivial task, since it usually demands human curation and correct alignment. In light of that, the automated creation of parallel corpora from freely available resources is extremely important in Natural Language Processing (NLP), enabling the development of accurate MT solutions. Many parallel corpora are already available, some with bilingual alignment, while others are multilingually aligned, with 3 or more languages, such as Europarl BIBREF0 , from the European Parliament, JRC-Acquis BIBREF1 , from the European Commission, and OpenSubtitles BIBREF2 , from movie subtitles.", "id": 2432, "question": "What SMT models did they look at?", "title": "A Parallel Corpus of Theses and Dissertations Abstracts" }, { "answers": [ "" ], "context": "In this section, we detail the information retrieved from the CAPES website, the filtering process, the sentence alignment, and the evaluation experiments. An overview of the steps employed in this article is shown in Figure FIGREF1 .", "id": 2433, "question": "Which NMT models did they experiment with?", "title": "A Parallel Corpus of Theses and Dissertations Abstracts" }, { "answers": [ "" ], "context": "Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention.", "id": 2434, "question": "How big are the PIE datasets obtained from dictionaries?", "title": "Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions" }, { "answers": [ "" ], "context": "The ambiguity of phrases like wake up and smell the coffee poses a terminological problem.
Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the former context, and a literal phrase in the latter context.", "id": 2435, "question": "What complementary PIE extraction methods are used to increase reliability further?", "title": "Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions" }, { "answers": [ "" ], "context": "This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms.", "id": 2436, "question": "Are the automatically extracted PIEs subjected to human evaluation?", "title": "Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions" }, { "answers": [ "" ], "context": "There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7.", "id": 2437, "question": "What dictionaries are used for automatic extraction of PIEs?", "title": "Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions" }, { "answers": [ "" ], "context": "Machine translation (MT) research is biased towards language pairs including English due to the ease of collecting parallel corpora. Translation between non-English languages, e.g., French$\rightarrow $German, is usually done with pivoting through English, i.e., translating French (source) input to English (pivot) first with a French$\rightarrow $English model, whose output is then translated to German (target) with an English$\rightarrow $German model BIBREF0, BIBREF1, BIBREF2. However, pivoting requires doubled decoding time and the translation errors are propagated or expanded via the two-step process.", "id": 2438, "question": "Are experiments performed with any other pair of languages, and how did the proposed method perform compared to other models?", "title": "Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages" }, { "answers": [ "" ], "context": "In this section, we first review existing approaches to leverage a pivot language in low-resource/zero-resource MT. They can be divided into three categories:", "id": 2439, "question": "Is the pivot language used in the experiments English or some other language?", "title": "Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages" }, { "answers": [ "" ], "context": "Our methods are based on a simple transfer learning principle for NMT, adjusted to a usual data condition for non-English language pairs: lots of source-pivot and pivot-target parallel data, little (low-resource) or no (zero-resource) source-target parallel data.
Here are the core steps of the plain transfer (Figure FIGREF10):", "id": 2440, "question": "What are the multilingual models that were outperformed in the experiments?", "title": "Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages" }, { "answers": [ "" ], "context": "Image captioning—the task of providing a natural language description of the content within an image—lies at the intersection of computer vision and natural language processing. As both of these research areas are highly active and have experienced many recent advances, the progress in image captioning has naturally followed suit. On the computer vision side, improved convolutional neural network and object detection architectures have contributed to improved image captioning systems. On the natural language processing side, more sophisticated sequential models, such as attention-based recurrent neural networks, have similarly resulted in more accurate image caption generation.", "id": 2441, "question": "What are the common captioning metrics?", "title": "Image Captioning: Transforming Objects into Words" }, { "answers": [ "" ], "context": "", "id": 2442, "question": "Which English domains do they evaluate on?", "title": "Semi-Supervised Methods for Out-of-Domain Dependency Parsing" }, { "answers": [ "" ], "context": "Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it can aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5.", "id": 2443, "question": "What is the road exam metric?", "title": "Rethinking Exposure Bias In Language Modeling" }, { "answers": [ "TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN." ], "context": "As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix.", "id": 2444, "question": "What are the competing models?", "title": "Rethinking Exposure Bias In Language Modeling" }, { "answers": [ "The relation R(x,y) is mapped onto a question q whose answer is y" ], "context": "Relation extraction systems populate knowledge bases with facts from an unstructured text corpus. When the types of facts (relations) are predefined, one can use crowdsourcing BIBREF0 or distant supervision BIBREF1 to collect examples and train an extraction model for each relation type. However, these approaches are incapable of extracting relations that were neither specified in advance nor observed during training.
In this paper, we propose an alternative approach for relation extraction, which can potentially extract facts of new types that were neither specified nor observed a priori.", "id": 2445, "question": "How is the input triple translated to a slot-filling task?", "title": "Zero-Shot Relation Extraction via Reading Comprehension" }, { "answers": [ "" ], "context": "Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels BIBREF1 have achieved state-of-the-art results for several speech recognition tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, and therefore to improve their efficiency.", "id": 2446, "question": "Is the model compared against state-of-the-art models on these datasets?", "title": "Multi-scale Octave Convolutions for Robust Speech Recognition" }, { "answers": [ "" ], "context": "An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1.", "id": 2447, "question": "How is the octave convolution concept extended to multiple resolutions and octaves?", "title": "Multi-scale Octave Convolutions for Robust Speech Recognition" }, { "answers": [ "" ], "context": "The modelling of natural language relies on the idea that languages are compositional, i.e. that the meaning of a sentence is a function of the meanings of the words in the sentence, as proposed by BIBREF0 . Whether or not this principle tells the whole story, it is certainly important as we undoubtedly manage to create and understand novel combinations of words. Fuzzy set theory has long been considered a useful framework for the modelling of natural language expressions, as it provides a functional calculus for concept combination BIBREF1 , BIBREF2 .", "id": 2448, "question": "Does this paper address the variation among English dialects regarding these hedges?", "title": "A Label Semantics Approach to Linguistic Hedges" }, { "answers": [ "" ], "context": "Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long time span and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4.", "id": 2449, "question": "On which dataset is the model trained?", "title": "Behavior Gated Language Models" }, { "answers": [ "pre-trained to identify the presence of behavior from a sequence of words using the Couples Therapy Corpus" ], "context": "In this section, we first describe a typical RNN-based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information.
Finally, the proposed architecture of the language model, which incorporates the behavioral information through a gating mechanism, is presented.", "id": 2450, "question": "How is the module that analyzes the behavioral state trained?", "title": "Behavior Gated Language Models" }, { "answers": [ "The model does not add new relations to the knowledge graph." ], "context": "Knowledge Graphs (KGs) are a special type of information network that represents knowledge using RDF-style triples $\langle h, r, t \rangle$ , where $h$ represents some head entity and $r$ represents some relationship that connects $h$ to some tail entity $t$ . In this formalism, a statement like “Springfield is the capital of Illinois” can be represented as $\langle$ Springfield, capitalOf, Illinois $\rangle$ . Recently, a variety of KGs, such as DBpedia BIBREF0 and ConceptNet BIBREF1 , have been curated in the service of fact checking BIBREF2 , question answering BIBREF3 , entity linking BIBREF4 , and many other tasks BIBREF5 . Despite their usefulness and popularity, KGs are often noisy and incomplete. For example, DBpedia, which is generated from Wikipedia's infoboxes, contains $4.6$ million entities, but half of these entities have fewer than 5 relationships.", "id": 2451, "question": "Can the model add new relations to the knowledge graph, or just new entities?", "title": "Open-World Knowledge Graph Completion" }, { "answers": [ "" ], "context": "1.1em", "id": 2452, "question": "How large is the dataset?", "title": "The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources" }, { "answers": [ "" ], "context": "There is an increasing need to interpret and act upon rumours spreading quickly through social media during breaking news, where new reports are released piecemeal and often have an unverified status at the time of posting. Previous research has posited the damage that the diffusion of false rumours can cause in society, and that corrections issued by news organisations or state agencies such as the police may not necessarily achieve the desired effect sufficiently quickly BIBREF0 , BIBREF1 . Being able to determine the accuracy of reports is therefore crucial in these scenarios. However, the veracity of rumours in circulation is usually hard to establish BIBREF2 , since as many views and testimonies as possible need to be assembled and examined in order to reach a final judgement. Examples of rumours that were later disproven, after being widely circulated, include a 2010 earthquake in Chile, where rumours of a volcano eruption and a tsunami warning in Valparaiso spawned on Twitter BIBREF3 . Another example is the England riots in 2011, where false rumours claimed that rioters were going to attack Birmingham's Children's Hospital and that animals had escaped from London Zoo BIBREF4 .", "id": 2453, "question": "Why is a Gaussian process an especially appropriate method for this classification problem?", "title": "Using Gaussian Processes for Rumour Stance Classification in Social Media" }, { "answers": [ "" ], "context": "0pt1ex1ex", "id": 2454, "question": "Do the authors do manual evaluation?", "title": "Topic Spotting using Hierarchical Networks with Self Attention" }, { "answers": [ "" ], "context": "There has been growing research interest in training dialog systems with end-to-end models BIBREF0 , BIBREF1 , BIBREF2 in recent years.
These models are directly trained on past dialogs, without assumptions on the domain or dialog state structure BIBREF3 . One of their limitations is that they select responses only according to the content of the conversation and are thus incapable of adapting to users with different personalities. Specifically, common issues with such content-based models include: (i) the inability to adjust language style flexibly BIBREF4 ; (ii) the lack of a dynamic conversation policy based on the interlocutor's profile BIBREF5 ; and (iii) the incapability of handling ambiguities in user requests.", "id": 2455, "question": "What datasets did they use?", "title": "Learning Personalized End-to-End Goal-Oriented Dialog" }, { "answers": [ "The dataset contains about 590 tweets about DDoS attacks." ], "context": "Denial of Service attacks are explicit attempts to stop legitimate users from accessing specific network systems BIBREF0. Attackers try to exhaust network resources like bandwidth, or server resources like CPU and memory. As a result, the targeted system slows down or becomes unusable BIBREF1. Online service providers like Bank of America, Facebook and Reddit are often the target of such attacks, and the frequency and scale of those attacks have increased rapidly in recent years BIBREF2.", "id": 2456, "question": "Do Twitter users tend to tweet about the DoS attack when it occurs? How much data supports this assumption?", "title": "Determining the Scale of Impact from Denial-of-Service Attacks in Real Time Using Twitter" }, { "answers": [ "Tweets related to a Bank of America DDoS attack were used as training data. The test datasets contain tweets related to attacks on Bank of America, PNC and Wells Fargo." ], "context": "Denial of Service (DoS) attacks are a major threat to Internet security, and detecting them has been a core task of the security community for more than a decade. There exists a significant amount of prior work in this domain. BIBREF9, BIBREF10, BIBREF11 all introduced different methods to tackle this problem. The major difference between this work and previous ones is that instead of working on the data of the network itself, we use the reactions of users on social networks to identify an intrusion.", "id": 2457, "question": "What is the training and test data used?", "title": "Determining the Scale of Impact from Denial-of-Service Attacks in Real Time Using Twitter" }, { "answers": [ "" ], "context": "Figure FIGREF4 outlines the entire pipeline of the model, from preprocessing tweets to modeling them and finally detecting/ranking future tweets that are related to a DoS issue and measuring its severity.", "id": 2458, "question": "Was the performance of the weakly-supervised model compared to the performance of a supervised model?", "title": "Determining the Scale of Impact from Denial-of-Service Attacks in Real Time Using Twitter" }, { "answers": [ "" ], "context": "Over the last couple of years, the MeToo movement has facilitated several discussions about sexual abuse. Social media, especially Twitter, was one of the leading platforms where people shared their experiences of sexual harassment, expressed their opinions, and also offered support to victims. A large portion of these tweets was tagged with the dedicated hashtag #MeToo, and it was one of the main trending topics in many countries.
The movement went viral on social media and the hashtag was used over 19 million times in a year.", "id": 2459, "question": "Do the tweets come from a specific region?", "title": "#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement" }, { "answers": [ "" ], "context": "Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allow our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted.", "id": 2460, "question": "Did they experiment with the corpus?", "title": "Introducing RONEC -- the Romanian Named Entity Corpus" }, { "answers": [ "current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials." ], "context": "We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities:", "id": 2461, "question": "What writing styles are present in the corpus?", "title": "Introducing RONEC -- the Romanian Named Entity Corpus" }, { "answers": [ "" ], "context": "ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at the word level with morphosyntactic information (MSD annotations).", "id": 2462, "question": "How did they determine the distinct classes?", "title": "Introducing RONEC -- the Romanian Named Entity Corpus" }, { "answers": [ "" ], "context": "Recently, character composition models have shown great success in many NLP tasks, mainly because of their robustness in dealing with out-of-vocabulary (OOV) words by capturing sub-word information. Among the character composition models, bidirectional long short-term memory (LSTM) models and convolutional neural networks (CNN) are widely applied in many tasks, e.g. part-of-speech (POS) tagging BIBREF0 , BIBREF1 , named entity recognition BIBREF2 , language modeling BIBREF3 , BIBREF4 , machine translation BIBREF5 and dependency parsing BIBREF6 , BIBREF7 .", "id": 2463, "question": "Do they jointly tackle multiple tagging problems?", "title": "A General-Purpose Tagger with Convolutional Neural Networks" }, { "answers": [ "" ], "context": "Our proposed CNN tagger has two main components: the character composition model and the context encoding model. Both components are essentially CNN models, capturing different levels of information: the first CNN captures morphological information from character n-grams, while the second captures contextual information from word n-grams. Figure FIGREF2 shows a diagram of both models of the tagger.", "id": 2464, "question": "How many parameters does their CNN have?", "title": "A General-Purpose Tagger with Convolutional Neural Networks" }, { "answers": [ "" ], "context": "The character composition model is similar to Yu:2017, where several convolution filters are used to capture character n-grams of different sizes.
The outputs of each convolution filter are fed through a max pooling layer, and the pooling outputs are concatenated to represent the word.", "id": 2465, "question": "How do they confirm that their model works well on out-of-vocabulary problems?", "title": "A General-Purpose Tagger with Convolutional Neural Networks" }, { "answers": [ "" ], "context": "With the popularity of shared videos, social networks, online courses, etc., the quantity of multimedia or spoken content is growing far beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even though these materials are more attractive to humans than plain text information. Hence, it would be great if machines could automatically listen to and understand spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. As an initial task, we would like the machine to listen to and understand an audio story, and answer the questions related to that audio content. The TOEFL listening comprehension test is designed for English learners whose native language is not English. This paper reports how well today's machines can perform on such a test.", "id": 2466, "question": "What approach does this work propose for the new task?", "title": "Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine" }, { "answers": [ "" ], "context": "In this paper, we develop and propose a new task of machine comprehension of spoken content which, to our knowledge, has never been addressed before. We take the TOEFL listening comprehension test as a corpus for this work. TOEFL is an English examination which tests the knowledge and skills of academic English for English learners whose native language is not English. In this examination, the subjects first listen to an audio story of around five minutes and then answer several questions according to that story. The stories are related to college life, such as a conversation between a student and a professor or a lecture in class. Each question has four choices, of which only one is correct. A real example from the TOEFL examination is shown in Fig. 1 . The upper part is the manual transcription of a small part of the audio story. The questions and four choices are listed too. The correct choice for the question in Fig. 1 is choice A. The questions in TOEFL are not simple even for a human with relatively good knowledge, because the question cannot be answered by simply matching the words in the question and in the choices with those in the story, and key information is usually buried under many irrelevant utterances. To answer questions like “Why does the student go to the professor's office?\", the listener has to understand the whole audio story and draw inferences to answer the question correctly. As a result, this task is believed to be very challenging for state-of-the-art spoken language understanding technologies.", "id": 2467, "question": "What is the new task proposed in this work?", "title": "Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine" }, { "answers": [ "" ], "context": "Several successful efforts have led to publishing huge RDF (Resource Description Framework) datasets on Linked Open Data (LOD) such as DBpedia BIBREF0 and LinkedGeoData BIBREF1 .
However, these sources are limited to either structured or semi-structured data. Meanwhile, a significant portion of Web content consists of textual data from social network feeds, blogs, news, logs, etc. Although the Natural Language Processing (NLP) community has developed approaches to extract essential information from plain text (e.g., BIBREF2 , BIBREF3 , BIBREF4 ), there is no convenient support for knowledge graph construction. Further, several lexical analysis based approaches extract only a limited form of metadata that is inadequate for supporting applications such as question answering systems. For example, the query “Give me the list of reported events by BBC and CNN about the number of killed people in Yemen in the last four days”, about a recent event (containing restrictions such as location and time) poses several challenges to the current state of Linked Data and relevant information extraction techniques. The query seeks “fresh” information (e.g., the last four days) whereas the current version of Linked Data is encyclopedic and historical, and does not contain appropriate information present in a temporally annotated data stream. Further, the query specifies provenance (e.g., published by BBC and CNN) that might not always be available on Linked Data. Crucially, the example query asks about a specific type of event (i.e., reports of war causing people to be killed) with multiple arguments (e.g., in this case, the location argument Yemen). In spite of recent progress BIBREF5 , BIBREF6 , BIBREF7 , there is still no standardized mechanism for (i) selecting a background data model, (ii) recognizing and classifying specific event types, (iii) identifying and labeling associated arguments (i.e., entities as well as relations), (iv) interlinking events, and (v) representing events. In fact, most of the state-of-the-art solutions are ad hoc and limited. In this paper, we provide a systematic pipeline for developing a knowledge graph of interlinked events. As a proof-of-concept, we show a case study of headline news on Twitter. The main contributions of this paper include:", "id": 2468, "question": "Which news organisations are the headlines sourced from?", "title": "Principles for Developing a Knowledge Graph of Interlinked Events from News Headlines on Twitter" }, { "answers": [ "high-order representation of a relation, loss gradient of relation meta" ], "context": "A knowledge graph is composed of a large number of triples in the form of $(head\\; entity,\\, relation,\\, tail\\; entity)$ ( $(h, r, t)$ in short), encoding knowledge and facts about the world. Many KGs have been proposed BIBREF0 , BIBREF1 , BIBREF2 and applied to various applications BIBREF3 , BIBREF4 , BIBREF5 .", "id": 2469, "question": "What meta-information is being transferred?", "title": "Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs" }, { "answers": [ "NELL-One, Wiki-One" ], "context": "One target of MetaR is to learn representations of entities fitting the few-shot link prediction task, and its learning framework is inspired by knowledge graph embedding methods. Furthermore, using the loss gradient as one kind of meta information is inspired by MetaNet BIBREF12 and MAML BIBREF13 , which explore methods for few-shot learning via meta-learning. 
From these two points, we regard knowledge graph embedding and meta-learning as the two main kinds of related work.", "id": 2470, "question": "What datasets are used to evaluate the approach?", "title": "Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs" }, { "answers": [ "" ], "context": "Illiteracy is one of the most serious and pervasive problems all over the world. According to the U. S. Department of Education, the National Center for Education Statistics, approximately 32 million adults in the United States are not able to read, which is about 14% of the entire adult population BIBREF0 . Additionally, 44% of the 2.4 million students in the U. S. federally funded adult education programs are English as a second language (ESL) students, and about 185,000 of them are at the lowest ESL level, beginning literacy BIBREF1 . While low-literate adults in general lack the ability to read and to understand text, low-literate ESL adult learners in particular face the dual challenge of developing basic literacy skills, which include decoding, comprehending, and producing print, along with English proficiency, while representing different nationalities and cultural backgrounds BIBREF2 . Hence, illiteracy is a significant barrier that causes a person to struggle in every aspect of his or her daily life.", "id": 2471, "question": "Does their solution involve connecting images and text?", "title": "SimplerVoice: A Key Message&Visual Description Generator System for Illiteracy" }, { "answers": [ "" ], "context": "In the field of ABE and SLA, researchers have conducted a number of studies to assist low-literate learners in their efforts to acquire literacy and language skills through reading interventions, and by providing specific instruction through local education agencies, community colleges and educational organizations BIBREF3 , BIBREF1 .", "id": 2472, "question": "Which model do they use to generate key messages?", "title": "SimplerVoice: A Key Message&Visual Description Generator System for Illiteracy" }, { "answers": [ "" ], "context": "Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories.", "id": 2473, "question": "What experiments do they perform to demonstrate that their approach leads to more accurate region-based representations?", "title": "Modelling Semantic Categories using Conceptual Neighborhood" }, { "answers": [ "" ], "context": "In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding for a word pair $(i,c)$ whether $i$ denotes an instance of the category $c$, which they refer to as instantiation. They treat this problem as a binary classification problem, where e.g. the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. 
Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair $(i,c)$. They also experimented with an approach that instead models a given category as the average of the word vectors of its known instances, and found that this led to better results.", "id": 2474, "question": "How do they identify conceptual neighbours?", "title": "Modelling Semantic Categories using Conceptual Neighborhood" }, { "answers": [ "" ], "context": "The performance of state-of-the-art MT systems is not perfect; thus, human intervention is still required to correct machine-translated texts into publishable-quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5.", "id": 2475, "question": "What experiment result led to the conclusion that reducing the number of layers of the decoder does not matter much?", "title": "The Transference Architecture for Automatic Post-Editing" }, { "answers": [ "Compared to the results from reducing the number of layers in the decoder, the BLEU score was 69.93, lower by less than 1% on test2016 and by 0.2% on test2017. In terms of TER, it had a higher score by 0.7 on test2016 and 0.1 on test2017. " ], "context": "Recent advances in APE research are directed towards neural APE, which was first proposed by Pal:2016:ACL and junczysdowmunt-grundkiewicz:2016:WMT for the single-source APE scenario which does not consider $src$, i.e. $mt \\rightarrow pe$. In their work, junczysdowmunt-grundkiewicz:2016:WMT also generated a large synthetic training dataset through back translation, which we also use as additional training data. Exploiting source information as an additional input can help neural APE to disambiguate corrections applied at each time step; this naturally leads to multi-source APE ($\\lbrace src, mt\\rbrace \\rightarrow pe$). A multi-source neural APE system can be configured either by using a single encoder that encodes the concatenation of $src$ and $mt$ BIBREF9 or by using two separate encoders for $src$ and $mt$ and passing the concatenation of both encoders' final states to the decoder BIBREF10. A few approaches to multi-source neural APE were proposed in the WMT 2017 APE shared task. Junczysdowmunt:2017:WMT combine both $mt$ and $src$ in a single neural architecture, exploring different combinations of attention mechanisms including soft attention and hard monotonic attention. Chatterjee-EtAl:2017:WMT2 built upon the two-encoder architecture of multi-source models BIBREF10 by means of concatenating both weighted contexts of encoded $src$ and $mt$. 
Varis-bojar:2017:WMT compared two multi-source models, one using a single encoder with concatenation of $src$ and $mt$ sentences, and a second one using two character-level encoders for $mt$ and $src$ along with a character-level decoder.", "id": 2476, "question": "How much is performance hurt when using too small a number of layers in the encoder?", "title": "The Transference Architecture for Automatic Post-Editing" }, { "answers": [ "" ], "context": "We propose a multi-source transformer model called transference ($\\lbrace src,mt\\rbrace _{tr} \\rightarrow pe$, Figure FIGREF1), which takes advantage of both the encodings of $src$ and $mt$ and attends over a combination of both sequences while generating the post-edited sentence. The second encoder, $enc_{src \\rightarrow mt}$, makes use of the first encoder $enc_{src}$ and a sub-encoder $enc_{mt}$ for considering $src$ and $mt$. Here, the $enc_{src}$ encoder and the $dec_{pe}$ decoder are equivalent to the original transformer for neural MT. Our $enc_{src \\rightarrow mt}$ follows an architecture similar to the transformer's decoder, the difference being that no masked multi-head self-attention is used to process $mt$.", "id": 2477, "question": "What was the previous state-of-the-art model for automatic post-editing?", "title": "The Transference Architecture for Automatic Post-Editing" }, { "answers": [ "Multilingual Neural Machine Translation Models" ], "context": "Our primary goal is to learn meaning representations of sentences and sentence fragments by looking at the distributional information that is available in parallel corpora of human translations. The basic idea is to use translations into other languages as “semantic mirrors” of the original text, assuming that they represent the same meaning but with different symbols, wordings and linguistic structures. For this, we discard any meaning divergences that may happen in translation due to target-audience adaptation or other processes that may influence the semantics of the translated texts. We also assume that the material can be divided into meaningful and self-contained units, Bible verses in our case, and focus on a global data-driven model that can hopefully cope with instances that violate our assumptions.", "id": 2478, "question": "What neural machine translation models can learn in terms of transfer learning?", "title": "Emerging Language Spaces Learned From Massively Multilingual Corpora" }, { "answers": [ "" ], "context": "More and more NLP scholars are focusing on research on multi-party dialogues, such as multi-party dialogue discourse parsing and multi-party meeting summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the limited scale of the STAC dataset has constrained research on discourse parsing for multi-party dialogues. On the other hand, to our knowledge, there is no prior work on machine reading comprehension for multi-party dialogues. Considering the relevance between machine reading comprehension and discourse parsing, we annotate a dataset for both tasks of multi-party dialogue understanding.", "id": 2479, "question": "Did they experiment on the proposed task?", "title": "An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension" }, { "answers": [ "" ], "context": "Our dataset derives from the large-scale multi-party dialogue dataset, the Ubuntu Chat Corpus BIBREF6. 
The Ubuntu dataset is a large-scale multi-party dialogue corpus.", "id": 2480, "question": "Is annotation done manually?", "title": "An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension" }, { "answers": [ "" ], "context": "This section will explain how to annotate discourse structure in multi-party dialogues.", "id": 2481, "question": "How large is the proposed dataset?", "title": "An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension" }, { "answers": [ "" ], "context": "Analyzing and generating natural language texts requires capturing two important aspects of language: what is said and how it is said. In the literature, much more attention has been paid to studies on what is said. However, recently, capturing how it is said, such as stylistic variations, has also proven to be useful for natural language processing tasks such as classification, analysis, and generation BIBREF1 , BIBREF2 , BIBREF3 .", "id": 2482, "question": "How large is the dataset?", "title": "Unsupervised Learning of Style-sensitive Word Vectors" }, { "answers": [ "" ], "context": "The key idea is to extend the continuous bag of words (CBOW) BIBREF0 by distinguishing nearby contexts and wider contexts under the assumption that a style persists throughout every single utterance in a dialog. We elaborate on it in this section.", "id": 2483, "question": "How is the dataset created?", "title": "Unsupervised Learning of Style-sensitive Word Vectors" }, { "answers": [ "" ], "context": "Recurrent neural networks (RNNs) are among the most powerful models for natural language processing, speech recognition, question-answering systems and other problems with sequential data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . For complex tasks such as machine translation BIBREF5 or speech recognition BIBREF3 , modern RNN architectures incorporate a huge number of parameters. To use these models on portable devices with limited memory, such as smartphones, model compression is desired. A high compression level may also lead to an acceleration of RNNs. In addition, compression regularizes RNNs and helps to avoid overfitting.", "id": 2484, "question": "What is binary variational dropout?", "title": "Bayesian Sparsification of Recurrent Neural Networks" }, { "answers": [ "" ], "context": "Nowadays, DNNs have solved a great number of significant practical problems in various areas like computer vision BIBREF0 , BIBREF1 , audio BIBREF2 , BIBREF3 , natural language processing (NLP) BIBREF4 , BIBREF5 , etc. Due to this great success, DNN-based systems are widely deployed in the physical world, including in some security-sensitive tasks. However, Szegedy et al. BIBREF6 found that a crafted input with small perturbations could easily fool DNN models. Such inputs are called adversarial examples. With the development of theory and practice, the definitions of adversarial examples BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 have varied, but these definitions have two core properties in common: the perturbations are small, and they are able to fool DNN models. This naturally raises the question of why adversarial examples exist in DNNs. DNNs are probably vulnerable to adversarial examples because of their linear nature, an explanation given by Goodfellow et al. BIBREF7 after adversarial examples arose. 
Researchers therefore treat adversarial examples as a security problem and pay much attention to work on adversarial attacks and defenses BIBREF10 , BIBREF11 .", "id": 2485, "question": "Which strategies show the most promise in deterring these attacks?", "title": "Towards a Robust Deep Neural Network in Text Domain A Survey" }, { "answers": [ "" ], "context": "Current state-of-the-art models for speech recognition require vast amounts of transcribed audio data to attain good performance. In particular, end-to-end ASR models are more demanding in the amount of training data required when compared to traditional hybrid models. While obtaining a large amount of labeled data requires substantial effort and resources, it is much less costly to obtain abundant unlabeled data.", "id": 2486, "question": "What are baseline models on WSJ eval92 and LibriSpeech test-clean?", "title": "Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition" }, { "answers": [ "" ], "context": "Neural Machine Translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Unlike conventional Statistical Machine Translation (SMT) systems BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 which consist of multiple separately tuned components, NMT aims at building a single, large neural network to directly map input text to associated output text. Typical NMT models consist of two recurrent neural networks (RNNs), an encoder to read and encode the input text into a distributed representation and a decoder to generate translated text conditioned on the input representation BIBREF13 , BIBREF14 .", "id": 2487, "question": "Do they use the same architecture as LSTMs and GRUs, just replacing the recurrent unit with the LAU?", "title": "Deep Neural Machine Translation with Linear Associative Unit" }, { "answers": [ "" ], "context": "In recent years, graph neural networks (GNNs) have been applied to various fields of machine learning, including node classification BIBREF0 , relation classification BIBREF1 , molecular property prediction BIBREF2 , and few-shot learning BIBREF3 , achieving promising results on these tasks. These works have demonstrated GNNs' strong capacity for relational reasoning on graphs.", "id": 2488, "question": "So this paper turns unstructured text inputs to parameters that GNNs can read?", "title": "Graph Neural Networks with Generated Parameters for Relation Extraction" }, { "answers": [ "" ], "context": "In a seminal paper, Charles Hockett BIBREF0 identified duality of patterning as one of the core design features of human language. A language exhibits duality of patterning when it is organized at two distinct levels. At a first level, meaningless forms (typically referred to as phonemes) are combined into meaningful units (henceforth this property will be referred to as combinatoriality). For example, the English forms /k/, /a/, and /t/ are combined in different ways to obtain the three words /kat/, /akt/, and /tak/ (respectively written 'cat', 'act' and 'tack'). Because the individual forms in them are meaningless, these words have no relation in meaning in spite of being made of the same forms. This is a very important property, thanks to which all of the many words of the English lexicon can be obtained by relatively simple combinations of about forty phonemes. 
If phonemes had individual meaning, this degree of compactness would not be possible. At a second level, meaningful units (typically referred to as morphemes) are composed into larger units, the meaning of which is related to the individual meaning of the composing units (henceforth this property will be referred to as compositionality). For example, the meaning of the word 'boyfriend' is related to the meaning of the words 'boy' and 'friend' which compose it. The compositional level includes syntax as well. For example, the meaning of the sentence 'cats eat fishes' is related to the meaning of the words 'cats', 'eat', and 'fishes'. In this paper, for the sake of simplicity, we focus exclusively on the lexicon level. This should be considered a first step towards understanding the emergence of complex structures in languages.", "id": 2489, "question": "What other models are compared to the Blending Game?", "title": "On the emergence of syntactic structures: quantifying and modelling duality of patterning" }, { "answers": [ "" ], "context": "In this section we quantify the notion of duality of patterning as observed in real languages in order to provide suitable measures for combinatoriality and compositionality.", "id": 2490, "question": "What empirical data are the Blending Game predictions compared to?", "title": "On the emergence of syntactic structures: quantifying and modelling duality of patterning" }, { "answers": [ "Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus" ], "context": "Arabish is the romanization of Arabic Dialects (ADs) used for informal messaging, especially in social networks. This writing system provides an interesting ground for linguistic research, computational as well as sociolinguistic, mainly due to the fact that it is a spontaneous representation of the ADs, and because it is a linguistic phenomenon in constant expansion on the web. Despite such potential, little research has been dedicated to Tunisian Arabish (TA). In this paper we describe the work we carried out to develop a flexible and multi-purpose TA resource. This will include a TA corpus, together with some tools that could be useful for analyzing the corpus and for its extension with new data.", "id": 2491, "question": "How does the semi-automatic construction process work?", "title": "TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish Corpus" }, { "answers": [ "" ], "context": "In this section, we provide an overview of work done on automatic processing of TUN and TA. As briefly outlined above, many studies on TUN and TA aim at solving the lack of a standard orthography. The first Conventional Orthography for Dialectal Arabic (CODA) was for Egyptian Arabic BIBREF2 and it was used by bies2014transliteration for Egyptian Arabish transliteration into Arabic script. The CODA version for TUN (CODA TUN) was developed by DBLP:conf/lrec/ZribiBMEBH14, and was used in many studies, like boujelbane2015traitements. Such work presents research on automatic word recognition in TUN. Narrowing down to the specific field of TA, CODA TUN was used in masmoudi2015arabic to realize a TA-Arabic script conversion tool, implemented with a rule-based approach. The most extensive CODA is CODA*, a unified set of guidelines for 28 Arab city dialects BIBREF0. 
For the present research, CODA* is considered the most convenient guideline to follow due to its extensive applicability, which will support comparative studies of corpora in different ADs. As we already mentioned, there are few NLP tools available for Arabish processing in comparison to the number of NLP tools available for Arabic. Considering the lack of spelling conventions for Arabish, previous effort has focused on automatic transliteration from Arabish to Arabic script, e.g. chalabi2012romanized, darwish2013arabizi, and al2014automatic. These three works are based on a character-to-character mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model. A different method is presented in younes2018sequence, in which the authors present a sequence-to-sequence-based approach for TA-Arabic character transliteration in both directions BIBREF3, BIBREF4.", "id": 2492, "question": "Does the paper report translation accuracy for an automatic translation model for Tunisian to Arabish words?", "title": "TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish Corpus" }, { "answers": [ "" ], "context": "Our success as a social species depends on our ability to understand, and be understood by, different communicative partners across different contexts. Theory of mind—the ability to represent and reason about others' mental states—is considered to be the key mechanism that supports such context-sensitivity in our everyday social interactions. Being able to reason about what others see, want, and think allows us to make more accurate predictions about their future behavior in different contexts and adjust our own behaviors accordingly BIBREF0 . Over the past two decades, however, there has been sustained debate over the extent to which adults actually make use of theory of mind in communication.", "id": 2493, "question": "Did participants behave unexpectedly?", "title": "Speakers account for asymmetries in visual perspective so listeners don't have to" }, { "answers": [ "" ], "context": "How does an unscripted speaker change her communicative behavior when there is uncertainty about exactly what her partner can see? To address this question empirically, we randomly assigned participants to the roles of speaker and listener and paired them over the web to play an interactive communication task BIBREF57 .", "id": 2494, "question": "Was this experiment done in a lab?", "title": "Speakers account for asymmetries in visual perspective so listeners don't have to" }, { "answers": [ "" ], "context": "Recently, with the advancement of deep learning, great progress has been made in end-to-end (E2E) automatic speech recognition (ASR). With the goal of directly mapping a sequence of speech frames to a sequence of output tokens, an E2E ASR system incorporates the acoustic model, language model and pronunciation model of a conventional ASR system into a single deep neural network (DNN). 
The most dominant approaches for E2E ASR include connectionist temporal classification (CTC) BIBREF0, BIBREF1, recurrent neural network transducer (RNNT) BIBREF2 and attention-based encoder-decoder (AED) models BIBREF3, BIBREF4, BIBREF5.", "id": 2495, "question": "How long is the new model trained on 3400 hours of data?", "title": "Domain Adaptation via Teacher-Student Learning for End-to-End Speech Recognition" }, { "answers": [ "" ], "context": "Open-domain question answering (OpenQA) aims to seek answers for a broad range of questions from large knowledge sources, e.g., structured knowledge bases BIBREF0 , BIBREF1 and unstructured documents from search engines BIBREF2 . In this paper we focus on the OpenQA task with unstructured knowledge sources retrieved by a search engine.", "id": 2496, "question": "How much does HAS-QA improve over baselines?", "title": "HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering" }, { "answers": [ "The framework jointly learns parametrized QA and QG models subject to the constraint in equation 2. In more detail, they minimize QA and QG loss functions, with a third dual loss for regularization." ], "context": "Question answering (QA) and question generation (QG) are two fundamental tasks in natural language processing BIBREF0 , BIBREF1 . Both tasks involve reasoning between a question sequence $q$ and an answer sentence $a$ . In this work, we take answer sentence selection BIBREF2 as the QA task, which is a fundamental QA task and is very important for many applications such as search engines and conversational bots. The task of QA takes a question sentence $q$ and a list of candidate answer sentences as input, and finds the most relevant answer sentence from the candidate list. The task of QG takes a sentence $a$ as input, and generates a question sentence $q$ which could be answered by $a$ .", "id": 2497, "question": "What does \"explicitly leverages their probabilistic correlation to guide the training process of both models\" mean?", "title": "Question Answering and Question Generation as Dual Tasks" }, { "answers": [ "" ], "context": "To model language, we must represent words. We can imagine representing every word with a binary one-hot vector corresponding to a dictionary position. But such a representation contains no valuable semantic information: distances between word vectors represent only differences in alphabetic ordering. Modern approaches, by contrast, learn to map words with similar meanings to nearby points in a vector space BIBREF0 , from large datasets such as Wikipedia. These learned word embeddings have become ubiquitous in predictive tasks.", "id": 2498, "question": "How does this compare to contextual embedding methods?", "title": "Multimodal Word Distributions" }, { "answers": [ "" ], "context": "Visual question answering (VQA) is a classic task that combines visual and textual modalities in a unified system. Taking an image and a natural language question about it as input, a VQA system is supposed to output the corresponding natural language answer. The VQA problem requires image and text understanding, common sense, and knowledge inference. 
Solving the VQA problem would be great progress towards the goal of the Visual Turing Test, and is also conducive to tasks such as multi-modal retrieval, image captioning and accessibility facilities.", "id": 2499, "question": "Does the new system utilize pre-extracted bounding boxes and/or features?", "title": "Task-driven Visual Saliency and Attention-based Visual Question Answering" }, { "answers": [ "" ], "context": "Saliency generally comes from contrasts between a pixel or an object and its surroundings, describing how much it stands out. It could facilitate learning by focusing on the most pertinent regions. Saliency detection methods mimic human attention as studied in psychology, in both bottom-up and top-down manners BIBREF15 . Typical saliency methods BIBREF16 , BIBREF17 are pixel- or object-oriented, which is not appropriate for VQA due to center bias and the difficulty of collecting large-scale eye-tracking data.", "id": 2500, "question": "To which previous papers does this work compare its results?", "title": "Task-driven Visual Saliency and Attention-based Visual Question Answering" }, { "answers": [ "" ], "context": "Human beings are rational, and a major component of rationality is the ability to reason. Reasoning is the process of combining facts and beliefs to make new decisions BIBREF0 , as well as the ability to manipulate knowledge to draw inferences BIBREF1 . Commonsense reasoning utilizes the basic knowledge that reflects our natural understanding of the world and human behaviors, which is common to all humans.", "id": 2501, "question": "Do they consider other tasks?", "title": "KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning" }, { "answers": [ "" ], "context": "Natural disasters take various forms, such as floods, earthquakes, volcanic eruptions and storms, but according to the World Meteorological Organization (WMO), floods are among the most lethal and prominent forms of natural disaster for most countries. The National Weather Service (NWS) reported 28,826 flash flood events in the United States from October 2007 to October 2015, which resulted in 278 lives lost and millions of dollars in crop and property damage BIBREF0. Monitoring and detecting floods in advance, and proactively working to save people's lives while minimizing damage, is among the most important tasks nowadays. In recent times, humans are extremely active on social media such as Twitter, Facebook, Youtube, Flickr, Instagram, etc. People use these platforms extensively to share crucial information via messages, photos and videos in real time, for interaction and information dissemination on every topic, thereby acting as active human sensors. It has been observed in the past few years via several case studies that social media contributes significantly to crisis-related feeds BIBREF1, being used extensively and proving extremely helpful for situational awareness in crisis management BIBREF2, BIBREF3, BIBREF4. Emergency first-responder agencies, humanitarian organizations, city authorities and other end users are always looking for the right amount and type of content that would be helpful in crisis scenarios, but social media generally provides an overwhelming amount of unlabeled data, so it is crucial to filter out the right kind of information using text classification. 
Advances in Artificial Intelligence (AI), including machine learning and Natural Language Processing (NLP) methods, can track and support the humanitarian relief process and extract meaningful insights from the huge amount of social media data generated regularly, in a timely manner.", "id": 2502, "question": "What were the model's results on flood detection?", "title": "Localized Flood DetectionWith Minimal Labeled Social Media Data Using Transfer Learning" }, { "answers": [ "" ], "context": "The growing active user base on social media has created a great opportunity for extracting crucial information in real time about various events and topics. Social media is used vigorously as a communication channel in times of crisis or natural disaster in order to convey actionable information to emergency responders, providing them with more situational-awareness context so that they can make better decisions for rescue operations, sending alerts, and reaching people right on time. Numerous works related to crisis management using social media content have been proposed; they are discussed in the following section.", "id": 2503, "question": "What dataset did they use?", "title": "Localized Flood DetectionWith Minimal Labeled Social Media Data Using Transfer Learning" }, { "answers": [ "" ], "context": "Emergency events such as natural or man-made disasters bring unique challenges for humanitarian response organizations. In particular, sudden-onset crisis situations demand that officials make fast decisions based on the minimal information available in order to deploy a rapid crisis response. However, information scarcity during time-critical situations hinders decision-making processes and delays response efforts BIBREF0 , BIBREF1 .", "id": 2504, "question": "What exactly is new about this stochastic gradient descent algorithm?", "title": "Applications of Online Deep Learning for Crisis Response Using Social Media Information" }, { "answers": [ "" ], "context": "We discuss two core models for addressing sequence labeling problems and describe, for each, training them in a single-model multilingual setting: (1) the Meta-LSTM BIBREF0 , an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1 .", "id": 2505, "question": "What codemixed language pairs are evaluated?", "title": "Small and Practical BERT Models for Sequence Labeling" }, { "answers": [ "" ], "context": "The Meta-LSTM is the best-performing model of the CoNLL 2018 Shared Task BIBREF2 for universal part-of-speech tagging and morphological features. The model is composed of 3 LSTMs: a character-BiLSTM, a word-BiLSTM and a single joint BiLSTM which takes the output of the character and word-BiLSTMs as input. The entire model structure is referred to as Meta-LSTM.", "id": 2506, "question": "How do they compress the model?", "title": "Small and Practical BERT Models for Sequence Labeling" }, { "answers": [ "" ], "context": "BERT is a transformer-based model BIBREF3 pretrained with a masked-LM task on millions of words of text. In this paper our BERT-based experiments make use of the cased multilingual BERT model available on GitHub and pretrained on 104 languages.", "id": 2507, "question": "What is the multilingual baseline?", "title": "Small and Practical BERT Models for Sequence Labeling" }, { "answers": [ "" ], "context": "Dialogue Act Recognition (DAR) is an essential problem in modeling and detecting discourse structure. 
The goal of DAR is to attach semantic labels to each utterance in a conversation and recognize the speaker's intention, which can be regarded as a sequence labeling task. Many applications have benefited from the use of automatic dialogue act recognition such as dialogue systems, machine translation, automatic speech recognition, topic identification and talking avatars BIBREF0 BIBREF1 BIBREF2 . One of the primary applications of DAR is to support task-oriented discourse agent systems. Knowing the DAs of past utterances can help ease the prediction of the current DA state, and thus helps to narrow the range of utterance generation topics for the current turn. For instance, the \"Greeting\" and \"Farewell\" acts are often followed by other utterances of the same type, and the \"Answer\" act often responds to a former \"Question\" type utterance. Thus if we can correctly recognize the current dialogue act, we can easily predict the following utterance act and generate a corresponding response. Table 1 shows a snippet of the kind of discourse structure in which we are interested.", "id": 2508, "question": "Which features do they use?", "title": "Dialogue Act Recognition via CRF-Attentive Structured Network" }, { "answers": [ "" ], "context": "In this section, we study the problem of dialogue act recognition from the viewpoint of extending rich CRF-attentive structural dependencies. We first present hierarchical semantic inference with a memory mechanism at three levels: word level, utterance level and conversation level. We then develop graphical structured attention over the linear-chain conditional random field to fully utilize the contextual dependencies.", "id": 2509, "question": "By how much do they outperform state-of-the-art solutions on SWDA and MRDA?", "title": "Dialogue Act Recognition via CRF-Attentive Structured Network" }, { "answers": [ "" ], "context": "Microblogging environments, which allow users to post short messages, have gained increased popularity in the last decade. Twitter, which is one of the most popular microblogging platforms, has become an interesting platform for exchanging ideas, following recent developments and trends, or discussing any possible topic. Since Twitter has an enormously wide range of users with varying interests and sharing preferences, a significant amount of content is being created rapidly. Therefore, mining such platforms can extract valuable information. As a consequence, extracting information from Twitter has become a hot topic of research. For Twitter text mining, one popular research area is opinion mining or sentiment analysis, which is surely useful for companies or political parties to gather information about their services and products BIBREF0 . Another popular research area is content analysis, or more specifically topic modeling, which is useful for text classification and filtering applications on Twitter BIBREF1 . Moreover, event monitoring and trend analysis are other examples of useful application areas on microblog texts BIBREF2 .", "id": 2510, "question": "What type and size of word embeddings were used?", "title": "Named Entity Recognition on Twitter for Turkish using Semi-supervised Learning with Word Embeddings" }, { "answers": [ "" ], "context": "There are various important studies of NER on Twitter for English. Ritter-2011 presented a two-phase NER system for tweets, T-NER, using Conditional Random Fields (CRF) and including tweet-specific features. 
Liu-2011 proposed a hybrid NER approach based on K-Nearest Neighbors and linear CRF. Liu-2012 presented a factor graph-based method for NER on Twitter. Li-2012 described an unsupervised approach for tweets, called TwiNER. Bontcheva-2013 described an NLP pipeline for tweets, called TwitIE. Very recently, Cherry-2015 have shown the effectiveness of Brown clusters and word vectors on Twitter NER for English.", "id": 2511, "question": "What data was used to build the word embeddings?", "title": "Named Entity Recognition on Twitter for Turkish using Semi-supervised Learning with Word Embeddings" }, { "answers": [ "" ], "context": "Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea. Due to the difficulties in rewriting long documents, a large body of research on this topic has focused on paragraph-level article summarization. Among them, sequence-to-sequence models have become the mainstream and some have achieved state-of-the-art performance BIBREF0 , BIBREF1 , BIBREF2 . In general, the only available information for these models during decoding is simply the source article representations from the encoder and the generated words from the previous time steps BIBREF2 , BIBREF3 , BIBREF4 , while the previous words are also generated based on the article representations. Since natural language text is complicated and verbose in nature, and training data is insufficient in size to help the models distinguish important article information from noise, sequence-to-sequence models tend to deteriorate with the accumulation of word generation, e.g., they generate irrelevant and repeated words frequently BIBREF5 .", "id": 2512, "question": "How are templates discovered from training data?", "title": "BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization" }, { "answers": [ "efficiency task aimed at reducing the number of parameters while minimizing drop in performance" ], "context": "The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance.", "id": 2513, "question": "What is WNGT 2019 shared task?", "title": "Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task" }, { "answers": [ "" ], "context": "Grammatical error correction (GEC) is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most of the grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. 
Standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. Such systems are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0 , BIBREF1 , BIBREF2 . In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors, which constitute a large class of learners' errors.", "id": 2514, "question": "Do they use pretrained word representations in their neural network models?", "title": "Neural Network Translation Models for Grammatical Error Correction" }, { "answers": [ "" ], "context": "In the past decade, there has been increasing attention on grammatical error correction in English, mainly due to the growing number of English as a Second Language (ESL) learners around the world. The popularity of this problem in natural language processing research grew further through Helping Our Own (HOO) and the CoNLL shared tasks BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Most published work in GEC aimed at building specific classifiers for different error types and then using them to build hybrid systems BIBREF9 , BIBREF10 . One of the first approaches to using SMT for GEC focused on the correction of countability errors of mass nouns (e.g., many informations INLINEFORM0 much information) BIBREF11 . They had to use an artificially constructed parallel corpus for training their SMT system. Later, the availability of large-scale error-corrected data BIBREF12 further improved SMT-based GEC systems.", "id": 2515, "question": "How do they combine the two proposed neural network models?", "title": "Neural Network Translation Models for Grammatical Error Correction" }, { "answers": [ "" ], "context": "In this paper, the task of grammatical error correction is formulated as a translation task from the language of `bad' English to the language of `good' English. That is, the source sentence is written by a second language learner and potentially contains grammatical errors, whereas the target sentence is the corrected fluent sentence. We use a phrase-based machine translation framework BIBREF18 for translation, which employs a log-linear model to find the best translation INLINEFORM0 given a source sentence INLINEFORM1 . The best translation is selected according to the following equation: INLINEFORM2 ", "id": 2516, "question": "Which dataset do they evaluate grammatical error correction on?", "title": "Neural Network Translation Models for Grammatical Error Correction" }, { "answers": [ "" ], "context": "Over the past few years, major commercial search engines have enriched and improved the user experience by proactively presenting related entities for a query along with the regular web search results. 
Figure FIGREF3 shows an example of Alibaba ShenMa search engine's entity recommendation results presented on the panel of its mobile search result page.", "id": 2517, "question": "How many users/clicks does their search engine have?", "title": "Context-aware Deep Model for Entity Recommendation in Search Engine at Alibaba" }, { "answers": [ "" ], "context": "Time-critical analysis of social media data streams is important for many application areas. For instance, responders to humanitarian disasters (e.g., earthquake, flood) need information about the disasters to determine what help is needed and where. This information usually breaks out on social media before other sources. During the onset of a crisis situation, rapid analysis of messages posted on microblogging platforms such as Twitter can help humanitarian organizations like the United Nations gain situational awareness, learn about urgent needs of affected people at different locations, and decide on actions accordingly BIBREF0 , BIBREF1 .", "id": 2518, "question": "What was their baseline comparison?", "title": "Rapid Classification of Crisis-Related Data on Social Networks using Convolutional Neural Networks" }, { "answers": [ "Some variability is observed, but it is not significant. BERT does not seem to gain much more syntactic information than is available at the type level." ], "context": "Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6.", "id": 2519, "question": "Was any variation in results observed based on language typology?", "title": "Information-Theoretic Probing for Linguistic Structure" }, { "answers": [ "" ], "context": "Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. 
Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS.", "id": 2520, "question": "Does the work explicitly study the relationship between model complexity and linguistic structure encoding?", "title": "Information-Theoretic Probing for Linguistic Structure" }, { "answers": [ "" ], "context": "Among the several senses that The Oxford English Dictionary, the most venerable dictionary of English, provides for the word event are the following.", "id": 2521, "question": "Which datasets are used in this work?", "title": "Detecting and Extracting Events from Text Documents" }, { "answers": [ "" ], "context": "Open domain semantic parsing aims to map natural language utterances to structured meaning representations. Recently, seq2seq-based approaches have achieved promising performance with structure-aware networks, such as sequence-to-action BIBREF0 and STAMP BIBREF1.", "id": 2522, "question": "Does the training dataset provide logical form supervision?", "title": "A Sketch-Based System for Semantic Parsing" }, { "answers": [ "" ], "context": "The MSParS dataset was published for the NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of the samples are used as the training set, 10% as the validation set, and the rest as the test set. 3000 hard samples are selected from the test set. The metric for this dataset is exact-match accuracy on both the full test set and the hard test subset. Each sample is composed of the question, the logical form, the parameters (entity/value/type) and the question type, as Table TABREF3 demonstrates.", "id": 2523, "question": "What is the difference between the full test set and the hard test set?", "title": "A Sketch-Based System for Semantic Parsing" }, { "answers": [ "" ], "context": "The cocktail party problem BIBREF0 , BIBREF1 , referring to multi-talker overlapped speech recognition, is critical to enable automatic speech recognition (ASR) scenarios such as automatic meeting transcription, automatic captioning for audio/video recordings, and multi-party human-machine interactions, where overlapping speech is commonly observed and all streams need to be transcribed. The problem is still one of the hardest problems in ASR, despite encouraging progress BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 .", "id": 2524, "question": "How is the discriminative training formulation different from the standard ones?", "title": "Progressive Joint Modeling in Unsupervised Single-channel Overlapped Speech Recognition" }, { "answers": [ "" ], "context": "Unsupervised single-channel overlapped speech recognition refers to the speech recognition problem when multiple unseen talkers speak at the same time and only a single channel of overlapped speech is available. Unlike in the supervised mode, there is no prior knowledge of the speakers in the evaluation stage.", "id": 2525, "question": "How are the two datasets artificially overlapped?", "title": "Progressive Joint Modeling in Unsupervised Single-channel Overlapped Speech Recognition" }, { "answers": [ "" ], "context": "Mining Twitter data has increasingly been attracting much research attention in many NLP applications such as sentiment analysis BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and stock market prediction BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Recently, Davidov2010 and Reyes2013 have shown that Twitter data includes a high volume of “ironic” tweets. 
For example, a user can use positive words in a Twitter message to convey her intended negative meaning (e.g., “It is awesome to go to bed at 3 am #not”). This poses a particular research challenge in assigning correct sentiment labels to ironic tweets BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .", "id": 2526, "question": "What baseline system is used?", "title": "NIHRIO at SemEval-2018 Task 3: A Simple and Accurate Neural Network Model for Irony Detection in Twitter" }, { "answers": [ "" ], "context": "The dataset consists of 4,618 tweets (2,222 ironic + 2,396 non-ironic) that are manually labelled by three students. Some pre-processing steps were applied to the dataset, such as replacing the emoji icons in a tweet with descriptive text using the Python emoji package. Additionally, all the ironic hashtags, such as #not, #sarcasm, #irony, in the dataset have been removed. This makes it difficult to correctly predict the label of a tweet. For example, “@coreybking thanks for the spoiler!!!! #not” is an ironic tweet, but without #not, it probably is a non-ironic tweet. The dataset is split into the training and test sets as detailed in Table TABREF5 .", "id": 2527, "question": "What type of lexical, syntactic, semantic and polarity features are used?", "title": "NIHRIO at SemEval-2018 Task 3: A Simple and Accurate Neural Network Model for Irony Detection in Twitter" }, { "answers": [ "" ], "context": "Writing a summary is a different task compared to producing a longer article. As a consequence, it is likely that the topic and discourse moves made in summaries differ from those in regular articles. In this work, we present a powerful extractive summarization system which exploits rich summary-internal structure to perform content selection, redundancy reduction, and even predict the target summary length, all in one joint model.", "id": 2528, "question": "How does nextsum work?", "title": "What comes next? Extractive summarization by next-sentence prediction" }, { "answers": [ "There is no reason to think that this approach wouldn't also be successful for other technical domains. Technical terms are replaced with tokens, so as long as there is a corresponding process for identifying and replacing technical terms in the new domain, this approach could be viable." ], "context": "Neural machine translation (NMT), a new approach to solving machine translation, has achieved promising results BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . An NMT system builds a simple large neural network that reads the entire input source sentence and generates an output translation. The entire neural network is jointly trained to maximize the conditional probability of a correct translation of a source sentence with a bilingual corpus. Although NMT offers many advantages over traditional phrase-based approaches, such as a small memory footprint and simple decoder implementation, conventional NMT is limited when it comes to larger vocabularies. This is because the training complexity and decoding complexity proportionally increase with the number of target words. Words that are out of vocabulary are represented by a single unknown token in translations, as illustrated in Figure 1 . The problem becomes more serious when translating patent documents, which contain several newly introduced technical terms.", "id": 2529, "question": "Can the approach be generalized to other technical domains as well? 
", "title": "Translation of Patent Sentences with a Large Vocabulary of Technical Terms Using Neural Machine Translation" }, { "answers": [ "" ], "context": "Currency trading (Forex) is the largest world market in terms of volume. We analyze trading and tweeting about the EUR-USD currency pair over a period of three years. First, a large number of tweets were manually labeled, and a Twitter stance classification model is constructed. The model then classifies all the tweets by the trading stance signal: buy, hold, or sell (EUR vs. USD). The Twitter stance is compared to the actual currency rates by applying the event study methodology, well-known in financial economics. It turns out that there are large differences in Twitter stance distribution and potential trading returns between the four groups of Twitter users: trading robots, spammers, trading companies, and individual traders. Additionally, we observe attempts of reputation manipulation by post festum removal of tweets with poor predictions, and deleting/reposting of identical tweets to increase the visibility without tainting one's Twitter timeline. ", "id": 2530, "question": "How many tweets were manually labelled? ", "title": "Forex trading and Twitter: Spam, bots, and reputation manipulation" }, { "answers": [ "The same 2K set from Gigaword used in BIBREF7" ], "context": "Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible among other things by the popularization of the neural sequence-to-sequence (seq2seq) paradigm BIBREF0, BIBREF1, BIBREF2, the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3, BIBREF4, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus which comprise of over 3.8M shorter and 300K longer articles and aligned summaries respectively. Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms.", "id": 2531, "question": "What dataset they use for evaluation?", "title": "Unsupervised Text Summarization via Mixed Model Back-Translation" }, { "answers": [ "" ], "context": "Today most of business-related information is transmitted in an electronic form, such as emails. Therefore, converting these messages into an easily analyzable representation could open numerous business opportunities, as a lot of them are not used fully because of the difficulty to build bespoke parsing methods. In particular, a great number of these transmissions are semi-structured text, which doesn’t necessarily follows the classic english grammar. As seen in Fig. 1 , they can be under the form of tables containing diverse elements, words and numbers, afterwards referred to as tokens.", "id": 2532, "question": "What is the source of the tables?", "title": "Putting Self-Supervised Token Embedding on the Tables" }, { "answers": [ "all regions except those that are colored black" ], "context": "Human language reflects cultural, political, and social evolution. Words are the atoms of language. Their meanings and usage patterns reveal insight into the dynamical process by which society changes. 
Indeed, the increasing frequency with which electronic text is used as a means of communicating, e.g., through email, text messaging, and social media, offers us the opportunity to quantify previously unobserved mechanisms of linguistic development.", "id": 2533, "question": "Which regions of the United States do they consider?", "title": "English verb regularization in books and tweets" }, { "answers": [ "" ], "context": "To be consistent with prior work, we chose the verb list for our project to match that of Michel et al. BIBREF1 . When comparing BE with AE, we use the subset of verbs that form the irregular past tense with the suffix -t. When calculating frequencies or token counts for the `past tense' we use both the preterite and past participle of the verb. See #1 for a complete tabulation of all verb forms.", "id": 2534, "question": "Why did they only consider six years of published books?", "title": "English verb regularization in books and tweets" }, { "answers": [ "" ], "context": "In the past 18 months, advances on many Natural Language Processing (NLP) tasks have been dominated by deep learning models and, more specifically, the use of Transfer Learning methods BIBREF0 in which a deep neural network language model is pretrained on a web-scale unlabelled text dataset with a general-purpose training objective before being fine-tuned on various downstream tasks. Following noticeable improvements using Long Short-Term Memory (LSTM) architectures BIBREF1, BIBREF2, a series of works combining Transfer Learning methods with large-scale Transformer architectures BIBREF3 has repeatedly advanced the state-of-the-art on NLP tasks ranging from text classification BIBREF4, language understanding BIBREF5, BIBREF6, BIBREF7, machine translation BIBREF8, and zero-shot language generation BIBREF9 up to co-reference resolution BIBREF10 and commonsense inference BIBREF11.", "id": 2535, "question": "What state-of-the-art general-purpose pretrained models are made available under the unified API? ", "title": "HuggingFace's Transformers: State-of-the-art Natural Language Processing" }, { "answers": [ "they use ROC curves and cross-validation" ], "context": "For the past 20 years, topic models have been used as a means of dimension reduction on text data, in order to ascertain underlying themes, or `topics', from documents. These probabilistic models have frequently been applied to machine learning problems, such as web spam filtering BIBREF0 , database sorting BIBREF1 and trend detection BIBREF2 .", "id": 2536, "question": "How is performance measured?", "title": "A framework for streamlined statistical prediction using topic models" }, { "answers": [ "" ], "context": "As one of the prominent natural language generation tasks, neural abstractive text summarization (NATS) has gained a lot of popularity BIBREF0 , BIBREF1 , BIBREF2 . Different from extractive text summarization BIBREF3 , BIBREF4 , BIBREF5 , NATS relies on modern deep learning models, particularly sequence-to-sequence (Seq2Seq) models, to generate words from a vocabulary based on the representations/features of source documents BIBREF0 , BIBREF6 , so that it can generate high-quality summaries that are verbally innovative and can also easily incorporate external knowledge BIBREF1 . 
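As a concrete illustration of this Seq2Seq-based abstractive setup (and of the unified pretrained-model API that the Transformers entry above asks about), here is a minimal sketch using the Hugging Face pipeline; the checkpoint name is one arbitrary public example, not a model from any of these papers:

```python
from transformers import pipeline

# Any seq2seq summarization checkpoint from the model hub could be substituted.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Neural abstractive summarizers read a source document and generate a "
    "shorter text word by word from a vocabulary, rather than copying whole "
    "sentences the way extractive systems do."
)
result = summarizer(document, max_length=30, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```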
Many NATS models have achieved better performance in terms of the commonly used evaluation measures (such as the ROUGE BIBREF7 score) compared to extractive text summarization approaches BIBREF2 , BIBREF8 , BIBREF9 .", "id": 2537, "question": "What models are included in the toolkit?", "title": "LeafNATS: An Open-Source Toolkit and Live Demo System for Neural Abstractive Text Summarization" }, { "answers": [ "" ], "context": " In this work, we aim to develop an automatic Language-Based Image Editing (LBIE) system. Given a source image, which can be a sketch, a grayscale image or a natural image, the system will automatically generate a target image by editing the source image following natural language instructions provided by users. Such a system has a wide range of applications, from Computer-Aided Design (CAD) to Virtual Reality (VR). As illustrated in Figure 1 , a fashion designer presents a sketch of a pair of new shoes (i.e., the source image) to a customer, who can request modifications to the style and color in a verbal description, which can then be taken by the LBIE system to change the original design. The final output (i.e., the target image) is the revised and enriched design that meets the customer's requirement. Figure 2 showcases the use of LBIE for VR. While most VR systems still use button-controlled or touchscreen interfaces, LBIE provides a natural user interface for future VR systems, where users can easily modify the virtual environment via natural language instructions.", "id": 2538, "question": "Is there any human evaluation involved in evaluating this framework?", "title": "Language-Based Image Editing with Recurrent Attentive Models" }, { "answers": [ "" ], "context": "The MULTEXT-East project (Multilingual Text Tools and Corpora for Central and Eastern European Languages) ran from '95 to '97 and developed standardised language resources for six Central and Eastern European languages, as well as for English, the “hub” language of the project BIBREF0. The project was a spin-off of the MULTEXT project BIBREF1, which pursued similar goals for six Western European languages. The main results of the project were morphosyntactic specifications defining the tagsets for lexical and corpus annotations in a common format, lexical resources, and annotated multilingual corpora. In addition to delivering resources, a focus of the project was also the adoption and promotion of encoding standardization. On the one hand, the morphosyntactic annotations and lexicons were developed in the formalism used in MULTEXT, itself based on the specifications of the Expert Advisory Group on Language Engineering Standards, EAGLES BIBREF2. On the other, the corpus resources were encoded in SGML, using CES, the Corpus Encoding Standard BIBREF3, a derivative of the Text Encoding Initiative Guidelines, version P3 BIBREF4.", "id": 2539, "question": "How big is the multilingual dataset?", "title": "MULTEXT-East" }, { "answers": [ "" ], "context": "Businesses rely on contracts to capture critical obligations with other parties, such as scope of work, amounts owed, and cancellation policies. Various efforts have gone into automatically extracting and classifying these terms. These efforts have usually been modeled as classification, entity extraction, and relation extraction tasks. 
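A minimal sketch of the kind of BERT-based clause classification alluded to above; the clause texts, labels, and hyperparameters are invented for illustration, and this is not the paper's actual setup:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical binary task: does a clause state a cancellation policy?
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

clauses = ["Either party may terminate this agreement with 30 days notice.",
           "The supplier shall deliver the goods to the buyer's warehouse."]
labels = torch.tensor([1, 0])  # made-up labels

batch = tokenizer(clauses, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
print(float(loss))
```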
In this paper we focus on classification, but in our application we have found that our findings apply equally, and sometimes more profoundly, to other tasks.", "id": 2540, "question": "How big is the dataset used for fine-tuning BERT?", "title": "BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding" }, { "answers": [ "" ], "context": "Prompt: What is your team's vision for your Socialbot? How do you want your customers to feel at the end of an interaction with your socialbot? How would your team measure success in competition?", "id": 2541, "question": "How big are the datasets for the 2019 Amazon Alexa competition?", "title": "Proposal Towards a Personalized Knowledge-powered Self-play Based Ensemble Dialog System" }, { "answers": [ "They use self-play learning, optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data." ], "context": "Prompt: Please share a sample interaction/conversation you expect your Socialbot to achieve by the end of the Competition.", "id": 2542, "question": "What is novel in the authors' approach?", "title": "Proposal Towards a Personalized Knowledge-powered Self-play Based Ensemble Dialog System" }, { "answers": [ "Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset." ], "context": "Corresponding author email: tshubhi@amazon.com. Paper submitted to IEEE ICASSP 2020", "id": 2543, "question": "What dataset is used for train/test of this method?", "title": "Dynamic Prosody Generation for Speech Synthesis using Linguistics-Driven Acoustic Embedding Selection" }, { "answers": [ "The mixed objective improves EM by 2.5% and F1 by 2.2%" ], "context": "Existing state-of-the-art question answering models are trained to produce exact answer spans for a question and a document. In this setting, a ground truth answer used to supervise the model is defined as a start and an end position within the document. Existing training approaches optimize using cross entropy loss over the two positions. However, this suffers from a fundamental disconnect between the optimization, which is tied to the position of a particular ground truth answer span, and the evaluation, which is based on the textual content of the answer. This disconnect is especially harmful in cases where answers that are textually similar to, but distinct in positions from, the ground truth are penalized in the same fashion as answers that are textually dissimilar. For example, suppose we are given the sentence “Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history”, the question “which team is considered to be one of the greatest teams in NBA history”, and a ground truth answer of “the Golden State Warriors team of 2017”. 
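The example continues below. To make the disconnect concrete, here is a minimal sketch of the SQuAD-style token-overlap F1 that evaluation actually uses (simplistic whitespace tokenization, no article stripping):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted answer string and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

gold = "the Golden State Warriors team of 2017"
print(token_f1("Warriors", gold))  # 0.25: textually similar, partial credit
print(token_f1("history", gold))   # 0.0: textually dissimilar
```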
The span “Warriors” is also a correct answer, but from the perspective of traditional cross entropy based training it is no better than the span “history”.", "id": 2544, "question": "How big is the gap between using the proposed objective and using only the cross-entropy objective?", "title": "DCN+: Mixed Objective and Deep Residual Coattention for Question Answering" }, { "answers": [ "" ], "context": "A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and paired texts BIBREF0 , BIBREF1 , BIBREF2 . These correspondences describe how data representations are expressed in natural language (content realisation) but also indicate which subset of the data is verbalised in the text (content selection).", "id": 2545, "question": "What is multi-instance learning?", "title": "Bootstrapping Generators from Noisy Data" }, { "answers": [ "5 domains: software, stuff, african wildlife, healthcare, datatypes" ], "context": "Within the field of ontology engineering, Competency Questions (CQs) BIBREF0 are natural language questions outlining the scope of knowledge represented by an ontology. They represent functional requirements in the sense that the developed ontology or an ontology-based information system should be able to answer them, and hence contain all the relevant knowledge. For example, a CQ may be What are the implementations of the C4.5 algorithm?, indicating that the ontology needs to contain classes such as Algorithm, with C4.5 as a subclass of Algorithm, and something about implementations such that the answer to the CQ will be non-empty.", "id": 2546, "question": "How many domains of ontologies do they gather data from?", "title": "Competency Questions and SPARQL-OWL Queries Dataset and Analysis" }, { "answers": [ "" ], "context": "Answering questions posed in natural language is a fundamental AI task, with a large number of impressive QA systems built over the years. Today's Internet search engines, for instance, can successfully retrieve factoid style answers to many natural language queries by efficiently searching the Web. Information Retrieval (IR) systems work under the assumption that answers to many questions of interest are often explicitly stated somewhere BIBREF0 , and all one needs, in principle, is access to a sufficiently large corpus. Similarly, statistical correlation based methods, such as those using Pointwise Mutual Information or PMI BIBREF1 , work under the assumption that many questions can be answered by looking for words that tend to co-occur with the question words in a large corpus.", "id": 2547, "question": "How is the semi-structured knowledge base created?", "title": "Question Answering via Integer Programming over Semi-Structured Knowledge" }, { "answers": [ "Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools." ], "context": "Morphology deals with the internal structure of words BIBREF0 , BIBREF1 . Languages of the world have different word production processes. Morphological richness varies from language to language, depending on their linguistic typology. 
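One crude but common proxy for the morphological richness just mentioned is the type-token ratio over surface forms; a toy sketch with invented word lists (not the measures actually used in the paper):

```python
def type_token_ratio(tokens):
    """Distinct surface forms per token: higher suggests richer morphology."""
    return len(set(tokens)) / len(tokens)

# Hypothetical samples: a morphologically richer language tends to show more
# distinct surface forms in the same amount of text.
sample_a = "el perro corre los perros corrieron la perra correra".split()
sample_b = "the dog runs the dogs ran the dog will run".split()
print(type_token_ratio(sample_a))  # 1.0
print(type_token_ratio(sample_b))  # 0.7
```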
In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting existing methods, since the amount of semantic and grammatical information encoded at the word level may vary significantly from language to language.", "id": 2548, "question": "What is the practical application of this paper?", "title": "Comparing morphological complexity of Spanish, Otomi and Nahuatl" }, { "answers": [ "" ], "context": "The notion of word sense is central to computational lexical semantics. Word senses can be either encoded manually in lexical resources or induced automatically from text. The former knowledge-based sense representations, such as those found in the BabelNet lexical semantic network BIBREF0 , are easily interpretable by humans due to the presence of definitions, usage examples, taxonomic relations, related words, and images. The cost of such interpretability is that every element mentioned above is encoded manually in one of the underlying resources, such as Wikipedia. Unsupervised knowledge-free approaches, e.g. BIBREF1 , BIBREF2 , require no manual labor, but the resulting sense representations lack the above-mentioned features enabling interpretability. For instance, systems based on sense embeddings rely on dense uninterpretable vectors. Therefore, the meaning of a sense can be interpreted only on the basis of a list of related senses.", "id": 2549, "question": "Do they use a neural model for their task?", "title": "Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation" }, { "answers": [ "Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22), and an abstractor identical to the one proposed in BIBREF8." ], "context": "The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping the salient information of the original text. In general, there are two ways to do text summarization: extractive and abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary.", "id": 2550, "question": "What's the method used here?", "title": "Summary Level Training of Sentence Rewriting for Abstractive Summarization" }, { "answers": [ "AE-HCN outperforms by 17%, AE-HCN-CNN outperforms by 20% on average" ], "context": "Recently, there has been a surge of excitement in developing chatbots for various purposes in research and enterprise. Data-driven approaches offered by common bot building platforms (e.g. Google Dialogflow, Amazon Alexa Skills Kit, Microsoft Bot Framework) make it possible for a wide range of users to easily create dialog systems with a limited amount of data in their domain of interest. Although most task-oriented dialog systems are built for a closed set of target domains, any failure to detect out-of-domain (OOD) utterances and respond with an appropriate fallback action can lead to a frustrating user experience. There have been a number of prior approaches to OOD detection that require both in-domain (IND) and OOD data BIBREF0 , BIBREF1 . However, it is a formidable task to collect sufficient data to cover the in-theory unbounded variety of OOD utterances. In contrast, BIBREF2 introduced an in-domain verification method that requires only IND utterances. 
Later, with the rise of deep neural networks, BIBREF3 proposed an autoencoder-based OOD detection method which surpasses prior approaches without access to OOD data. However, those approaches still have some restrictions: there must be multiple sub-domains to learn utterance representations, and one must set a decision threshold for OOD detection. This can prevent these methods from being used by most bots that focus on a single task.", "id": 2551, "question": "By how much does their method outperform state-of-the-art OOD detection?", "title": "Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation" }, { "answers": [ "Similar to standard convolutional networks, but they skip some input values, effectively operating on a broader scale." ], "context": "Keyword spotting (KWS) aims at detecting a pre-defined keyword or set of keywords in a continuous stream of audio. In particular, wake-word detection is an increasingly important application of KWS, used to initiate an interaction with a voice interface. In practice, such systems run on low-resource devices and listen continuously for a specific wake word. An effective on-device KWS therefore requires real-time response and high accuracy for a good user experience, while limiting memory footprint and computational cost.", "id": 2552, "question": "What are dilated convolutions?", "title": "Efficient keyword spotting using dilated convolutions and gating" }, { "answers": [ "" ], "context": "Knowledge graphs are a vital source for disambiguation and discovery in various tasks such as question answering BIBREF0 , information extraction BIBREF1 and search BIBREF2 . They are, however, known to suffer from data quality issues BIBREF3 . Most prominently, since formal knowledge is inherently sparse, relevant facts are often missing from the graph.", "id": 2553, "question": "What evaluation metrics were studied in this work?", "title": "An Open-World Extension to Knowledge Graph Completion Models" }, { "answers": [ "" ], "context": "Parameters of the encoder-decoder were tuned on a dedicated validation set. We experimented with different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.2, 0.3, 0.5) BIBREF11 and optimization techniques (AdaGrad BIBREF6 , AdaDelta BIBREF30 , Adam BIBREF15 and RMSprop BIBREF29 ). We also experimented with different batch sizes (8, 16, 32), and found improvements in runtime but no significant improvement in performance.", "id": 2554, "question": "Do they analyze ELMo?", "title": "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks" }, { "answers": [ "Optimized TF-IDF, iterated TF-IDF, BERT re-ranking." ], "context": "The Explanation Regeneration shared task asked participants to develop methods to reconstruct gold explanations for elementary science questions BIBREF1, using a new corpus of gold explanations BIBREF2 that provides supervision and instrumentation for this multi-hop inference task.", "id": 2555, "question": "What are the three methods presented in the paper?", "title": "Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation" }, { "answers": [ "Kaggle\nSubversive Kaggle\nWikipedia\nSubversive Wikipedia\nReddit\nSubversive Reddit " ], "context": "Online communities abound today, forming on social networks, on webforums, within videogames, and even in the comments sections of articles and videos. 
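Returning to the keyword-spotting entry above: the dilated convolutions it asks about insert gaps of (dilation - 1) samples between filter taps, so a stack of layers sees an exponentially wider input window at constant parameter cost. A minimal PyTorch sketch with toy channel counts (an illustration of the mechanism, not the paper's model):

```python
import torch
import torch.nn as nn

layers, receptive_field = [], 1
kernel, in_ch, channels = 3, 1, 16
for dilation in (1, 2, 4, 8):
    # padding keeps the frame count constant; dilation widens the window
    layers += [nn.Conv1d(in_ch, channels, kernel_size=kernel,
                         dilation=dilation, padding=dilation),
               nn.ReLU()]
    receptive_field += (kernel - 1) * dilation
    in_ch = channels

net = nn.Sequential(*layers)
audio_features = torch.randn(1, 1, 100)      # (batch, channels, frames)
print(net(audio_features).shape)             # torch.Size([1, 16, 100])
print("receptive field:", receptive_field)   # 1 + 2*(1+2+4+8) = 31 frames
```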
While this increased international contact and exchange of ideas has been a net positive, it has also been matched by an increase in the spread of high-risk and toxic content, a category which includes cyberbullying, racism, sexual predation, and other negative behaviors that are not tolerated in society. The two main strategies used by online communities to moderate themselves and stop the spread of toxic comments are automated filtering and human surveillance. However, given the sheer number of messages sent online every day, human moderation simply cannot keep up, and either leads to a severe slowdown of the conversation (if messages are pre-moderated before posting) or allows toxic messages to be seen and shared thousands of times before they are deleted (if they are post-moderated after being posted and reported). In addition, human moderation cannot easily scale up to the number of messages to monitor; for example, Facebook has a team of 20,000 human moderators, which is both massive compared to the total of 25,000 other employees in the company, and minuscule compared to the fact that its automated algorithms flagged messages that would require 180,000 human moderators to review. Keyword detection, on the other hand, is instantaneous, scales up to the number of messages, and prevents toxic messages from being posted at all, but it can only stop messages that use one of a small set of denied words, and is thus fairly easy to circumvent by introducing minor misspellings (e.g. writing "kl urself" instead of "kill yourself"). In BIBREF0 , the authors show how minor changes can elude even complex systems. These attempts to bypass the toxicity detection system are called subverting the system, and toxic users doing it are referred to as subversive users.", "id": 2556, "question": "What datasets did the authors use?", "title": "Impact of Sentiment Detection to Recognize Toxic and Subversive Online Comments" }, { "answers": [ "" ], "context": "This letter arises from two intriguing questions about human language. The first question is: To what extent can language, and also language evolution, be viewed as a graph-theoretical problem? Language is an amazing example of a system of interrelated units at different organization scales. Several recent works have indeed stressed the fact that human language can be viewed as a (complex) network of interacting parts BIBREF0, BIBREF1, BIBREF2, BIBREF3. Within the graph-based approach to human language, one may think of word-meaning mappings (that is, vocabularies) as bipartite graphs, formed by two sets: words and meanings BIBREF2.", "id": 2557, "question": "What are three possible phases for language formation?", "title": "Phase transitions in a decentralized graph-based approach to human language" }, { "answers": [ "" ], "context": "Comments: An approach to handle the OOV issue in multilingual BERT is proposed. A great many nice experiments were done, but ultimately (and in message board discussions) the reviewers agreed there wasn't enough novelty or result here to justify acceptance.", "id": 2558, "question": "How many parameters does the model have?", "title": "Improving Pre-Trained Multilingual Models with Vocabulary Expansion" }, { "answers": [ "" ], "context": "Following seminal work by Bengio and Collobert, the use of deep learning models for natural language processing (NLP) applications has received increasing attention in recent years. 
In parallel, initiated by the computer vision domain, there is also a trend toward understanding deep learning models through visualization techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 or through decision tree extraction BIBREF6 . Most work dedicated to understanding neural network classifiers for NLP tasks BIBREF7 , BIBREF8 uses gradient-based approaches. Recently, a technique called layer-wise relevance propagation (LRP) BIBREF4 has been shown to produce more meaningful explanations in the context of image classification BIBREF9 . In this paper, we apply the same LRP technique to an NLP task, where a neural network maps a sequence of word2vec vectors representing a text document to its category, and evaluate whether similar benefits in terms of explanation quality are observed.", "id": 2559, "question": "Do the experiments explore how various architectures and layers contribute towards certain decisions?", "title": "Explaining Predictions of Non-Linear Classifiers in NLP" }, { "answers": [ "" ], "context": "In recent years, social networking has grown and become prevalent among people, making it easy for them to interact and share with each other. However, every coin has two sides. Social networking also brings negative issues; hate speech is a hot topic in the domain of social media. With the freedom of speech on social networks and anonymity on the internet, some people feel free to post hateful and insulting comments. Hate speech can have an adverse effect on human behavior as well as directly affect society. Manually deleting each of those comments is time-consuming and tedious, which spurs research into building automated systems that detect hate speech and eliminate it. With such a system, we can detect and eliminate hate speech and thus reduce its spread on social media. For Vietnamese, we can manually apply specific feature extraction techniques in combination with string labeling algorithms such as Conditional Random Fields (CRF)[1], Hidden Markov Models (HMM)[2] or Entropy[3]. However, we would have to choose the features manually to obtain a model with high accuracy. Deep Neural Network architectures can handle the weaknesses of the above methods. In this report, we apply a Bidirectional Long Short-Term Memory (Bi-LSTM) network to build the model, combined with a word embedding matrix to increase its accuracy.", "id": 2560, "question": "What social media platform does the data come from?", "title": "Hate Speech Detection on Vietnamese Social Media Text using the Bidirectional-LSTM Model" }, { "answers": [ "Compared to baselines, SAN (Table 1) shows an improvement of 1.096% on EM and 0.689% on F1. Compared to other published SQuAD results (Table 2), SAN is ranked second." ], "context": "Machine reading comprehension (MRC) is a challenging task: the goal is to have machines read a text passage and then answer any question about the passage. This task is a useful benchmark to demonstrate natural language understanding, and also has important applications in, e.g., conversational agents and customer service support. It has been hypothesized that difficult MRC problems require some form of multi-step synthesis and reasoning. 
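Picking up the LRP entry above: relevance is redistributed backwards, layer by layer, in proportion to each input's contribution to the neurons it feeds. A minimal numpy sketch of the epsilon rule for a single linear layer, with toy weights (not the authors' implementation):

```python
import numpy as np

def lrp_linear(a, w, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer z = a @ w + b.

    a: (in,) activations, w: (in, out) weights, relevance_out: (out,) relevance.
    Returns the relevance redistributed onto the inputs, shape (in,).
    """
    z = a @ w + b                      # forward pre-activations
    denom = z + eps * np.sign(z)       # stabilized denominator
    contributions = a[:, None] * w     # each input's share of each z_j
    return (contributions / denom) @ relevance_out

a = np.array([1.0, 2.0, 0.5])          # toy input activations
w = np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.8]])
r_out = a @ w                          # start from the output scores
r_in = lrp_linear(a, w, np.zeros(2), r_out)
print(r_in, r_in.sum(), r_out.sum())   # relevance is (nearly) conserved
```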
For instance, the following example from the MRC dataset SQuAD BIBREF0 illustrates the need for synthesis of information across sentences and multiple steps of reasoning:", "id": 2561, "question": "How much performance improvement do they achieve on SQuAD?", "title": "Stochastic Answer Networks for Machine Reading Comprehension" }, { "answers": [ "" ], "context": "If you're good at replying to a single request, are you also likely to be good at doing dialogue? Much current work seems to assume that the answer to this question is yes, in that it attempts a scaling up from single pairs of utterance plus response to longer dialogues: see, e.g., the work on neural chatbots following on from BIBREF0, where the main evaluation metric is “next utterance retrieval”; and the work on visual dialogue BIBREF1, which views itself as a natural extension of visual question answering BIBREF2.", "id": 2562, "question": "Do the authors perform experiments using their proposed method?", "title": "Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings" }, { "answers": [ "" ], "context": "Deep and recurrent neural networks with large network capacity have become increasingly accurate for challenging language processing tasks. For example, machine translation models have been able to attain impressive accuracies, with models that use hundreds of millions BIBREF0 , BIBREF1 or billions BIBREF2 of parameters. These models, however, may not be feasible in all computational settings. In particular, models running on mobile devices are often constrained in terms of memory and computation.", "id": 2563, "question": "What NLP tasks do the authors evaluate feed-forward networks on?", "title": "Natural Language Processing with Small Feed-Forward Networks" }, { "answers": [ "" ], "context": "As time passes, language usage changes. For example, the names `Bert' and `Elmo' would only rarely make an appearance prior to 2018 in the context of scientific writing. After the publication of BERT BIBREF0 and ELMo BIBREF1, however, usage has increased in frequency. In the context of named entities on Twitter, it is also likely that these names would have been tagged as person prior to 2018, whereas they are now more likely to refer to an artefact. As such, their part-of-speech tags will also differ. Evidently, the evolution of language usage affects multiple natural language processing (NLP) tasks, and models based on data from one point in time cannot be expected to operate for an extended period of time.", "id": 2564, "question": "What are the three challenging tasks on which the authors evaluated their sequentially aligned representations?", "title": "Back to the Future -- Sequential Alignment of Text Representations" }, { "answers": [ "" ], "context": " BIBREF0 propose a reinforcement learning framework for question answering, called active question answering (ActiveQA), that aims to improve answering by systematically perturbing input questions (cf. BIBREF1 ). Figure 1 depicts the generic agent-environment framework. The agent (AQA) interacts with the environment (E) in order to answer a question ( $q_0$ ). The environment includes a question answering system (Q&A), and emits observations and rewards. A state $s_t$ at time $t$ is the sequence of observations and previous actions generated starting from $q_0$ : $s_t=x_0,u_0,x_1,\\ldots ,u_{t-1},x_t$ , where $x_i$ includes the question asked ( $q_{i}$ ), the corresponding answer returned by the QA system ( $a_i$ ), and possibly additional information such as features or auxiliary tasks. 
The agent includes an action scoring component (U), which produces an action $u_t$ by deciding whether to submit a new question to the environment or to return a final answer. Formally, $u_t\\in \\mathcal {Q}\\cup \\mathcal {A}$ , where $\\mathcal {Q}$ is the set of all possible questions, and $\\mathcal {A}$ is the set of all possible answers. The agent relies on a question reformulation system (QR), which provides candidate follow-up questions, and on an answer ranking system (AR), which scores the answers contained in $s_t$ . Each answer returned is assigned a reward. The objective is to maximize the expected reward over a set of questions.", "id": 2565, "question": "What is the difference in findings of Buck et al? It looks like the same conclusion was mentioned in Buck et al.", "title": "Analyzing Language Learned by an Active Question Answering Agent" }, { "answers": [ "The baseline is a multi-task architecture inspired by another paper." ], "context": "It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions.", "id": 2566, "question": "What is the baseline?", "title": "Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies" }, { "answers": [ "" ], "context": "When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context.", "id": 2567, "question": "What is the unsupervised task in the final layer?", "title": "Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies" }, { "answers": [ "" ], "context": "In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process: first, you read the words and assign a Part of Speech tag; you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech; and finally you build a parse tree on top of the chunks.", "id": 2568, "question": "How many supervised tasks are used?", "title": "Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies" }, { "answers": [ "The network architecture has a multi-task Bi-Directional Recurrent Neural Network, with an unsupervised sequence labeling task and a low-dimensional embedding layer between tasks. There is a hidden layer after each successive task with skip connections to the senior supervised layers." ], "context": "In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers. This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling); second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. 
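A minimal sketch of the layered multi-task supervision described in this entry: a junior task (POS) supervised on a lower BiLSTM layer, a senior task (chunking) on a higher one, with a dense label embedding passed between them. Tagset sizes and dimensions are invented, and this illustrates the idea rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    # Junior task (POS) supervised at layer 1, senior task (chunking) at layer 2.
    def __init__(self, vocab=10_000, emb=64, hidden=64, n_pos=17, n_chunk=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm1 = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)
        # The senior layer sees the lower hidden states plus a dense embedding
        # of the (soft) POS label distribution.
        self.pos_emb = nn.Linear(n_pos, 16)
        self.bilstm2 = nn.LSTM(2 * hidden + 16, hidden, bidirectional=True,
                               batch_first=True)
        self.chunk_head = nn.Linear(2 * hidden, n_chunk)

    def forward(self, tokens):
        h1, _ = self.bilstm1(self.embed(tokens))
        pos_logits = self.pos_head(h1)
        h2_in = torch.cat([h1, self.pos_emb(pos_logits.softmax(-1))], dim=-1)
        h2, _ = self.bilstm2(h2_in)
        return pos_logits, self.chunk_head(h2)

model = HierarchicalTagger()
pos_logits, chunk_logits = model(torch.randint(0, 10_000, (2, 12)))
print(pos_logits.shape, chunk_logits.shape)  # (2, 12, 17) and (2, 12, 5)
```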
In addition to sogaard2016deep.", "id": 2569, "question": "What is the network architecture?", "title": "Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies" }, { "answers": [ "" ], "context": "Despite its rapid adoption by academia and industry and its recent success BIBREF0 , neural machine translation has been found largely incapable of exploiting additional context other than the current source sentence. This incapability stems from the fact that larger-context machine translation systems tend to ignore additional context, such as previous sentences and associated images. Much recent effort has gone into building novel network architectures that can better exploit additional context, however without much success BIBREF1 , BIBREF2 , BIBREF3 .", "id": 2570, "question": "Is the proposed model more sensitive than previous context-aware models too?", "title": "Context-Aware Learning for Neural Machine Translation" }, { "answers": [ "" ], "context": "A larger-context neural machine translation system extends upon the conventional neural machine translation system by incorporating the context $C$ , beyond a source sentence $X$ , when translating into a sentence $Y$ in the target language. In the case of multimodal machine translation, this additional context is an image which the source sentence $X$ describes. In the case of document-level machine translation, the additional context $C$ may include other sentences in a document in which the source sentence $X$ appears. Such a larger-context neural machine translation system consists of an encoder $f^C$ that encodes the additional context $C$ into a set of vector representations that are combined with those extracted from the source sentence $X$ by the original encoder $f^X$ . These vectors are then used by the decoder to compute the conditional distribution over the target sequence $Y$ in the autoregressive paradigm, i.e., $p(Y|X, C)$ .", "id": 2571, "question": "In what ways is the larger context ignored by the models that do consider it?", "title": "Context-Aware Learning for Neural Machine Translation" }, { "answers": [ "Stacks and joins outputs of previous frames with inputs of the current frame" ], "context": "Ever since the introduction of Deep Neural Networks (DNNs) to Automatic Speech Recognition (ASR) tasks BIBREF0 , researchers have been trying to augment the raw input features with additional inputs. We extract more representative features using the first- and second-order derivatives of the raw input features, and we utilize features from multiple neighboring frames to make use of the context information.", "id": 2572, "question": "What does recurrent deep stacking network do?", "title": "Recurrent Deep Stacking Networks for Speech Recognition" }, { "answers": [ "" ], "context": "Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phone call. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. 
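The delta features and frame splicing mentioned in the Recurrent Deep Stacking Networks entry above can be sketched in a few lines of numpy; dimensions and context width are arbitrary, and np.roll wraps at the edges, which a real front end would handle more carefully:

```python
import numpy as np

def delta(feats: np.ndarray) -> np.ndarray:
    """First-order difference along time (rows), same shape as the input."""
    padded = np.pad(feats, ((1, 1), (0, 0)), mode="edge")
    return (padded[2:] - padded[:-2]) / 2.0

def splice(feats: np.ndarray, context: int = 2) -> np.ndarray:
    """Join each frame with its +/- context neighboring frames."""
    shifts = range(context, -context - 1, -1)
    return np.concatenate([np.roll(feats, s, axis=0) for s in shifts], axis=1)

feats = np.random.randn(100, 40)   # 100 frames of 40-dim filterbank features
d1 = delta(feats)                  # first-order "delta" features
d2 = delta(d1)                     # second-order "delta-delta" features
full = splice(np.concatenate([feats, d1, d2], axis=1), context=2)
print(full.shape)                  # (100, 600): 5 spliced frames x 120 dims
```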
Moreover, training each module requires specialized labels.", "id": 2573, "question": "Does the latent dialogue state help their model?", "title": "Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning" }, { "answers": [ "" ], "context": "At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1 .", "id": 2574, "question": "Do the authors test on datasets other than bAbI?", "title": "Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning" }, { "answers": [ "reward 1 for successfully completing the task, with a discount by the number of turns, and reward 0 on failure" ], "context": "Broadly, there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks BIBREF12 , trained with supervised learning followed by reinforcement learning BIBREF13 . In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. For learning efficiency, HCNs use an external light-weight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section "Supervised learning evaluation II" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN.", "id": 2575, "question": "What is the reward model for the reinforcement learning approach?", "title": "Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning" }, { "answers": [ "No, there has been previous work on recognizing social norm violation." ], "context": "Social norms are informal understandings that govern human behavior. They serve as the basis for our beliefs and expectations about others, and are instantiated in human-human conversation through verbal and nonverbal behaviors BIBREF0 , BIBREF1 . There is a considerable body of work on modeling socially normative behavior in intelligent agent-based systems BIBREF2 , BIBREF3 , aiming to facilitate lifelike conversations with human users. Violations of such social norms and impoliteness in conversation, on the other hand, have also been demonstrated to positively affect certain aspects of the social interaction. For instance, BIBREF4 suggests impoliteness may challenge rapport in strangers, but it is also an indicator of an established relationship among friends. 
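The reward scheme in the answer just above can be written down directly; the discount value here is a hypothetical choice, not one taken from the paper:

```python
def dialog_return(success: bool, n_turns: int, gamma: float = 0.95) -> float:
    """Reward 1 for task success, discounted by dialog length; 0 on failure."""
    return gamma ** n_turns if success else 0.0

print(dialog_return(True, 6))    # shorter successful dialogs score higher
print(dialog_return(True, 12))
print(dialog_return(False, 6))   # 0.0
```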
The literature on social psychology BIBREF5 shows that the task of managing an interpersonal bond like rapport requires management of face, which, in turn, relies on behavioral expectations, which are allied with social norms early in a relationship and become more interpersonally determined as the relationship proceeds. BIBREF6 advanced the argument by proposing that, with increasing knowledge of one another, more general norms may be purposely violated in order to accommodate each other's behavioral expectations. Moreover, they proposed that this kind of social norm violation in fact reinforces the sense of in-group connectedness. Finally, in BIBREF7 , the authors discovered that temporally co-occurring smiles and social norm violations signal high interpersonal rapport. Thus, we believe that recognizing the phenomenon of social norm violation in dialog can contribute important insights into understanding the interpersonal dynamics that unfold between the interlocutors.", "id": 2576, "question": "Does this paper propose a new task that others can try to improve performance on?", "title": "Leveraging Recurrent Neural Networks for Multimodal Recognition of Social Norm Violation in Dialog" }, { "answers": [ "" ], "context": "Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) BIBREF0 , BIBREF1 . For example, the phrase “Who is the president of the United States?” might be mapped onto the query $\\lambda (x).$ $\\textsc {/government/president\\_of}$ ( $x$ , $\\textsc {USA}$ ), which, when executed against Freebase BIBREF2 , returns $\\textsc {Barack Obama}$ . By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions BIBREF3 .", "id": 2577, "question": "What knowledge base do they use?", "title": "Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge" }, { "answers": [ "3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing" ], "context": "In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary. Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1 ). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. riedel-2013-mf-universal-schema. This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries.", "id": 2578, "question": "How big is their dataset?", "title": "Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge" }, { "answers": [ "Fill-in-the-blank natural language questions" ], "context": "Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. 
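As an aside on the matrix factorization step in the open-vocabulary semantic parsing entry above: it can be sketched as one learned embedding per surface predicate and one per entity (pair), with a sigmoid of their dot product giving the probability that the predicate holds. The embeddings and entities below are random toy values, not learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
predicate_emb = {"president_of": rng.normal(size=dim)}
entity_emb = {("Barack Obama", "USA"): rng.normal(size=dim),
              ("Paris", "USA"): rng.normal(size=dim)}

def prob_true(predicate: str, pair: tuple) -> float:
    """P(predicate holds for pair) = sigmoid(theta_predicate . theta_pair)."""
    score = predicate_emb[predicate] @ entity_emb[pair]
    return float(1.0 / (1.0 + np.exp(-score)))

for pair in entity_emb:
    print(pair, round(prob_true("president_of", pair), 3))
```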
Here we show how this is done.", "id": 2579, "question": "What task do they evaluate on?", "title": "Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge" }, { "answers": [ "" ], "context": "State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence. These sentence encoder components are typically trained directly for the target task at hand. This approach can be effective on data-rich tasks and yields human performance on some narrowly-defined benchmarks BIBREF1 , BIBREF2 , but it is tenable only for the few NLP tasks with millions of examples of training data. This has prompted interest in pretraining for sentence encoding: there is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings BIBREF3 and image encoders BIBREF4 .", "id": 2580, "question": "Do some pretraining objectives perform better than others for sentence level understanding tasks?", "title": "Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling" }, { "answers": [ "" ], "context": "Large-scale knowledge bases (KBs), such as YAGO BIBREF0 , Freebase BIBREF1 and DBpedia BIBREF2 , are usually databases of triples representing the relationships between entities in the form of a fact (head entity, relation, tail entity), denoted as (h, r, t), e.g., (Melbourne, cityOf, Australia). These KBs are useful resources in many applications such as semantic searching and ranking BIBREF3 , BIBREF4 , BIBREF5 , question answering BIBREF6 , BIBREF7 and machine reading BIBREF8 . However, the KBs are still incomplete, i.e., missing a lot of valid triples BIBREF9 , BIBREF10 . Therefore, much research work has been devoted to knowledge base completion, or link prediction, to predict whether a triple (h, r, t) is valid or not BIBREF11 .", "id": 2581, "question": "Did the authors try stacking multiple convolutional layers?", "title": "A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network" }, { "answers": [ "3 feature maps for a given tuple" ], "context": "A knowledge base $\\mathcal {G}$ is a collection of valid factual triples in the form of (head entity, relation, tail entity) denoted as $(h, r, t)$ such that $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ , where $\\mathcal {E}$ is a set of entities and $\\mathcal {R}$ is a set of relations. Embedding models aim to define a score function $f$ giving an implausibility score for each triple $(h, r, t)$ such that valid triples receive lower scores than invalid triples. Table 1 presents the score functions of previous SOTA models.", "id": 2582, "question": "How many feature maps are generated for a given triple?", "title": "A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network" }, { "answers": [ "" ], "context": "We evaluate ConvKB on two benchmark datasets: WN18RR BIBREF30 and FB15k-237 BIBREF31 . WN18RR and FB15k-237 are, respectively, subsets of the two common datasets WN18 and FB15k BIBREF13 . As noted by BIBREF31 , WN18 and FB15k are easy because they contain many reversible relations. 
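A ConvKB-style score of the kind the surrounding entries discuss can be sketched directly from the definitions: stack the k-dimensional embeddings of (h, r, t) into a k x 3 matrix, slide tau filters of shape 1 x 3 over its rows (tau = 3 matches the "3 feature maps" answer above), concatenate the feature maps, and project to a single implausibility score. Sizes below are toy values:

```python
import torch
import torch.nn as nn

k, tau = 10, 3                       # embedding size, number of filters
conv = nn.Conv2d(in_channels=1, out_channels=tau, kernel_size=(1, 3))
proj = nn.Linear(tau * k, 1, bias=False)

def score(h, r, t):
    matrix = torch.stack([h, r, t], dim=1).view(1, 1, k, 3)  # k x 3 input
    feature_maps = torch.relu(conv(matrix))                  # (1, tau, k, 1)
    return proj(feature_maps.view(1, -1))  # lower score = more plausible

h, r, t = (torch.randn(k) for _ in range(3))
print(score(h, r, t))
```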
Knowing which relations are reversible allows us to easily predict the majority of test triples; e.g., state-of-the-art results on both WN18 and FB15k are obtained by using a simple reversal rule, as shown in BIBREF30 . Therefore, WN18RR and FB15k-237 were created so as not to suffer from the reversible-relation problem present in WN18 and FB15k, and the knowledge base completion task on them is more realistic. Table 2 presents the statistics of WN18RR and FB15k-237.", "id": 2583, "question": "How does the number of parameters compare to other knowledge base completion models?", "title": "A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network" } ]