{ "1912.01214": [ { "question": "which multilingual approaches do they compare with?", "answers": [ { "answer": "BIBREF19, BIBREF20", "type": "extractive" }, { "answer": "multilingual NMT (MNMT) BIBREF19", "type": "extractive" } ], "q_uid": "b6f15fb6279b82e34a5bf4828b7b5ddabfdf1d54", "evidence": [ { "raw_evidence": [ "Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin." ], "highlighted_evidence": [ "We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. ", "The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23." ] }, { "raw_evidence": [ "Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin." ], "highlighted_evidence": [ "We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16." ] } ] }, { "question": "what are the pivot-based baselines?", "answers": [ { "answer": "pivoting, pivoting$_{\\rm m}$", "type": "extractive" }, { "answer": "firstly translates a source language into the pivot language which is later translated to the target language", "type": "extractive" } ], "q_uid": "f5e6f43454332e0521a778db0b769481e23e7682", "evidence": [ { "raw_evidence": [ "Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. 
The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin.", "Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer." ], "highlighted_evidence": [ "We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16.", "Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. " ] }, { "raw_evidence": [ "We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines. For the fair comparison, the Transformer-big model with 1024 embedding/hidden units, 4096 feed-forward filter size, 6 layers and 8 heads per layer is adopted for all translation models in our experiments. We set the batch size to 2400 per batch and limit sentence length to 100 BPE tokens. We set the $\\text{attn}\\_\\text{drop}=0$ (a dropout rate on each attention head), which is favorable to the zero-shot translation and has no effect on supervised translation directions BIBREF22. For the model initialization, we use Facebook's cross-lingual pretrained models released by XLM to initialize the encoder part, and the rest parameters are initialized with xavier uniform. We employ the Adam optimizer with $\\text{lr}=0.0001$, $t_{\\text{warm}\\_\\text{up}}=4000$ and $\\text{dropout}=0.1$. At decoding time, we generate greedily with length penalty $\\alpha =1.0$.", "Pivot-based Method is a common strategy to obtain a source$\\rightarrow $target model by introducing a pivot language. 
This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve not bad performance, it always falls into a computation-expensive and parameter-vast dilemma of quadratic growth in the number of source languages, and suffers from the error propagation problem BIBREF15." ], "highlighted_evidence": [ "We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines.", "Pivot-based Method is a common strategy to obtain a source$\\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14." ] } ] }, { "question": "which datasets did they experiment with?", "answers": [ { "answer": "Europarl, MultiUN", "type": "extractive" }, { "answer": "Europarl BIBREF31, MultiUN BIBREF32", "type": "extractive" } ], "q_uid": "9a05a5f4351db75da371f7ac12eb0b03607c4b87", "evidence": [ { "raw_evidence": [ "We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation." ], "highlighted_evidence": [ "We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance." ] }, { "raw_evidence": [ "We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation." ], "highlighted_evidence": [ "We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance." ] } ] } ], "1810.08699": [ { "question": "what ner models were evaluated?", "answers": [ { "answer": "Stanford NER, spaCy 2.0 , recurrent model with a CRF top layer", "type": "extractive" }, { "answer": "Stanford NER, spaCy 2.0, recurrent model with a CRF top layer", "type": "extractive" } ], "q_uid": "18c5d366b1da8447b5404eab71f4cc658ba12e6f", "evidence": [ { "raw_evidence": [ "In this section we describe a number of experiments targeted to compare the performance of popular named entity recognition algorithms on our data. 
We trained and evaluated Stanford NER, spaCy 2.0, and a recurrent model similar to BIBREF13 , BIBREF14 that uses bidirectional LSTM cells for character-based feature extraction and CRF, described in Guillaume Genthial's Sequence Tagging with Tensorflow blog post BIBREF15 .", "Stanford NER is conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and the end, previous and next words, word shape and sequence features BIBREF16 .", "spaCy 2.0 uses a CNN-based transition system for named entity recognition. For each token, a Bloom embedding is calculated based on its lowercase form, prefix, suffix and shape, then using residual CNNs, a contextual representation of that token is extracted that potentially draws information from up to 4 tokens from each side BIBREF17 . Each update of the transition system's configuration is a classification task that uses the contextual representation of the top token on the stack, preceding and succeeding tokens, first two tokens of the buffer, and their leftmost, second leftmost, rightmost, second rightmost children. The valid transition with the highest score is applied to the system. This approach reportedly performs within 1% of the current state-of-the-art for English . In our experiments, we tried out 50-, 100-, 200- and 300-dimensional pre-trained GloVe embeddings. Due to time constraints, we did not tune the rest of hyperparameters and used their default values.", "The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines. The distinctive feature of this approach is the way contextual word embeddings are formed. For each token separately, to capture its word shape features, character-based representation is extracted using a bidirectional LSTM BIBREF18 . This representation gets concatenated with a distributional word vector such as GloVe, forming an intermediate word embedding. Using another bidirectional LSTM cell on these intermediate word embeddings, the contextual representation of tokens is obtained (Figure FIGREF17 ). Finally, a CRF layer labels the sequence of these contextual representations. In our experiments, we used Guillaume Genthial's implementation of the algorithm. We set the size of character-based biLSTM to 100 and the size of second biLSTM network to 300." ], "highlighted_evidence": [ "In this section we describe a number of experiments targeted to compare the performance of popular named entity recognition algorithms on our data. We trained and evaluated Stanford NER, spaCy 2.0, and a recurrent model similar to BIBREF13 , BIBREF14 that uses bidirectional LSTM cells for character-based feature extraction and CRF, described in Guillaume Genthial's Sequence Tagging with Tensorflow blog post BIBREF15 .", "Stanford NER is conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and the end, previous and next words, word shape and sequence features BIBREF16 .", "spaCy 2.0 uses a CNN-based transition system for named entity recognition.", "The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines. 
" ] }, { "raw_evidence": [ "Stanford NER is conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and the end, previous and next words, word shape and sequence features BIBREF16 .", "spaCy 2.0 uses a CNN-based transition system for named entity recognition. For each token, a Bloom embedding is calculated based on its lowercase form, prefix, suffix and shape, then using residual CNNs, a contextual representation of that token is extracted that potentially draws information from up to 4 tokens from each side BIBREF17 . Each update of the transition system's configuration is a classification task that uses the contextual representation of the top token on the stack, preceding and succeeding tokens, first two tokens of the buffer, and their leftmost, second leftmost, rightmost, second rightmost children. The valid transition with the highest score is applied to the system. This approach reportedly performs within 1% of the current state-of-the-art for English . In our experiments, we tried out 50-, 100-, 200- and 300-dimensional pre-trained GloVe embeddings. Due to time constraints, we did not tune the rest of hyperparameters and used their default values.", "The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines. The distinctive feature of this approach is the way contextual word embeddings are formed. For each token separately, to capture its word shape features, character-based representation is extracted using a bidirectional LSTM BIBREF18 . This representation gets concatenated with a distributional word vector such as GloVe, forming an intermediate word embedding. Using another bidirectional LSTM cell on these intermediate word embeddings, the contextual representation of tokens is obtained (Figure FIGREF17 ). Finally, a CRF layer labels the sequence of these contextual representations. In our experiments, we used Guillaume Genthial's implementation of the algorithm. We set the size of character-based biLSTM to 100 and the size of second biLSTM network to 300." ], "highlighted_evidence": [ "Stanford NER is conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and the end, previous and next words, word shape and sequence features BIBREF16 .", "spaCy 2.0 uses a CNN-based transition system for named entity recognition.", "The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines." ] } ] }, { "question": "what is the source of the news sentences?", "answers": [ { "answer": "ilur.am", "type": "extractive" }, { "answer": "links between Wikipedia articles to generate sequences of named-entity annotated tokens", "type": "extractive" } ], "q_uid": "b5e4866f0685299f1d7af267bbcc4afe2aab806f", "evidence": [ { "raw_evidence": [ "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. This dataset is comparable in size with the test sets of other languages (Table TABREF10 ). Included sentences are from political, sports, local and world news (Figures FIGREF8 , FIGREF9 ), covering the period between August 2012 and July 2018. 
The dataset provides annotations for 3 popular named entity classes: people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with IOB tagging scheme. Tokens and sentences were segmented according to the UD standards for the Armenian language BIBREF11 ." ], "highlighted_evidence": [ "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am." ] }, { "raw_evidence": [ "We used Sysoev and Andrianov's modification of the Nothman et al. approach to automatically generate data for training a named entity recognizer. This approach uses links between Wikipedia articles to generate sequences of named-entity annotated tokens." ], "highlighted_evidence": [ "This approach uses links between Wikipedia articles to generate sequences of named-entity annotated tokens." ] } ] }, { "question": "did they use a crowdsourcing platform for manual annotations?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "1f085b9bb7bfd0d6c8cba1a9d73f08fcf2da7590", "evidence": [ { "raw_evidence": [ "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. This dataset is comparable in size with the test sets of other languages (Table TABREF10 ). Included sentences are from political, sports, local and world news (Figures FIGREF8 , FIGREF9 ), covering the period between August 2012 and July 2018. The dataset provides annotations for 3 popular named entity classes: people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with IOB tagging scheme. Tokens and sentences were segmented according to the UD standards for the Armenian language BIBREF11 .", "During annotation, we generally relied on categories and guidelines assembled by BBN Technologies for TREC 2002 question answering track. Only named entities corresponding to BBN's person name category were tagged as PER. Those include proper names of people, including fictional people, first and last names, family names, unique nicknames. Similarly, organization name categories, including company names, government agencies, educational and academic institutions, sports clubs, musical ensembles and other groups, hospitals, museums, newspaper names, were marked as ORG. However, unlike BBN, we did not mark adjectival forms of organization names as named entities. BBN's gpe name, facility name, location name categories were combined and annotated as LOC." ], "highlighted_evidence": [ "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am.", "During annotation, we generally relied on categories and guidelines assembled by BBN Technologies for TREC 2002 question answering track." ] }, { "raw_evidence": [ "Instead of manually classifying Wikipedia articles as it was done in Nothman et al., we developed a rule-based classifier that used an article's Wikidata instance of and subclass of attributes to find the corresponding named entity type.", "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. 
This dataset is comparable in size with the test sets of other languages (Table TABREF10 ). Included sentences are from political, sports, local and world news (Figures FIGREF8 , FIGREF9 ), covering the period between August 2012 and July 2018. The dataset provides annotations for 3 popular named entity classes: people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with IOB tagging scheme. Tokens and sentences were segmented according to the UD standards for the Armenian language BIBREF11 ." ], "highlighted_evidence": [ "Instead of manually classifying Wikipedia articles as it was done in Nothman et al., we developed a rule-based classifier that used an article's Wikidata instance of and subclass of attributes to find the corresponding named entity type.", "In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am." ] } ] } ], "1609.00425": [ { "question": "what are the topics pulled from Reddit?", "answers": [ { "answer": "politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. ", "type": "extractive" }, { "answer": "training data has posts from politics, business, science and other popular topics; the trained model is applied to millions of unannotated posts on all of Reddit", "type": "abstractive" } ], "q_uid": "b6ae8e10c6a0d34c834f18f66ab730b670fb528c", "evidence": [ { "raw_evidence": [ "Data collection. Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts." ], "highlighted_evidence": [ "To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage." ] }, { "raw_evidence": [ "Data collection. Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts.", "We now apply our dogmatism classifier to a larger dataset of posts, examining how dogmatic language shapes the Reddit community. 
Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic). We then use these dogmatism annotations to address four research questions." ], "highlighted_evidence": [ "To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage.", "Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic)." ] } ] }, { "question": "What predictive model do they build?", "answers": [ { "answer": "logistic regression models", "type": "extractive" }, { "answer": "logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features.", "type": "extractive" } ], "q_uid": "a87a009c242d57c51fc94fe312af5e02070f898b", "evidence": [ { "raw_evidence": [ "We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5)." ], "highlighted_evidence": [ "We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features." ] }, { "raw_evidence": [ "We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5)." ], "highlighted_evidence": [ "We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features." ] } ] } ], "1801.05147": [ { "question": "What crowdsourcing platform is used?", "answers": [ { "answer": "They did not use any platform, instead they hired undergraduate students to do the annotation.", "type": "abstractive" } ], "q_uid": "2df4a045a9cd7b44874340b6fdf9308d3c55327a", "evidence": [ { "raw_evidence": [ "With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. 
Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences." ], "highlighted_evidence": [ "With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences." ] } ] } ], "1811.00383": [ { "question": "How do they match words before reordering them?", "answers": [ { "answer": "CFILT-preorder system", "type": "extractive" } ], "q_uid": "a313e98994fc039a82aa2447c411dda92c65a470", "evidence": [ { "raw_evidence": [ "We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 . These Hindi-tuned rules have been found to improve reordering for many English to Indian language pairs BIBREF29 ." ], "highlighted_evidence": [ "We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 ." ] } ] }, { "question": "On how many language pairs do they show that preordering assisting language sentences helps translation quality?", "answers": [ { "answer": "5", "type": "abstractive" }, { "answer": "Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks.", "type": "extractive" } ], "q_uid": "37861be6aecd9242c4fdccdfcd06e48f3f1f8f81", "evidence": [ { "raw_evidence": [ "We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order." ], "highlighted_evidence": [ "Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. " ] }, { "raw_evidence": [ "Languages", "We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order." ], "highlighted_evidence": [ "Languages\nWe experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. 
Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order." ] } ] }, { "question": "Which dataset(s) do they experiment with?", "answers": [ { "answer": "IITB English-Hindi parallel corpus BIBREF22, ILCI English-Hindi parallel corpus", "type": "extractive" }, { "answer": "IITB English-Hindi parallel corpus, ILCI English-Hindi parallel corpus", "type": "extractive" } ], "q_uid": "7e62a53823aba08bc26b2812db016f5ce6159565", "evidence": [ { "raw_evidence": [ "Datasets", "For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set." ], "highlighted_evidence": [ "Datasets\nFor training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set." ] }, { "raw_evidence": [ "For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set." ], "highlighted_evidence": [ "For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). " ] } ] } ], "1909.09067": [ { "question": "Which information about text structure is included in the corpus?", "answers": [ { "answer": "paragraphs, lines, Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation", "type": "extractive" }, { "answer": "paragraph, lines, textspan element (paragraph segmentation, line segmentation, Information on physical page segmentation(for PDF only))", "type": "abstractive" } ], "q_uid": "9eabb54c2408dac24f00f92cf1061258c7ea2e1a", "evidence": [ { "raw_evidence": [ "The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. 
The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).", "Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer" ], "highlighted_evidence": [ "The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information.", "Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer" ] }, { "raw_evidence": [ "The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).", "Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer" ], "highlighted_evidence": [ "The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. 
", "Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer" ] } ] }, { "question": "Which information about typography is included in the corpus?", "answers": [ { "answer": "font type, font style, Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page", "type": "extractive" }, { "answer": "font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer, A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer", "type": "extractive" } ], "q_uid": "3d013f15796ae7fed5272183a166c45f16e24e39", "evidence": [ { "raw_evidence": [ "The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).", "Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)" ], "highlighted_evidence": [ "The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information.", "Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)" ] }, { "raw_evidence": [ "Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)", "A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer", "For the webpages, a static dump of all documents was created. Following this, the documents were manually checked to verify the language. 
The main content was subsequently extracted, i.e., HTML markup and boilerplate removed using the Beautiful Soup library for Python. Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained. Similarly, image information (content, position, and dimensions of an image) was preserved." ], "highlighted_evidence": [ "Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)", "A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer", "Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained." ] } ] } ], "1704.06194": [ { "question": "What they use in their propsoed framework?", "answers": [ { "answer": "break the relation names into word sequences, relation-level and word-level relation representations, bidirectional LSTMs (BiLSTMs), residual learning method", "type": "extractive" }, { "answer": "break the relation names into word sequences for question-relation matching, build both relation-level and word-level relation representations, use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations, residual learning method for sequence matching, a simple KBQA implementation composed of two-step relation detection", "type": "extractive" } ], "q_uid": "d3aa0449708cc861a51551b128d73e11d62207d2", "evidence": [ { "raw_evidence": [ "This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching." ], "highlighted_evidence": [ "First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching." ] }, { "raw_evidence": [ "This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. 
Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching.", "In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers." ], "highlighted_evidence": [ "This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching.\n\nIn order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers." 
] } ] }, { "question": "What does KBQA abbreviate for", "answers": [ { "answer": "Knowledge Base Question Answering", "type": "extractive" }, { "answer": "Knowledge Base Question Answering ", "type": "extractive" } ], "q_uid": "cfbec1ef032ac968560a7c76dec70faf1269b27c", "evidence": [ { "raw_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to." ], "highlighted_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5" ] }, { "raw_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to." ], "highlighted_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . " ] } ] }, { "question": "What is te core component for KBQA?", "answers": [ { "answer": "answer questions by obtaining information from KB tuples ", "type": "extractive" }, { "answer": "hierarchical matching between questions and relations with residual learning", "type": "extractive" } ], "q_uid": "c0e341c4d2253eb42c8840381b082aae274eddad", "evidence": [ { "raw_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. 
The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to." ], "highlighted_evidence": [ "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ." ] }, { "raw_evidence": [ "Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks." ], "highlighted_evidence": [ "Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks." ] } ] } ], "1909.00512": [ { "question": "What experiments are proposed to test that upper layers produce context-specific embeddings?", "answers": [ { "answer": "They measure self-similarity, intra-sentence similarity and maximum explainable variance of the embeddings in the upper layers.", "type": "abstractive" }, { "answer": "They plot the average cosine similarity between uniformly random words increases exponentially from layers 8 through 12. \nThey plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2 and shown that the higher layer produces more context-specific embeddings.\nThey plot that word representations in a sentence become more context-specific in upper layers, they drift away from one another.", "type": "abstractive" } ], "q_uid": "1ec152119cf756b16191b236c85522afeed11f59", "evidence": [ { "raw_evidence": [ "We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance." ], "highlighted_evidence": [ "We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance." ] }, { "raw_evidence": [ "Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, that the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.", "In all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. This finding makes intuitive sense. In image classification models, lower layers recognize more generic features such as edges while upper layers recognize more class-specific features BIBREF19. Similarly, upper layers of LSTMs trained on NLP tasks learn more task-specific representations BIBREF4. 
Therefore, it follows that upper layers of neural language models learn more context-specific representations, so as to predict the next word for a given context more accurately. Of all three models, representations in GPT-2 are the most context-specific, with those in GPT-2's last layer being almost maximally context-specific.", "As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer.", "As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). This suggests a more nuanced contextualization than in ELMo, with BERT recognizing that although the surrounding sentence informs a word's meaning, two words in the same sentence do not necessarily have a similar meaning because they share the same context." ], "highlighted_evidence": [ "Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, that the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.\n\nIn all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. ", "As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer.", "As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). 
" ] } ] } ], "2003.03106": [ { "question": "What are the other algorithms tested?", "answers": [ { "answer": "NER model, CRF classifier trained with sklearn-crfsuite, classifier has been developed that consists of regular-expressions and dictionary look-up", "type": "extractive" }, { "answer": "As the simplest baseline, a sensitive data recogniser and classifier, Conditional Random Fields (CRF), spaCy ", "type": "extractive" } ], "q_uid": "6b53e1f46ae4ba9b75117fc6e593abded89366be", "evidence": [ { "raw_evidence": [ "Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:", "spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent.", "As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estad\u00edstica (INE; Spanish Statistical Office)." ], "highlighted_evidence": [ "Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false.", "spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels.", "As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. 
For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estad\u00edstica (INE; Spanish Statistical Office)." ] }, { "raw_evidence": [ "Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets.", "As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estad\u00edstica (INE; Spanish Statistical Office).", "Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:", "spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent." ], "highlighted_evidence": [ "Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets.", "As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. 
The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estad\u00edstica (INE; Spanish Statistical Office).", "Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature.", "spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels." ] } ] }, { "question": "Does BERT reach the best performance among all the algorithms compared?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "c0bee6539eb6956a7347daa9d2419b367bd02064", "evidence": [ { "raw_evidence": [ "In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:", "With regard to the winner of the MEDDOCAN shared task, the BERT-based model has not improved the scores obtained by neither the domain-dependent (S3) nor the domain-independent (S2) NLNDE model. However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies." ], "highlighted_evidence": [ "In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. ", "However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies." ] }, { "raw_evidence": [ "The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data. 
Interestingly, the BERT-based model obtains a remarkably higher recall than the other systems. High recall is a desirable outcome because, when anonymising sensible documents, the accidental leak of sensible data is likely to be more dangerous than the unintended over-obfuscation of non-sensitive text.", "The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement." ], "highlighted_evidence": [ "The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data.", "The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement." ] } ] }, { "question": "What are the clinical datasets used in the paper?", "answers": [ { "answer": "MEDDOCAN, NUBes-PHI", "type": "extractive" }, { "answer": "MEDDOCAN, NUBes ", "type": "extractive" } ], "q_uid": "3de0487276bb5961586acc6e9f82934ef8cb668c", "evidence": [ { "raw_evidence": [ "Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14.", "NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information. Before being published, sensitive information had to be manually annotated and replaced for the corpus to be safely shared. In this article, we work with the NUBes version prior to its anonymisation, that is, with the manual annotations of sensitive information. It follows that the version we work with is not publicly available and, due to contractual restrictions, we cannot reveal the provenance of the data. In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').", "The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists. In this regard, the MEDDOCAN evaluation scenario could be said to be somewhat far from the real use case the technology developed for the shared task is supposed to be applied in. However, at the moment it also provides the only public means for a rigorous comparison between systems for sensitive health information detection in Spanish texts." ], "highlighted_evidence": [ "Two datasets are exploited in this article. 
Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format.", "NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information.", "In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').", "The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists" ] }, { "raw_evidence": [ "The anonymisation systems based on NLP techniques perform reasonably well, but are far from perfect. Depending on the difficulty posed by each dataset or the amount of available data for training machine learning models, the performance achieved by these methods is not enough to fully rely on them in certain situations BIBREF0. However, in the last two years, the NLP community has reached an important milestone thanks to the appearance of the so-called Transformers neural network architectures BIBREF1. In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish. In these experiments, we compare the performance of BERT with other machine-learning-based systems, some of which use language-specific features. Our aim is to evaluate how good a BERT-based model performs without language nor domain specialisation apart from the training data labelled for the task at hand." ], "highlighted_evidence": [ " In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish." ] } ] } ], "1708.01464": [ { "question": "how is model compactness measured?", "answers": [ { "answer": "Using file size on disk", "type": "abstractive" }, { "answer": "15.4 MB", "type": "extractive" } ], "q_uid": "113d791df6fcfc9cecfb7b1bebaf32cc2e4402ab", "evidence": [ { "raw_evidence": [ "Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB." ], "highlighted_evidence": [ "Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB." ] }, { "raw_evidence": [ "Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB." ], "highlighted_evidence": [ "Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB." 
] } ] }, { "question": "what was the baseline?", "answers": [ { "answer": "system presented by deri2016grapheme", "type": "extractive" }, { "answer": "wFST", "type": "extractive" } ], "q_uid": "0752d71a0a1f73b3482a888313622ce9e9870d6e", "evidence": [ { "raw_evidence": [ "Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:" ], "highlighted_evidence": [ "Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST." ] }, { "raw_evidence": [ "Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:" ], "highlighted_evidence": [ "Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. " ] } ] }, { "question": "what evaluation metrics were used?", "answers": [ { "answer": "Phoneme Error Rate (PER), Word Error Rate (WER), Word Error Rate 100 (WER 100)", "type": "extractive" }, { "answer": "PER, WER, WER 100", "type": "extractive" } ], "q_uid": "55c8f7acbfd4f5cde634aaecd775b3bb32e9ffa3", "evidence": [ { "raw_evidence": [ "We use the following three evaluation metrics:", "Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.", "Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.", "Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system." ], "highlighted_evidence": [ "We use the following three evaluation metrics:\n\nPhoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.\n\nWord Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.\n\nWord Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system." ] }, { "raw_evidence": [ "We use the following three evaluation metrics:", "Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.", "Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.", "Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.", "In system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13 ." 
], "highlighted_evidence": [ "We use the following three evaluation metrics:\n\nPhoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.\n\nWord Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.\n\nWord Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.\n\nIn system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13 ." ] } ] }, { "question": "what datasets did they use?", "answers": [ { "answer": "the Carnegie Mellon Pronouncing Dictionary BIBREF12, the multilingual pronunciation corpus collected by deri2016grapheme , ranscriptions extracted from Wiktionary", "type": "extractive" }, { "answer": "multilingual pronunciation corpus collected by deri2016grapheme", "type": "extractive" } ], "q_uid": "4eaf9787f51cd7cdc45eb85cf223d752328c6ee4", "evidence": [ { "raw_evidence": [ "In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling\u2013pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10 .", "In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4 . When a transcription contains a phoneme that is not in its language's inventory in Phoible, that phoneme is replaced by the phoneme with the most similar articulatory features that is in the language's inventory. Sometimes this cleaning algorithm works well: in the German examples in Table TABREF11 , the raw German symbols and are both converted to . This is useful because the in Ansbach and the in Kaninchen are instances of the same phoneme, so their phonemic representations should use the same symbol. However, the cleaning algorithm can also have negative effects on the data quality. For example, the phoneme is not present in the Phoible inventory for German, but it is used in several German transcriptions in the corpus. The cleaning algorithm converts to in all German transcriptions, whereas would be a more reasonable guess. The cleaning algorithm also removes most suprasegmentals, even though these are often an important part of a language's phonology. Developing a more sophisticated procedure for cleaning pronunciation data is a direction for future work, but in this paper we use the corpus's provided cleaned transcriptions in order to ease comparison to previous results." ], "highlighted_evidence": [ "In order to train a neural g2p system, one needs a large quantity of pronunciation data. 
A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling\u2013pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10", "In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4 . " ] }, { "raw_evidence": [ "In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling\u2013pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10 ." ], "highlighted_evidence": [ " Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments." ] } ] } ], "2002.03407": [ { "question": "Who were the human evaluators used?", "answers": [ { "answer": "20 evaluators were recruited from our institution and asked to each perform 20 annotations", "type": "extractive" }, { "answer": "20 annotatos from author's institution", "type": "abstractive" } ], "q_uid": "31735ec3d83c40b79d11df5c34154849aeb3fb47", "evidence": [ { "raw_evidence": [ "Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance." ], "highlighted_evidence": [ ". 
While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent." ] }, { "raw_evidence": [ "Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance." ], "highlighted_evidence": [ "20 evaluators were recruited from our institution and asked to each perform 20 annotations." ] } ] }, { "question": "Is the template-based model realistic? ", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "10d450960907091f13e0be55f40bcb96f44dd074", "evidence": [ { "raw_evidence": [ "Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.", "To validate our next set of hypothesises (H3, H4. H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). 
Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5." ], "highlighted_evidence": [ "Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.", "Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20)." ] }, { "raw_evidence": [ "Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.", "Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance." 
], "highlighted_evidence": [ "Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. ", "This encourages us to enhance our template model and explore templates not so tailored to our data.\n\nHuman Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model inc", "This encourages us to enhance our template model and explore templates not so tailored to our data." ] } ] }, { "question": "Is the student reflection data very different from the newspaper data? ", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "b5608076d91450b0d295ad14c3e3a90d7e168d0e", "evidence": [ { "raw_evidence": [ "To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods." ], "highlighted_evidence": [ " While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small." ] }, { "raw_evidence": [ "To our knowledge, training such neural abstractive summarization models in low resource domains using domain transfer has not been thoroughly explored on domains different than news. For example, BIBREF4 reported the results of training on CNN/DM data while evaluating on DUC data without any tuning. Note that these two datasets are both in the news domain, and both consist of well written, structured documents. The domain transfer experiments of BIBREF1 similarly used two different news summarization datasets (CNN/DM and NYT). Our work differs in several ways from these two prior domain transfer efforts. First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. In contrast, we do the training in two consecutive phases, pretraining and fine tuning. Finally, BIBREF1 reported that while training with domain transfer outperformed training only on out-of-domain data, it was not able to beat training only on in-domain data. This is likely because their in and out-of-domain data sizes are comparable, unlike in our case of scarce in-domain data." ], "highlighted_evidence": [ " First, our experiments involve two entirely different domains: news and student reflections. 
Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. " ] } ] }, { "question": "What is the recent abstractive summarization method in this paper?", "answers": [ { "answer": "pointer networks with coverage mechanism (PG-net)", "type": "extractive" }, { "answer": " pointer networks with coverage mechanism (PG-net)BIBREF0", "type": "extractive" } ], "q_uid": "c21b87c97d1afac85ece2450ee76d01c946de668", "evidence": [ { "raw_evidence": [ "To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis." ], "highlighted_evidence": [ "To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. " ] }, { "raw_evidence": [ "To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis." ], "highlighted_evidence": [ "To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section)." 
] } ] } ], "1909.11687": [ { "question": "Why are prior knowledge distillation techniques models are ineffective in producing student models with vocabularies different from the original teacher models? ", "answers": [ { "answer": "While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes.", "type": "extractive" }, { "answer": "distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes.", "type": "extractive" } ], "q_uid": "d087539e6a38c42f0a521ff2173ef42c0733878e", "evidence": [ { "raw_evidence": [ "However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters. For instance, the embedding table of the BERTBASE model, comprising over 30K WordPiece tokens BIBREF14, accounts for over $21\\%$ of the model size. While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes." ], "highlighted_evidence": [ "However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters. For instance, the embedding table of the BERTBASE model, comprising over 30K WordPiece tokens BIBREF14, accounts for over $21\\%$ of the model size. While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes." ] }, { "raw_evidence": [ "However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters. For instance, the embedding table of the BERTBASE model, comprising over 30K WordPiece tokens BIBREF14, accounts for over $21\\%$ of the model size. While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes." ], "highlighted_evidence": [ "While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes." 
] } ] } ], "1605.06083": [ { "question": "What is the size of the dataset?", "answers": [ { "answer": "30,000", "type": "extractive" }, { "answer": "collection of over 30,000 images with 5 crowdsourced descriptions each", "type": "extractive" } ], "q_uid": "7561a968470a8936d10e1ba722d2f38b5a9a4d38", "evidence": [ { "raw_evidence": [ "The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):", "This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases." ], "highlighted_evidence": [ "The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ).", "This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases." ] }, { "raw_evidence": [ "The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):" ], "highlighted_evidence": [ "The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each." ] } ] }, { "question": "Which methods are considered to find examples of biases and unwarranted inferences??", "answers": [ { "answer": "spot patterns by just looking at a collection of images, tag all descriptions with part-of-speech information, I applied Louvain clustering", "type": "extractive" }, { "answer": "Looking for adjectives marking the noun \"baby\" and also looking for most-common adjectives related to certain nouns using POS-tagging", "type": "abstractive" } ], "q_uid": "6d4400f45bd97b812e946b8a682b018826e841f1", "evidence": [ { "raw_evidence": [ "It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. 
Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead?" ], "highlighted_evidence": [ "It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 .", "Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities." ] }, { "raw_evidence": [ "We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators. Note that this upper bound lies somewhere between 20% (one description) and 100% (5 descriptions). Figure TABREF22 presents count data for the ethnic marking of babies. It includes two false positives (talking about a white baby stroller rather than a white baby). In the Asian group there is an additional complication: sometimes the mother gets marked rather than the baby. E.g. An Asian woman holds a baby girl. I have counted these occurrences as well.", "One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?", "It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. 
In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead?" ], "highlighted_evidence": [ "We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators.", "One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?", "Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions." ] } ] }, { "question": "What biases are found in the dataset?", "answers": [ { "answer": "Ethnic bias", "type": "abstractive" }, { "answer": "adjectives are used to create \u201cmore narrow labels [or subtypes] for individuals who do not fit with general social category expectations\u201d", "type": "extractive" } ], "q_uid": "26c2e1eb12143d985e4fb50543cf0d1eb4395e67", "evidence": [ { "raw_evidence": [ "Ethnicity/race", "One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?", "The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased." ], "highlighted_evidence": [ "Ethnicity/race\nOne interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?\n\n", "The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased." 
] }, { "raw_evidence": [ "One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create \u201cmore narrow labels [or subtypes] for individuals who do not fit with general social category expectations\u201d (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough." ], "highlighted_evidence": [ "One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse).", "Beukeboom also notes that adjectives are used to create \u201cmore narrow labels [or subtypes] for individuals who do not fit with general social category expectations\u201d (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough." ] } ] } ], "1804.05918": [ { "question": "How much does this model improve state-of-the-art?", "answers": [ { "answer": "the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 )., full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent., Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. ", "type": "extractive" }, { "answer": "1 percent", "type": "extractive" } ], "q_uid": "bd5bd1765362c2d972a762ca12675108754aa437", "evidence": [ { "raw_evidence": [ "The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. 
In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.", "As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously." ], "highlighted_evidence": [ "the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.", "Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. " ] }, { "raw_evidence": [ "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent." ], "highlighted_evidence": [ "In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent." ] } ] } ], "1809.04267": [ { "question": "Where is a question generation model used?", "answers": [ { "answer": "The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. 
", "type": "extractive" }, { "answer": "framework consisting of both a question answering model and a question generation model", "type": "extractive" } ], "q_uid": "d9b6c61fc6d29ad399d27b931b6cb7b1117b314a", "evidence": [ { "raw_evidence": [ "We implement a framework consisting of both a question answering model and a question generation model, both of which take the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. The question answering model gives each candidate answer a score by measuring the semantic relevance between representation and the candidate answer representation in vector space. The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. We implement an MRC model BiDAF BIBREF10 as a baseline for the proposed dataset. To test the scalability of our approach in leveraging external KBs, we use both manually created and automatically extracted KBs, including Freebase BIBREF11 , ProBase BIBREF12 , NELL BIBREF13 and Reverb BIBREF14 . Experiments show that incorporating evidence from external KBs improves both the matching-based and question generation-based approaches. Qualitative analysis shows the advantages and limitations of our approaches, as well as the remaining challenges." ], "highlighted_evidence": [ "The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. " ] }, { "raw_evidence": [ "We implement a framework consisting of both a question answering model and a question generation model, both of which take the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. The question answering model gives each candidate answer a score by measuring the semantic relevance between representation and the candidate answer representation in vector space. The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. We implement an MRC model BiDAF BIBREF10 as a baseline for the proposed dataset. To test the scalability of our approach in leveraging external KBs, we use both manually created and automatically extracted KBs, including Freebase BIBREF11 , ProBase BIBREF12 , NELL BIBREF13 and Reverb BIBREF14 . Experiments show that incorporating evidence from external KBs improves both the matching-based and question generation-based approaches. Qualitative analysis shows the advantages and limitations of our approaches, as well as the remaining challenges." ], "highlighted_evidence": [ "We implement a framework consisting of both a question answering model and a question generation model, both of which take the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. ", "he question answering model gives each candidate answer a score by measuring the semantic relevance between representation and the candidate answer representation in vector space. 
The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. " ] } ] } ], "1901.05287": [ { "question": "Were any of these tasks evaluated in any previous work?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "d27438b11bc70e706431dda0af2b1c0b0d209f96", "evidence": [ { "raw_evidence": [ "Tables 1 , 2 and 3 show the results. All cases exhibit high scores\u2014in the vast majority of the cases substantially higher than reported in previous work. As discussed above, the results are not directly comparable to previous work: the BERT models are trained on different (and larger) data, are allowed to access the suffix of the sentence in addition to its prefix, and are evaluated on somewhat different data due to discarding OOV items. Still, taken together, the high performance numbers indicate that the purely attention-based BERT models are likely capable of capturing the same kind of syntactic regularities that LSTM-based models are capable of capturing, at least as well as the LSTM models and probably better.", "Recent work examines the extent to which RNN-based models capture syntax-sensitive phenomena that are traditionally taken as evidence for the existence in hierarchical structure. In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring wikipedia sentences. BIBREF2 also consider subject-verb agreement, but in a \u201ccolorless green ideas\u201d setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting." ], "highlighted_evidence": [ "All cases exhibit high scores\u2014in the vast majority of the cases substantially higher than reported in previous work.", "In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring wikipedia sentences. ", "BIBREF2 also consider subject-verb agreement, but in a \u201ccolorless green ideas\u201d setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. ", "BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting." ] }, { "raw_evidence": [ "Recent work examines the extent to which RNN-based models capture syntax-sensitive phenomena that are traditionally taken as evidence for the existence in hierarchical structure. In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring wikipedia sentences. 
BIBREF2 also consider subject-verb agreement, but in a \u201ccolorless green ideas\u201d setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting.", "I use the stimuli provided by BIBREF1 , BIBREF2 , BIBREF3 , but change the experimental protocol to adapt it to the bidirectional nature of the BERT model. This requires discarding some of the stimuli, as described below. Thus, the numbers are not strictly comparable to those reported in previous work." ], "highlighted_evidence": [ "In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring wikipedia sentences. BIBREF2 also consider subject-verb agreement, but in a \u201ccolorless green ideas\u201d setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting.", "I use the stimuli provided by BIBREF1 , BIBREF2 , BIBREF3 , but change the experimental protocol to adapt it to the bidirectional nature of the BERT model. This requires discarding some of the stimuli, as described below. Thus, the numbers are not strictly comparable to those reported in previous work." ] } ] } ], "1612.06685": [ { "question": "Do they build a model to automatically detect demographic, lingustic or psycological dimensons of people?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "8d4ac4afbf5b14f412171729ceb5e822afcfa3f4", "evidence": [ { "raw_evidence": [ "LIWC. In addition to individual words, we can also create maps for word categories that reflect a certain psycholinguistic or semantic property. Several lexical resources, such as Roget or Linguistic Inquiry and Word Count BIBREF9 , group words into categories. Examples of such categories are Money, which includes words such as remuneration, dollar, and payment; or Positive feelings with words such as happy, cheerful, and celebration. Using the distribution of the individual words in a category, we can compile distributions for the entire category, and therefore generate maps for these word categories. For instance, figure FIGREF8 shows the maps created for two categories: Positive Feelings and Money. The maps are not surprising, and interestingly they also reflect an inverse correlation between Money and Positive Feelings ." ], "highlighted_evidence": [ "Using the distribution of the individual words in a category, we can compile distributions for the entire category, and therefore generate maps for these word categories. 
" ] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "Which demographic dimensions of people do they obtain?", "answers": [ { "answer": "occupation, industry, profile information, language use, gender ", "type": "extractive" }, { "answer": "density of users, gender distribution", "type": "extractive" } ], "q_uid": "3c93894c4baf49deacc6ed2a14ef5e0f13b7d96f", "evidence": [ { "raw_evidence": [ "We first started by collecting a set of profiles of bloggers that met our location specifications by searching individual states on the profile finder on http://www.blogger.com. Starting with this list, we can locate the profile page for a user, and subsequently extract additional information, which includes fields such as name, email, occupation, industry, and so forth. It is important to note that the profile finder only identifies users that have an exact match to the location specified in the query; we thus built and ran queries that used both state abbreviations (e.g., TX, AL), as well as the states' full names (e.g., Texas, Alabama).", "We also generate two maps that delineate the gender distribution in the dataset. Overall, the blogging world seems to be dominated by females: out of 153,209 users who self-reported their gender, only 52,725 are men and 100,484 are women. Figures FIGREF1 and FIGREF1 show the percentage of male and female bloggers in each of the 50 states. As seen in this figure, there are more than the average number of male bloggers in states such as California and New York, whereas Utah and Idaho have a higher percentage of women bloggers.", "Our dataset provides mappings between location, profile information, and language use, which we can leverage to generate maps that reflect demographic, linguistic, and psycholinguistic properties of the population represented in the dataset." ], "highlighted_evidence": [ "Starting with this list, we can locate the profile page for a user, and subsequently extract additional information, which includes fields such as name, email, occupation, industry, and so forth.", "We also generate two maps that delineate the gender distribution in the dataset.", "Our dataset provides mappings between location, profile information, and language use, which we can leverage to generate maps that reflect demographic, linguistic, and psycholinguistic properties of the population represented in the dataset." ] }, { "raw_evidence": [ "The first map we generate depicts the distribution of the bloggers in our dataset across the U.S. Figure FIGREF1 shows the density of users in our dataset in each of the 50 states. For instance, the densest state was found to be California with 11,701 users. The second densest is Texas, with 9,252 users, followed by New York, with 9,136. The state with the fewest bloggers is Delaware with 1,217 users. Not surprisingly, this distribution correlates well with the population of these states, with a Spearman's rank correlation INLINEFORM0 of 0.91 and a p-value INLINEFORM1 0.0001, and is very similar to the one reported in Lin and Halavais Lin04.", "We also generate two maps that delineate the gender distribution in the dataset. Overall, the blogging world seems to be dominated by females: out of 153,209 users who self-reported their gender, only 52,725 are men and 100,484 are women. Figures FIGREF1 and FIGREF1 show the percentage of male and female bloggers in each of the 50 states. 
As seen in this figure, there are more than the average number of male bloggers in states such as California and New York, whereas Utah and Idaho have a higher percentage of women bloggers." ], "highlighted_evidence": [ "Figure FIGREF1 shows the density of users in our dataset in each of the 50 states.", "We also generate two maps that delineate the gender distribution in the dataset. " ] } ] }, { "question": "How do they obtain psychological dimensions of people?", "answers": [ { "answer": "using the Meaning Extraction Method", "type": "extractive" } ], "q_uid": "07d15501a599bae7eb4a9ead63e9df3d55b3dc35", "evidence": [ { "raw_evidence": [ "Values. We also measure the usage of words related to people's core values as reported by Boyd et al. boyd2015. The sets of words, or themes, were excavated using the Meaning Extraction Method (MEM) BIBREF10 . MEM is a topic modeling approach applied to a corpus of texts created by hundreds of survey respondents from the U.S. who were asked to freely write about their personal values. To illustrate, Figure FIGREF9 shows the geographical distributions of two of these value themes: Religion and Hard Work. Southeastern states often considered as the nation's \u201cBible Belt\u201d BIBREF11 were found to have generally higher usage of Religion words such as God, bible, and church. Another broad trend was that western-central states (e.g., Wyoming, Nebraska, Iowa) commonly blogged about Hard Work, using words such as hard, work, and job more often than bloggers in other regions." ], "highlighted_evidence": [ "We also measure the usage of words related to people's core values as reported by Boyd et al. boyd2015. The sets of words, or themes, were excavated using the Meaning Extraction Method (MEM) BIBREF10 ." ] } ] } ], "1912.04961": [ { "question": "What is the baseline?", "answers": [ { "answer": "QA PGNet, Multi-decoder QA PGNet with lookup table embedding", "type": "extractive" }, { "answer": "QA PGNet and Multi-decoder QA PGNet", "type": "extractive" } ], "q_uid": "99e78c390932594bd833be0f5c890af5c605d808", "evidence": [ { "raw_evidence": [ "We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below." ], "highlighted_evidence": [ "We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below." ] }, { "raw_evidence": [ "We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below." ], "highlighted_evidence": [ "We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below." ] } ] }, { "question": "Is the data de-identified?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "861187338c5ad445b9acddba8f2c7688785667b1", "evidence": [ { "raw_evidence": [ "Our dataset consists of a total of 6,693 real doctor-patient conversations recorded in a clinical setting using distant microphones of varying quality. The recordings have an average duration of 9min 28s and have a verbatim transcript of 1,500 words on average (written by the experts). Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively. 
The sentences in the transcript are grounded to the audio with the timestamps of its first and last word." ], "highlighted_evidence": [ "Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively." ] }, { "raw_evidence": [ "Our dataset consists of a total of 6,693 real doctor-patient conversations recorded in a clinical setting using distant microphones of varying quality. The recordings have an average duration of 9min 28s and have a verbatim transcript of 1,500 words on average (written by the experts). Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively. The sentences in the transcript are grounded to the audio with the timestamps of its first and last word." ], "highlighted_evidence": [ "Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively." ] } ] }, { "question": "What embeddings are used?", "answers": [ { "answer": " simple lookup table embeddings learned from scratch, using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13", "type": "extractive" }, { "answer": "ELMO BIBREF11, BERT BIBREF12 and ClinicalBERT BIBREF13", "type": "extractive" } ], "q_uid": "f161e6d5aecf8fae3a26374dcb3e4e1b40530c95", "evidence": [ { "raw_evidence": [ "Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors). Refer to Table TABREF5 for the performance comparisons." ], "highlighted_evidence": [ "Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors)." ] }, { "raw_evidence": [ "Lack of availability of a large volume of data is a typical challenge in healthcare. A conversation corpus by itself is a rare commodity in the healthcare data space because of the cost and difficulty in handing (because of data privacy concerns). Moreover, transcribing and labeling the conversations is a costly process as it requires domain-specific medical annotation expertise. To address data shortage and improve the model performance, we investigate different high-performance contextual embeddings (ELMO BIBREF11, BERT BIBREF12 and ClinicalBERT BIBREF13), and pretrain the models on a clinical summarization task. We further investigate the effects of training data size on our models." ], "highlighted_evidence": [ "To address data shortage and improve the model performance, we investigate different high-performance contextual embeddings (ELMO BIBREF11, BERT BIBREF12 and ClinicalBERT BIBREF13), and pretrain the models on a clinical summarization task." 
] } ] } ], "1910.10781": [ { "question": "What datasets did they use for evaluation?", "answers": [ { "answer": "CSAT dataset, 20 newsgroups, Fisher Phase 1 corpus", "type": "extractive" }, { "answer": "CSAT dataset , 20 newsgroups, Fisher Phase 1 corpus", "type": "extractive" } ], "q_uid": "12c50dea84f9a8845795fa8b8c1679328bd66246", "evidence": [ { "raw_evidence": [ "We evaluated our models on 3 different datasets:", "CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).", "20 newsgroups for topic identification task, consisting of written text;", "Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);" ], "highlighted_evidence": [ "We evaluated our models on 3 different datasets:\n\nCSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).\n\n20 newsgroups for topic identification task, consisting of written text;\n\nFisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);" ] }, { "raw_evidence": [ "We evaluated our models on 3 different datasets:", "CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).", "20 newsgroups for topic identification task, consisting of written text;", "Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);", "CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset. As the distribution is skewed towards extremes, we choose to do binary classification with ratings above 4.5 as satisfied and below 4.5 as dissatisfied. Quantization of ratings also helped us to create a balanced dataset. This dataset contains 4331 calls and we split them into 3 sets for our experiments: 2866 calls for training, 362 calls for validation and, finally, 1103 calls for testing.", "20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of documents for training and the remaining 10% for validation. For fair comparison with other publications, we used 53160 words vocabulary set available in the datasets website.", "Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic. We used same training and test splits as BIBREF3 in which 1374 and 1372 documents are used for training and testing respectively. For validation of our model, we used 10% of training dataset and the remaining 90% was used for actual model training. The number of topics in this data set is 40." 
], "highlighted_evidence": [ "We evaluated our models on 3 different datasets:\n\nCSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).\n\n20 newsgroups for topic identification task, consisting of written text;\n\nFisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);", "CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset.", "20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. ", "Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic." ] } ] }, { "question": "On top of BERT does the RNN layer work better or the transformer layer?", "answers": [ { "answer": "Transformer over BERT (ToBERT)", "type": "extractive" }, { "answer": "The transformer layer", "type": "abstractive" } ], "q_uid": "0810b43404686ddfe4ca84783477ae300fdd2ea4", "evidence": [ { "raw_evidence": [ "In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences.", "In this paper, we presented two methods for long documents using BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain final prediction. Position embeddings did not significantly affect our models performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on Fisher dataset and good improvements for CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and quick fine-tuning procedure. 
For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner)." ], "highlighted_evidence": [ "We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT).", "We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. " ] }, { "raw_evidence": [ "In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences.", "Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small." ], "highlighted_evidence": [ "Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT).", "Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small." ] } ] } ], "1603.09631": [ { "question": "How was this data collected?", "answers": [ { "answer": "CrowdFlower", "type": "extractive" }, { "answer": "The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English.", "type": "abstractive" } ], "q_uid": "455d4ef8611f62b1361be4f6387b222858bb5e56", "evidence": [ { "raw_evidence": [ "However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection." ], "highlighted_evidence": [ "Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection." ] }, { "raw_evidence": [ "However, getting access to systems with real users is usually hard. 
Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.", "A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards." ], "highlighted_evidence": [ "Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.\n\nA CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. " ] } ] }, { "question": "What is the average length of dialog?", "answers": [ { "answer": "4.49 turns", "type": "abstractive" }, { "answer": "4.5 turns per dialog (8533 turns / 1900 dialogs)", "type": "abstractive" } ], "q_uid": "bc16ce6e9c61ae13d46970ebe6c4728a47f8f425", "evidence": [ { "raw_evidence": [ "We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities." ], "highlighted_evidence": [ "We collected the dataset with 1900 dialogs and 8533 turns. " ] }, { "raw_evidence": [ "We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities." ], "highlighted_evidence": [ "We collected the dataset with 1900 dialogs and 8533 turns. " ] } ] } ], "1911.06964": [ { "question": "How are models evaluated in this human-machine communication game?", "answers": [ { "answer": "by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews", "type": "extractive" }, { "answer": "efficiency of a communication scheme $(q_{\\alpha },p_{\\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords, accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence", "type": "extractive" } ], "q_uid": "1ff0fccf0dca95a6630380c84b0422bed854269a", "evidence": [ { "raw_evidence": [ "We evaluate our approach by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews BIBREF6 (see Appendix for details). We quantify the efficiency of a communication scheme $(q_{\\alpha },p_{\\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. 
The accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence." ], "highlighted_evidence": [ "We evaluate our approach by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews BIBREF6 (see Appendix for details). We quantify the efficiency of a communication scheme $(q_{\\alpha },p_{\\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. The accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence." ] }, { "raw_evidence": [ "We evaluate our approach by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews BIBREF6 (see Appendix for details). We quantify the efficiency of a communication scheme $(q_{\\alpha },p_{\\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. The accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence." ], "highlighted_evidence": [ "We quantify the efficiency of a communication scheme $(q_{\\alpha },p_{\\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. The accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence." ] } ] }, { "question": "How many participants were trying this communication game?", "answers": [ { "answer": "100 ", "type": "extractive" }, { "answer": "100 crowdworkers ", "type": "extractive" } ], "q_uid": "3d7d865e905295d11f1e85af5fa89b210e3e9fdf", "evidence": [ { "raw_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. Each user was shown alternating autocomplete and writing tasks across 50 sentences (see Appendix for user interface). For the autocomplete task, we gave users a target sentence and asked them to type a set of keywords into the system. The users were shown the top three suggestions from the autocomplete system, and were asked to mark whether each of these three suggestions was semantically equivalent to the target sentence. For the writing task, we gave users a target sentence and asked them to either type the sentence verbatim or a sentence that preserves the meaning of the target sentence." ], "highlighted_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. " ] }, { "raw_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. Each user was shown alternating autocomplete and writing tasks across 50 sentences (see Appendix for user interface). For the autocomplete task, we gave users a target sentence and asked them to type a set of keywords into the system. The users were shown the top three suggestions from the autocomplete system, and were asked to mark whether each of these three suggestions was semantically equivalent to the target sentence. 
For the writing task, we gave users a target sentence and asked them to either type the sentence verbatim or a sentence that preserves the meaning of the target sentence." ], "highlighted_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus." ] } ] }, { "question": "What user variations have been tested?", "answers": [ { "answer": "completion times and accuracies ", "type": "extractive" } ], "q_uid": "2ad4d3d222f5237ed97923640bc8e199409cbe52", "evidence": [ { "raw_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. Each user was shown alternating autocomplete and writing tasks across 50 sentences (see Appendix for user interface). For the autocomplete task, we gave users a target sentence and asked them to type a set of keywords into the system. The users were shown the top three suggestions from the autocomplete system, and were asked to mark whether each of these three suggestions was semantically equivalent to the target sentence. For the writing task, we gave users a target sentence and asked them to either type the sentence verbatim or a sentence that preserves the meaning of the target sentence." ], "highlighted_evidence": [ "We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. " ] } ] }, { "question": "What are the baselines used?", "answers": [ { "answer": "Unif and Stopword", "type": "extractive" }, { "answer": "Unif and Stopword", "type": "extractive" } ], "q_uid": "3fad42be0fb2052bb404b989cc7d58b440cd23a0", "evidence": [ { "raw_evidence": [ "We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword. The Unif encoder randomly keeps tokens to generate keywords with the probability $\\delta $. The Stopword encoder keeps all tokens but drops stop words (e.g. `the', `a', `or') all the time ($\\delta =0$) or half of the time ($\\delta =0.5$). The corresponding decoders for these encoders are optimized using gradient descent to minimize the reconstruction error (i.e. $\\mathrm {loss}(x, \\alpha , \\beta )$)." ], "highlighted_evidence": [ "We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword. " ] }, { "raw_evidence": [ "We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword. The Unif encoder randomly keeps tokens to generate keywords with the probability $\\delta $. The Stopword encoder keeps all tokens but drops stop words (e.g. `the', `a', `or') all the time ($\\delta =0$) or half of the time ($\\delta =0.5$). The corresponding decoders for these encoders are optimized using gradient descent to minimize the reconstruction error (i.e. $\\mathrm {loss}(x, \\alpha , \\beta )$)." ], "highlighted_evidence": [ "We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword." 
] } ] } ], "2001.02284": [ { "question": "Do they use off-the-shelf NLP systems to build their assitant?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "ee417fea65f9b1029455797671da0840c8c1abbe", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [ "Natural Language Understanding (NLU): We implemented an NLU unit utilizing handcrafted rules, Regular Expressions (RegEx) and Elasticsearch (ES) API. The NLU module contains following functionalities:", "Dialogue Manager consists of the Dialogue State Tracker (DST), that maintains a representation of the current dialog state, and of the Policy Learner (PL) that defines the next system action. In our model, the system's next action is defined by the state of the previously obtained information stored in the Information Dictionary. For instance, if the system recognizes that the student works on the final examination, it also understands (defined by the logic in the predefined rules) that there is no need to ask for sub-topic because the final examination always corresponds to a chapter level (due to the design of OMB+ platform). If the system identifies that the user has difficulties in solving a quiz, it has to ask for the corresponding topic and sub-topic if not yet provided by a user (because the quiz always refers to a section level). To determine all of the potential dialogue flows, we implemented Mutually Exclusive Rules (MER), which indicate that two events $e_{1}$ and $e_{2}$ are mutually exclusive or disjoint if they cannot both occur at the same time (thus, the intersection of these events is empty: $P(A \\cap B) = 0$). Additionally, we defined transition and mapping rules. The formal explanation of rules can be found in Section SECREF12 of the Appendix. Following the rules, we generated 56 state transitions, which define next system actions. Being on a new dialogue state, the system compares the extracted (i.e., updated) information in the ID with the valid dialogue states (see Section SECREF12 of the Appendix for the explanation of the validness) and picks the mapped action as the next system's action." ], "highlighted_evidence": [ "We implemented an NLU unit utilizing handcrafted rules, Regular Expressions (RegEx) and Elasticsearch (ES) API.", "To determine all of the potential dialogue flows, we implemented Mutually Exclusive Rules (MER), which indicate that two events $e_{1}$ and $e_{2}$ are mutually exclusive or disjoint if they cannot both occur at the same time (thus, the intersection of these events is empty: $P(A \\cap B) = 0$). Additionally, we defined transition and mapping rules." ] } ] }, { "question": "How does the IPA label data after interacting with users?", "answers": [ { "answer": "It defined a sequence labeling task to extract custom entities from user input and label the next action (out of 13 custom actions defined).", "type": "abstractive" }, { "answer": "Plain dialogues with unique dialogue indexes, Plain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue, Pairs of questions (i.e., user requests) and responses (i.e., bot responses), Triples in the form of (User Request, Next Action, Response)", "type": "extractive" } ], "q_uid": "ca5a82b54cb707c9b947aa8445aac51ea218b23a", "evidence": [ { "raw_evidence": [ "Named Entity Recognition: We defined a sequence labeling task to extract custom entities from user input. 
We assumed seven (7) possible entities (see Table TABREF43) to be recognized by the model: topic, subtopic, examination mode and level, question number, intent, as well as the entity other for remaining words in the utterance. Since the data obtained from the rule-based system already contains information on the entities extracted from each user query (i.e., by means of Elasticsearch), we could use it to train a domain-specific NER unit. However, since the user-input was informal, the same information could be provided in different writing styles. That means that a single entity could have different surface forms (e.g., synonyms, writing styles) (although entities that we extracted from the rule-based system were all converted to a universal standard, e.g., official chapter names). To consider all of the variable entity forms while post-labeling the original dataset, we defined generic entity names (e.g., chapter, question nr.) and mapped variations of entities from the user input (e.g., Chapter = [Elementary Calculus, Chapter $I$, ...]) to them.", "Next Action Prediction: We defined a classification problem to predict the system's next action according to the given user input. We assumed 13 custom actions (see Table TABREF42) that we considered being our labels. In the conversational dataset, each input was automatically labeled by the rule-based system with the corresponding next action and the dialogue-id. Thus, no additional post-labeling was required. We investigated two settings:" ], "highlighted_evidence": [ " We defined a sequence labeling task to extract custom entities from user input. We assumed seven (7) possible entities (see Table TABREF43) to be recognized by the model: topic, subtopic, examination mode and level, question number, intent, as well as the entity other for remaining words in the utterance. ", " We defined a classification problem to predict the system's next action according to the given user input. We assumed 13 custom actions (see Table TABREF42) that we considered being our labels. In the conversational dataset, each input was automatically labeled by the rule-based system with the corresponding next action and the dialogue-id. Thus, no additional post-labeling was required. " ] }, { "raw_evidence": [ "Plain dialogues with unique dialogue indexes;", "Plain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue;", "Pairs of questions (i.e., user requests) and responses (i.e., bot responses) with the unique dialogue- and turn-indexes;", "Triples in the form of (User Request, Next Action, Response). Information on the next system's action could be employed to train a Dialogue Manager unit with (deep-) machine learning algorithms;" ], "highlighted_evidence": [ "Plain dialogues with unique dialogue indexes;\n\nPlain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue;\n\nPairs of questions (i.e., user requests) and responses (i.e., bot responses) with the unique dialogue- and turn-indexes;\n\nTriples in the form of (User Request, Next Action, Response). 
Information on the next system's action could be employed to train a Dialogue Manager unit with (deep-) machine learning algorithms;" ] } ] }, { "question": "What kind of repetitive and time-consuming activities does their assistant handle?", "answers": [ { "answer": " What kind of topic (or sub-topic) a student has a problem with, At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now, the exact question number and exact problem formulation", "type": "extractive" } ], "q_uid": "da55bd769721b878dd17f07f124a37a0a165db02", "evidence": [ { "raw_evidence": [ "In general, student questions can be grouped into three main categories: organizational questions (e.g., course certificate), contextual questions (e.g., content, theorem) and mathematical questions (e.g., exercises, solutions). To assist a student with a mathematical question, a tutor has to know the following regular information: What kind of topic (or sub-topic) a student has a problem with. At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now. And finally, the exact question number and exact problem formulation. This means that a tutor has to request the same information every time a new dialogue opens, which is very time consuming and could be successfully solved by means of an IPA dialogue bot." ], "highlighted_evidence": [ "To assist a student with a mathematical question, a tutor has to know the following regular information: What kind of topic (or sub-topic) a student has a problem with. At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now. And finally, the exact question number and exact problem formulation. This means that a tutor has to request the same information every time a new dialogue opens, which is very time consuming and could be successfully solved by means of an IPA dialogue bot." ] } ] } ], "2002.01664": [ { "question": "How was the audio data gathered?", "answers": [ { "answer": "Through the All India Radio new channel where actors read news.", "type": "abstractive" }, { "answer": " $\\textbf {All India Radio}$ news channel", "type": "extractive" } ], "q_uid": "feb448860918ef5b905bb25d7b855ba389117c1f", "evidence": [ { "raw_evidence": [ "In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. 
In section 4, we describe the experiments, and in section 5, we describe the results.", "In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow." ], "highlighted_evidence": [ "We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. ", "We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. " ] }, { "raw_evidence": [ "In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results." ], "highlighted_evidence": [ "We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel." 
] } ] }, { "question": "What is the GhostVLAD approach?", "answers": [ { "answer": "extension of the NetVLAD, adds Ghost clusters along with the NetVLAD clusters", "type": "extractive" }, { "answer": "An extension of NetVLAD which replaces hard assignment-based clustering with soft assignment-based clustering with the additon o fusing Ghost clusters to deal with noisy content.", "type": "abstractive" } ], "q_uid": "4bc2784be43d599000cb71d31928908250d4cef3", "evidence": [ { "raw_evidence": [ "GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). Which means that we compute the matrix V for both normal cluster K and ghost clusters G, but we will not include the vectors belongs to ghost cluster from V during concatenation of the features. Due to which, during feature aggregation stage the contribution of the noisy and unwanted features to normal VLAD clusters are assigned less weights while Ghost clusters absorb most of the weight. We illustrate this in Figure 1(Right Side), where the ghost clusters are shown in red color. We use Ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. These concatenated features are fed into the projection layer, followed by softmax to predict the language label." ], "highlighted_evidence": [ "GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section.", "GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters.", "The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side)." ] }, { "raw_evidence": [ "GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). Which means that we compute the matrix V for both normal cluster K and ghost clusters G, but we will not include the vectors belongs to ghost cluster from V during concatenation of the features. Due to which, during feature aggregation stage the contribution of the noisy and unwanted features to normal VLAD clusters are assigned less weights while Ghost clusters absorb most of the weight. We illustrate this in Figure 1(Right Side), where the ghost clusters are shown in red color. We use Ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. 
These concatenated features are fed into the projection layer, followed by softmax to predict the language label.", "The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension to VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with neural network in an end to end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side)." ], "highlighted_evidence": [ "GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). ", "The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension to VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with neural network in an end to end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side)." ] } ] } ], "1808.09111": [ { "question": "What datasets do they evaluate on?", "answers": [ { "answer": " Wall Street Journal (WSJ) portion of the Penn Treebank", "type": "extractive" } ], "q_uid": "6424e442b34a576f904d9649d63acf1e4fdefdfc", "evidence": [ { "raw_evidence": [ "For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus." ], "highlighted_evidence": [ "For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank." ] } ] }, { "question": "Do they evaluate only on English datasets?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "5eabfc6cc8aa8a99e6e42514ef9584569cb75dec", "evidence": [ { "raw_evidence": [ "For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. 
The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus." ], "highlighted_evidence": [ "For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank." ] } ] }, { "question": "What is the invertibility condition?", "answers": [ { "answer": "The neural projector must be invertible.", "type": "abstractive" }, { "answer": "we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists", "type": "extractive" } ], "q_uid": "887c6727e9f25ade61b4853a869fe712fe0b703d", "evidence": [ { "raw_evidence": [ "In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation." ], "highlighted_evidence": [ "In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. " ] }, { "raw_evidence": [ "In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation." ], "highlighted_evidence": [ "Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists." ] } ] } ], "1906.08593": [ { "question": "Do they show on which examples how conflict works better than attention?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "6236762b5631d9e395f81e1ebccc4bf3ab9b24ac", "evidence": [ { "raw_evidence": [ "We also show qualitative results where we can observe that our model with attention and conflict combined does better on cases where pairs are non-duplicate and has very small difference. 
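The invertibility condition quoted above (for the neural projector of paper 1808.09111) requires the projector and its inverse to both exist, realized with a volume-preserving invertible network. Below is a minimal NumPy sketch of an additive coupling layer in that spirit; the MLP shape, dimensions, and initialization are assumptions, not the paper's architecture.

```python
import numpy as np

class AdditiveCoupling:
    """Volume-preserving invertible layer: split x into (x1, x2) and add a
    learned shift m(x1) to x2. The Jacobian is triangular with unit diagonal,
    so the transform is exactly invertible and its log-determinant is 0."""
    def __init__(self, dim, hidden, rng):
        half = dim // 2
        self.W1 = rng.normal(scale=0.1, size=(half, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim - half))
        self.b2 = np.zeros(dim - half)
        self.half = half

    def _shift(self, x1):                       # small MLP m(.)
        return np.tanh(x1 @ self.W1 + self.b1) @ self.W2 + self.b2

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        return np.concatenate([x1, x2 + self._shift(x1)], axis=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.half], y[..., self.half:]
        return np.concatenate([y1, y2 - self._shift(y1)], axis=-1)

rng = np.random.default_rng(0)
layer = AdditiveCoupling(dim=100, hidden=64, rng=rng)   # e.g. 100-dim embeddings
e = rng.normal(size=(5, 100))
assert np.allclose(layer.inverse(layer.forward(e)), e)  # inverse recovers input
```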
We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.", "Sequence 1: What are the best ways to learn French ?", "Sequence 2: How do I learn french genders ?", "Attention only: 1", "Attention+Conflict: 0", "Ground Truth: 0", "Sequence 1: How do I prevent breast cancer ?", "Sequence 2: Is breast cancer preventable ?", "We provide two examples with predictions from the models with only attention and combination of attention and conflict. Each example is accompanied by the ground truth in our data." ], "highlighted_evidence": [ "We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.\n\nSequence 1: What are the best ways to learn French ?\n\nSequence 2: How do I learn french genders ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nSequence 1: How do I prevent breast cancer ?\n\nSequence 2: Is breast cancer preventable ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nWe provide two examples with predictions from the models with only attention and combination of attention and conflict. Each example is accompanied by the ground truth in our data." ] }, { "raw_evidence": [ "We also show qualitative results where we can observe that our model with attention and conflict combined does better on cases where pairs are non-duplicate and has very small difference. We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.", "Sequence 1: What are the best ways to learn French ?", "Sequence 2: How do I learn french genders ?", "Attention only: 1", "Attention+Conflict: 0", "Ground Truth: 0", "Sequence 1: How do I prevent breast cancer ?", "Sequence 2: Is breast cancer preventable ?" ], "highlighted_evidence": [ "We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.\n\nSequence 1: What are the best ways to learn French ?\n\nSequence 2: How do I learn french genders ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nSequence 1: How do I prevent breast cancer ?\n\nSequence 2: Is breast cancer preventable ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0" ] } ] }, { "question": "Which neural architecture do they use as a base for their attention conflict mechanisms?", "answers": [ { "answer": "GRU-based encoder, interaction block, and classifier consisting of stacked fully-connected layers.", "type": "abstractive" }, { "answer": "two stacked GRU layers, attention for one model while for the another one it consists of attention and conflict combined, fully-connected layers", "type": "extractive" } ], "q_uid": "31d695ba855d821d3e5cdb7bea638c7dbb7c87c7", "evidence": [ { "raw_evidence": [ "We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. 
The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like." ], "highlighted_evidence": [ "We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input.", "The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. " ] }, { "raw_evidence": [ "We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like." ], "highlighted_evidence": [ "We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like." ] } ] }, { "question": "On which tasks do they test their conflict method?", "answers": [ { "answer": "Task 1: Quora Duplicate Question Pair Detection, Task 2: Ranking questions", "type": "extractive" }, { "answer": "Quora Duplicate Question Pair Detection, Ranking questions in Bing's People Also Ask", "type": "extractive" } ], "q_uid": "b14217978ad9c3c9b6b1ce393b1b5c6e7f49ecab", "evidence": [ { "raw_evidence": [ "Task 1: Quora Duplicate Question Pair Detection", "Task 2: Ranking questions in Bing's People Also Ask" ], "highlighted_evidence": [ "Task 1: Quora Duplicate Question Pair Detection", "Task 2: Ranking questions in Bing's People Also Ask" ] }, { "raw_evidence": [ "Task 1: Quora Duplicate Question Pair Detection", "Task 2: Ranking questions in Bing's People Also Ask" ], "highlighted_evidence": [ "Task 1: Quora Duplicate Question Pair Detection", "Task 2: Ranking questions in Bing's People Also Ask" ] } ] } ], "1809.00540": [ { "question": "What are the sources of the datasets?", "answers": [ { "answer": "rupnik2016news", "type": "extractive" }, { "answer": "rupnik2016news, Deutsche Welle's news website", "type": "extractive" } ], "q_uid": "2c78993524ca62bf1f525b60f2220a374d0e3535", "evidence": [ { "raw_evidence": [ "More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. 
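A rough PyTorch sketch of the encoder/interaction/classifier skeleton described in the evidence above: a shared two-layer GRU encoder, an attention-based interaction, and stacked fully-connected layers. Since eqn. (11) is not reproduced in the excerpt, the "conflict" feature below is a stand-in (element-wise absolute difference) and should not be read as the paper's formulation.

```python
import torch
import torch.nn as nn

class PairModel(nn.Module):
    """Encoder / interaction / classifier skeleton for sequence pairs.
    The 'conflict' feature is an assumed stand-in (|a - b|), since the
    paper's eqn. (11) is not quoted in the excerpt."""
    def __init__(self, vocab_size, emb_dim=100, hid=128, use_conflict=True):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder: two stacked GRU layers, as in the quoted description.
        self.encoder = nn.GRU(emb_dim, hid, num_layers=2, batch_first=True)
        self.use_conflict = use_conflict
        feat = hid * (3 if use_conflict else 2)
        self.classifier = nn.Sequential(        # stacked fully-connected layers
            nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, 2))

    def _encode(self, x):
        h, _ = self.encoder(self.embed(x))      # (B, T, hid)
        return h

    def forward(self, seq1, seq2):
        h1, h2 = self._encode(seq1), self._encode(seq2)
        # Interaction: soft attention of seq1 positions over seq2 positions.
        scores = torch.bmm(h1, h2.transpose(1, 2))               # (B, T1, T2)
        attended = torch.bmm(torch.softmax(scores, dim=-1), h2)  # (B, T1, hid)
        a, b = attended.mean(dim=1), h1.mean(dim=1)
        feats = [b, a]
        if self.use_conflict:
            feats.append(torch.abs(b - a))      # assumed "conflict" signal
        return self.classifier(torch.cat(feats, dim=-1))

model = PairModel(vocab_size=1000)
logits = model(torch.randint(0, 1000, (4, 12)), torch.randint(0, 1000, (4, 15)))
print(logits.shape)  # torch.Size([4, 2])
```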
However, these are not \u201ctruly\u201d online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark)." ], "highlighted_evidence": [ "As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach." ] }, { "raw_evidence": [ "More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not \u201ctruly\u201d online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark).", "Statistics about this dataset are given in Table TABREF30 . As described further, we tune the hyper-parameter INLINEFORM0 on the development set. As for the hyper-parameters related to the timestamp features, we fixed INLINEFORM1 and tuned INLINEFORM2 on the development set, yielding INLINEFORM3 . To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). The dataset consists of 77,268, 118,045 and 134,243 documents for Spanish, English and German, respectively." ], "highlighted_evidence": [ "As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach.", "To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). " ] } ] } ], "2004.03354": [ { "question": "What in-domain text did they use?", "answers": [ { "answer": "PubMed+PMC", "type": "extractive" }, { "answer": "PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset)", "type": "extractive" } ], "q_uid": "16535db1d73a9373ffe9d6eedaa2369cefd91ac4", "evidence": [ { "raw_evidence": [ "We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT." 
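A simplified sketch of the general idea quoted above: extend a general-domain wordpiece embedding matrix with in-domain tokens whose vectors come from a Word2Vec model of matching dimensionality (768 in the excerpt). The least-squares alignment on shared tokens, the toy vocabularies, and the function name are all assumptions; the paper's actual procedure (its Section SECREF3) is not reproduced here.

```python
import numpy as np

def adapt_wordpiece_embeddings(lm_emb, lm_vocab, w2v, new_tokens):
    """Illustrative sketch, not the paper's procedure: map in-domain
    Word2Vec vectors into the LM embedding space via a linear map fitted
    on tokens shared by both vocabularies, then append them as new rows."""
    shared = [t for t in lm_vocab if t in w2v]
    X = np.stack([w2v[t] for t in shared])                 # Word2Vec side
    Y = np.stack([lm_emb[lm_vocab[t]] for t in shared])    # LM side
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)              # X @ W ~= Y

    added = np.stack([w2v[t] for t in new_tokens]) @ W     # map new tokens
    new_vocab = dict(lm_vocab)
    for t in new_tokens:
        new_vocab[t] = len(new_vocab)
    return np.vstack([lm_emb, added]), new_vocab

# Toy usage with random vectors standing in for real 768-d embeddings.
rng = np.random.default_rng(0)
lm_vocab = {"the": 0, "cell": 1, "##osis": 2}
lm_emb = rng.normal(size=(3, 768))
w2v = {t: rng.normal(size=768) for t in ["the", "cell", "##osis", "cytokine"]}
emb2, vocab2 = adapt_wordpiece_embeddings(lm_emb, lm_vocab, w2v, ["cytokine"])
print(emb2.shape, vocab2["cytokine"])   # (4, 768) 3
```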
], "highlighted_evidence": [ "We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT." ] }, { "raw_evidence": [ "In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset). We improve over general-domain BERT on eight out of eight biomedical NER tasks, using a fraction of the compute cost associated with BioBERT. In Section SECREF5, we show how to quickly adapt an existing Question Answering model to text about the Covid-19 pandemic, without any target-domain Language Model pretraining or finetuning." ], "highlighted_evidence": [ "In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset)." ] } ] } ], "1611.04798": [ { "question": "Which languages do they test on for the under-resourced scenario?", "answers": [ { "answer": "English, German", "type": "extractive" }, { "answer": "small portion of the large parallel corpus for English-German is used as a simulation", "type": "extractive" } ], "q_uid": "41ac23e32bf208b69414f4b687c4f324c6132464", "evidence": [ { "raw_evidence": [ "First, we consider the translation for an under-resourced pair of languages. Here a small portion of the large parallel corpus for English-German is used as a simulation for the scenario where we do not have much parallel data: Translating texts in English to German. We perform language-specific coding in both source and target sides. By accommodating the German monolingual data as an additional input (German INLINEFORM0 German), which we called the mix-source approach, we could enrich the training data in a simple, natural way. Given this under-resourced situation, it could help our NMT obtain a better representation of the source side, hence, able to learn the translation relationship better. Including monolingual data in this way might also improve the translation of some rare word types such as named entities. Furthermore, as the ultimate goal of our work, we would like to investigate the advantages of multilinguality in NMT. We incorporate a similar portion of French-German parallel corpus into the English-German one. As discussed in Section SECREF5 , it is expected to help reducing the ambiguity in translation between one language pair since it utilizes the semantic context provided by the other source language. We name this mix-multi-source." ], "highlighted_evidence": [ " Here a small portion of the large parallel corpus for English-German is used as a simulation for the scenario where we do not have much parallel data: Translating texts in English to German. " ] }, { "raw_evidence": [ "First, we consider the translation for an under-resourced pair of languages. Here a small portion of the large parallel corpus for English-German is used as a simulation for the scenario where we do not have much parallel data: Translating texts in English to German. We perform language-specific coding in both source and target sides. By accommodating the German monolingual data as an additional input (German INLINEFORM0 German), which we called the mix-source approach, we could enrich the training data in a simple, natural way. 
Given this under-resourced situation, it could help our NMT obtain a better representation of the source side, hence, able to learn the translation relationship better. Including monolingual data in this way might also improve the translation of some rare word types such as named entities. Furthermore, as the ultimate goal of our work, we would like to investigate the advantages of multilinguality in NMT. We incorporate a similar portion of French-German parallel corpus into the English-German one. As discussed in Section SECREF5 , it is expected to help reducing the ambiguity in translation between one language pair since it utilizes the semantic context provided by the other source language. We name this mix-multi-source." ], "highlighted_evidence": [ "First, we consider the translation for an under-resourced pair of languages. Here a small portion of the large parallel corpus for English-German is used as a simulation for the scenario where we do not have much parallel data: Translating texts in English to German." ] } ] } ], "1912.13337": [ { "question": "Are the automatically constructed datasets subject to quality control?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "e97186c51d4af490dba6faaf833d269c8256426c", "evidence": [ { "raw_evidence": [ "Dataset Probes and Construction", "Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.", "For convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\\mathcal {V} = \\mathcal {C} \\cup \\mathcal {W} \\cup \\mathcal {S} \\cup \\mathcal {D}$, where $\\mathcal {C}$ is a set of atomic concepts, $\\mathcal {W}$ a set of words, $\\mathcal {S}$ a set of sentences, and $\\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\\uparrow $, from a set of relations $\\mathcal {R}$ (see Table TABREF4).", "When defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\\mathcal {T} \\subseteq \\mathcal {R} \\times \\mathcal {C} \\times \\mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. 
For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\\mathcal {C}$ to a definition in $\\mathcal {D}$.", "To construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, which generates gold question-answer pairs $(\\textbf {q},\\textbf {a})$ from a set of triples $\\tau \\subseteq \\mathcal {T}$ and question templates $\\mathcal {Q}$, and $\\textsc {distr}(\\tau ^{\\prime })$, which generates distractor answers choices $\\lbrace a^{\\prime }_{1},...a^{\\prime }_{N-1} \\rbrace $ based on another set of triples $\\tau ^{\\prime }$ (where usually $\\tau \\subset \\tau ^{\\prime }$). For brevity, we will use $\\textsc {gen}(\\tau )$ to denote $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, leaving question templates $\\mathcal {Q}$ implicit.", "Dataset Probes and Construction ::: WordNetQA", "WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.", "Dataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\\textsc {gen}(\\tau )$@!END@.", "We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.", "For example, suppose we wish to create a question $\\textbf {q}$ about the definition of a target concept $c \\in \\mathcal {C}$. We first select a question template from $\\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \\in \\mathcal {W}$ in context using the example sentence $s \\in \\mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \\in \\mathcal {D}$, which serves as the gold answer $\\textbf {a}$. 
The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \\rightarrow ^{\\uparrow /\\downarrow } c^{\\prime } \\in \\mathcal {T}_{i}$ (e.g., $\\texttt {dog} \\rightarrow ^{\\uparrow /\\downarrow } \\texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\\prime }$ (which is also provided with a gloss so as to contain all available context).", "In the latter case, the rules $(\\texttt {isa}^{r},c,c^{\\prime }) \\in \\mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \\in \\lbrace \\uparrow ,\\downarrow \\rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:", "This allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.", "Dataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\\textsc {distr}(\\tau ^{\\prime })$@!END@.", "An example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\\uparrow )$, we build distractors by drawing from $\\textsc {hops}(c,\\downarrow )$ (and vice versa), as well as from the $\\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\\tilde{c} \\ne c$ of the parent node $c^{\\prime }$ of $c$. For $\\ell > 1$, the $\\ell $-deep sister family also includes all descendants of each $\\tilde{c}$ up to $\\ell -1$ levels deep, denoted $\\textsc {hops}_{\\ell -1}(\\tilde{c},\\downarrow )$. Formally:", "For definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets.", "Dataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters", "Based on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).", "Details of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. 
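A short sketch of the HOPS(c, r) set described above: starting from a concept's direct ISA links in one direction, collect all derived links by walking the graph. The dictionary-based graph representation and the toy taxonomy are illustrative assumptions.

```python
from collections import deque

def hops(graph, c, max_depth=None):
    """All concepts reachable from c via direct or derived ISA links, i.e.
    the HOPS(c, r) set, together with the number of hops. 'graph' maps a
    concept to its direct neighbors in one direction r (e.g. hypernyms)."""
    seen, out = {c}, []
    queue = deque([(c, 0)])
    while queue:
        node, depth = queue.popleft()
        if max_depth is not None and depth >= max_depth:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                out.append((nxt, depth + 1))
                queue.append((nxt, depth + 1))
    return out

# Toy ISA-up graph: terrier -> dog -> animal -> organism
isa_up = {"terrier": ["dog"], "dog": ["animal"], "animal": ["organism"]}
print(hops(isa_up, "terrier"))
# [('dog', 1), ('animal', 2), ('organism', 3)]
```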
Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.", "Dataset Probes and Construction ::: DictionaryQA", "The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.", "Dataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.", "To generate gold questions and answers, we use the same generation templates for definitions exemplified in Figure TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.", "We note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5)." ], "highlighted_evidence": [ "Dataset Probes and Construction\nOur probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.\n\nFor convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. 
The nodes of this graph are $\\mathcal {V} = \\mathcal {C} \\cup \\mathcal {W} \\cup \\mathcal {S} \\cup \\mathcal {D}$, where $\\mathcal {C}$ is a set of atomic concepts, $\\mathcal {W}$ a set of words, $\\mathcal {S}$ a set of sentences, and $\\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\\uparrow $, from a set of relations $\\mathcal {R}$ (see Table TABREF4).\n\nWhen defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\\mathcal {T} \\subseteq \\mathcal {R} \\times \\mathcal {C} \\times \\mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\\mathcal {C}$ to a definition in $\\mathcal {D}$.\n\nTo construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, which generates gold question-answer pairs $(\\textbf {q},\\textbf {a})$ from a set of triples $\\tau \\subseteq \\mathcal {T}$ and question templates $\\mathcal {Q}$, and $\\textsc {distr}(\\tau ^{\\prime })$, which generates distractor answers choices $\\lbrace a^{\\prime }_{1},...a^{\\prime }_{N-1} \\rbrace $ based on another set of triples $\\tau ^{\\prime }$ (where usually $\\tau \\subset \\tau ^{\\prime }$). For brevity, we will use $\\textsc {gen}(\\tau )$ to denote $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, leaving question templates $\\mathcal {Q}$ implicit.\n\nDataset Probes and Construction ::: WordNetQA\nWordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.\n\nDataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\\textsc {gen}(\\tau )$@!END@.\nWe build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.\n\nFor example, suppose we wish to create a question $\\textbf {q}$ about the definition of a target concept $c \\in \\mathcal {C}$. 
We first select a question template from $\\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \\in \\mathcal {W}$ in context using the example sentence $s \\in \\mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \\in \\mathcal {D}$, which serves as the gold answer $\\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \\rightarrow ^{\\uparrow /\\downarrow } c^{\\prime } \\in \\mathcal {T}_{i}$ (e.g., $\\texttt {dog} \\rightarrow ^{\\uparrow /\\downarrow } \\texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\\prime }$ (which is also provided with a gloss so as to contain all available context).\n\nIn the latter case, the rules $(\\texttt {isa}^{r},c,c^{\\prime }) \\in \\mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \\in \\lbrace \\uparrow ,\\downarrow \\rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:\n\nThis allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.\n\nDataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\\textsc {distr}(\\tau ^{\\prime })$@!END@.\nAn example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\\uparrow )$, we build distractors by drawing from $\\textsc {hops}(c,\\downarrow )$ (and vice versa), as well as from the $\\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\\tilde{c} \\ne c$ of the parent node $c^{\\prime }$ of $c$. For $\\ell > 1$, the $\\ell $-deep sister family also includes all descendants of each $\\tilde{c}$ up to $\\ell -1$ levels deep, denoted $\\textsc {hops}_{\\ell -1}(\\tilde{c},\\downarrow )$. Formally:\n\nFor definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets.\n\nDataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters\nBased on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).\n\nDetails of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. 
Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.\n\nDataset Probes and Construction ::: DictionaryQA\nThe DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.\n\nDataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.\nTo generate gold questions and answers, we use the same generation templates for definitions exemplified in Figure TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.\n\nWe note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5)." ] }, { "raw_evidence": [ "We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity. Conversely, with benchmark QA datasets, it is much harder to perform the type of careful manipulations and cluster-based analyses we report here. While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work." ], "highlighted_evidence": [ "We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. 
While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity.", "While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work." ] } ] }, { "question": "Do they focus on Reading Comprehension or multiple choice question answering?", "answers": [ { "answer": "MULTIPLE CHOICE QUESTION ANSWERING", "type": "abstractive" }, { "answer": "multiple-choice", "type": "extractive" } ], "q_uid": "5bb3c27606c59d73fd6944ba7382096de4fa58d8", "evidence": [ { "raw_evidence": [ "Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate)." ], "highlighted_evidence": [ "In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate)." ] }, { "raw_evidence": [ "Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed." ], "highlighted_evidence": [ "Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $." 
] } ] }, { "question": "After how many hops does accuracy decrease?", "answers": [ { "answer": "1-hop links to 2-hops", "type": "extractive" }, { "answer": "one additional hop", "type": "abstractive" } ], "q_uid": "8de9f14c7c4f37ab103bc8a639d6d80ade1bc27b", "evidence": [ { "raw_evidence": [ "Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning." ], "highlighted_evidence": [ "For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. " ] }, { "raw_evidence": [ "Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning." ], "highlighted_evidence": [ "For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. " ] } ] }, { "question": "How do they control for annotation artificats?", "answers": [ { "answer": " we use several of the MCQA baseline models first introduced in BIBREF0", "type": "extractive" }, { "answer": "Choice-Only model, which is a variant of the well-known hypothesis-only baseline, Choice-to-choice model, tries to single out a given answer choice relative to other choices, Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score", "type": "extractive" } ], "q_uid": "85590bb26fed01a802241bc537d85ba5ef1c6dc2", "evidence": [ { "raw_evidence": [ "Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.", "When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). 
To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models." ], "highlighted_evidence": [ "Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.\nWhen creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models." ] }, { "raw_evidence": [ "When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.", "Following the notation from BIBREF0, for any given sequence $s$ of tokens in $\\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \\textbf {BiLSTM}(\\textsc {embed}(s)) \\in \\mathbb {R}^{|s| \\times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:", "With these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:", "for $\\textbf {W}^{T} \\in \\mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \\propto e^{\\alpha _{i}^{(j)}}$.", "A slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\\alpha _{i,i^{\\prime }}^{(j)} = \\textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\\prime }}}) \\in \\mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.", "A Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score $\\alpha ^{(j)}_{q,i} = \\textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \\in \\mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\\alpha ^{(j)}_{q,i} = \\textsc {Sim}(\\textsc {embed}(q^{(j)}),\\textsc {embed}(c^{(j)}_{i}))$. 
In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models." ], "highlighted_evidence": [ "When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.\n\nFollowing the notation from BIBREF0, for any given sequence $s$ of tokens in $\\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \\textbf {BiLSTM}(\\textsc {embed}(s)) \\in \\mathbb {R}^{|s| \\times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:\n\nWith these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:\n\nfor $\\textbf {W}^{T} \\in \\mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \\propto e^{\\alpha _{i}^{(j)}}$.\n\nA slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\\alpha _{i,i^{\\prime }}^{(j)} = \\textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\\prime }}}) \\in \\mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.\n\nA Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score $\\alpha ^{(j)}_{q,i} = \\textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \\in \\mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\\alpha ^{(j)}_{q,i} = \\textsc {Sim}(\\textsc {embed}(q^{(j)}),\\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models." 
] } ] }, { "question": "Is WordNet useful for taxonomic reasoning for this task?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "75ff6e425ce304a35f18c0230c0d13d3913a31a9", "evidence": [ { "raw_evidence": [ "While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning." ], "highlighted_evidence": [ "For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources." ] } ] } ], "1809.01541": [ { "question": "How do they perform multilingual training?", "answers": [ { "answer": "Multilingual training is performed by randomly alternating between languages for every new minibatch", "type": "extractive" }, { "answer": "by randomly alternating between languages for every new minibatch", "type": "extractive" } ], "q_uid": "5cb610d3d5d7d447b4cd5736d6a7d8262140af58", "evidence": [ { "raw_evidence": [ "The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.", "Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages." ], "highlighted_evidence": [ "The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.\n\nSince a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch." ] }, { "raw_evidence": [ "Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages." ], "highlighted_evidence": [ "Multilingual training is performed by randomly alternating between languages for every new minibatch. 
" ] } ] }, { "question": "Does the model have attention?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "b9d168da5321a7d7b812c52bb102a05210fe45bd", "evidence": [ { "raw_evidence": [ "The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ], "highlighted_evidence": [ "Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ] }, { "raw_evidence": [ "The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL\u2013SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages." ], "highlighted_evidence": [ "The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL\u2013SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages." ] } ] }, { "question": "What architecture does the decoder have?", "answers": [ { "answer": "LSTM", "type": "extractive" }, { "answer": "LSTM", "type": "extractive" } ], "q_uid": "0c234db3b380c27c4c70579a5d6948e1e3b24ff1", "evidence": [ { "raw_evidence": [ "The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. 
The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ], "highlighted_evidence": [ "Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ] }, { "raw_evidence": [ "The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ], "highlighted_evidence": [ "Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ] } ] }, { "question": "What architecture does the encoder have?", "answers": [ { "answer": "LSTM", "type": "extractive" }, { "answer": "LSTM", "type": "extractive" } ], "q_uid": "fa527becb8e2551f4fd2ae840dbd4a68971349e0", "evidence": [ { "raw_evidence": [ "The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." ], "highlighted_evidence": [ "The resulting sequence of vectors is encoded using an LSTM encoder. " ] }, { "raw_evidence": [ "The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism." 
], "highlighted_evidence": [ "The resulting sequence of vectors is encoded using an LSTM encoder." ] } ] } ], "1809.09194": [ { "question": "Do they use attention?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "45e9533586199bde19313cd43b3d0ecadcaf7a33", "evidence": [ { "raw_evidence": [ "Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2" ], "highlighted_evidence": [ "The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2" ] }, { "raw_evidence": [ "Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2" ], "highlighted_evidence": [ "Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2" ] } ] }, { "question": "What is the architecture of the span detector?", "answers": [ { "answer": "adopt a multi-turn answer module for the span detector BIBREF1", "type": "extractive" } ], "q_uid": "a5e49cdb91d9fd0ca625cc1ede236d3d4672403c", "evidence": [ { "raw_evidence": [ "Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of the INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1", "The final prediction is the average of each time step: INLINEFORM0 . We randomly apply dropout on the step level in each time step during training, as done in BIBREF1 ." ], "highlighted_evidence": [ "Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of the INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1\n\nThe final prediction is the average of each time step: INLINEFORM0 . We randomly apply dropout on the step level in each time step during training, as done in BIBREF1 ." 
] } ] } ], "1604.05372": [ { "question": "What evaluation metric do they use?", "answers": [ { "answer": "Accuracy", "type": "abstractive" }, { "answer": "ratio of correct `translations'", "type": "extractive" } ], "q_uid": "aefa333b2cf0a4000cd40566149816f5b36135e7", "evidence": [ { "raw_evidence": [ "To test all the possible combinations of parameters, we divided the bilingual dictionary into 4500 noun pairs used as a training set and 500 noun pairs used as a test set. We then learned transformation matrices on the training set using both training algorithms (CBOW and SkipGram) and several values of regularization $\\lambda $ from 0 to 5, with a step of 0.5. The resulting matrices were applied to the Ukrainian vectors from the test set and the corresponding Russian `translations' were calculated. The ratio of correct `translations' (matches) was used as an evaluation measure. It came out that regularization only worsened the results for both algorithms, so in the Table 1 we report the results without regularization." ], "highlighted_evidence": [ "The ratio of correct `translations' (matches) was used as an evaluation measure." ] }, { "raw_evidence": [ "To test all the possible combinations of parameters, we divided the bilingual dictionary into 4500 noun pairs used as a training set and 500 noun pairs used as a test set. We then learned transformation matrices on the training set using both training algorithms (CBOW and SkipGram) and several values of regularization $\\lambda $ from 0 to 5, with a step of 0.5. The resulting matrices were applied to the Ukrainian vectors from the test set and the corresponding Russian `translations' were calculated. The ratio of correct `translations' (matches) was used as an evaluation measure. It came out that regularization only worsened the results for both algorithms, so in the Table 1 we report the results without regularization." ], "highlighted_evidence": [ "The ratio of correct `translations' (matches) was used as an evaluation measure. " ] } ] } ], "2002.08795": [ { "question": "What are the baselines?", "answers": [ { "answer": "a score of 40", "type": "extractive" }, { "answer": "KG-A2C, A2C, A2C-chained, A2C-Explore", "type": "extractive" } ], "q_uid": "eb2d5edcdfe18bd708348283f92a32294bb193a5", "evidence": [ { "raw_evidence": [ "BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space\u2014specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a \u201cgrue\u201d (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn\u2019t immediately produce any positive reward. That is, there is a long term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. 
They pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck." ], "highlighted_evidence": [ "BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space\u2014specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a \u201cgrue\u201d (resulting in negative reward) if the player has not first lit a lamp." ] }, { "raw_evidence": [ "We compare our two exploration strategies to the following baselines and ablations:", "KG-A2C This is the exact same method presented in BIBREF6 with no modifications.", "A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks.", "A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.", "A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation." ], "highlighted_evidence": [ "We compare our two exploration strategies to the following baselines and ablations:\n\nKG-A2C This is the exact same method presented in BIBREF6 with no modifications.\n\nA2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks.\n\nA2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.\n\nA2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation." ] } ] }, { "question": "What are the two new strategies?", "answers": [ { "answer": "a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state, to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-space", "type": "extractive" }, { "answer": "KG-A2C-chained, KG-A2C-Explore", "type": "extractive" } ], "q_uid": "88ab7811662157680144ed3fdd00939e36552672", "evidence": [ { "raw_evidence": [ "More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. 
The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1." ], "highlighted_evidence": [ "We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. " ] }, { "raw_evidence": [ "KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the first to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are defined as those states when explored from will likely result in higher reward trajectories. Since the text games we are dealing with are mostly deterministic, with the exception of Zork in later stages, we only focus on using Phase 1 of the Go-Explore algorithm to find an optimal policy. BIBREF10 look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld BIBREF1. Instead of training a policy network in parallel to generate actions used for exploration, they use a small set of \u201cadmissible actions\u201d\u2014actions guaranteed to change the world state at any given step during Phase 1\u2014to explore and find high reward trajectories. This space of actions is relatively small (of the order of $10^2$ per step) and so finding high reward trajectories in larger action-spaces such as in Zork would be infeasible", "Go-Explore maintains an archive of cells\u2014defined as a set of states that map to a single representation\u2014to keep track of promising states. BIBREF9 simply encodes each cell by keeping track of the agent's position and BIBREF10 use the textual observations encoded by recurrent neural network as a cell representation. We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). The KG-A2C will run for a number of steps, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training the KG-A2C at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states. At the same time, KG-A2C will benefit from the heuristics of selecting preferred cells and be trained on promising states more often." 
], "highlighted_evidence": [ "KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards.", "We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells)." ] } ] } ], "1802.06024": [ { "question": "What baseline is used in the experiments?", "answers": [ { "answer": "versions of LiLi", "type": "extractive" }, { "answer": "various versions of LiLi as baselines, Single, Sep, F-th, BG, w/o PTS", "type": "extractive" } ], "q_uid": "8f16dc7d7be0d284069841e456ebb2c69575b32b", "evidence": [ { "raw_evidence": [ "Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.", "Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.", "Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.", "F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .", "BG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.", "w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement." ], "highlighted_evidence": [ "Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement." ] }, { "raw_evidence": [ "Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.", "Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.", "Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.", "F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .", "BG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.", "w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement." ], "highlighted_evidence": [ "Baselines. 
As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement." ] } ] }, { "question": "In what way does LiLi imitate how humans acquire knowledge and perform inference during an interactive conversation?", "answers": [ { "answer": "newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning", "type": "extractive" }, { "answer": "Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. ", "type": "extractive" } ], "q_uid": "a7d020120a45c39bee624f65443e09b895c10533", "evidence": [ { "raw_evidence": [ "We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:" ], "highlighted_evidence": [ "We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning." ] }, { "raw_evidence": [ "We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. 
If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:" ], "highlighted_evidence": [ "We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. " ] } ] }, { "question": "What metrics are used to establish that this makes chatbots more knowledgeable and better at learning and conversation? ", "answers": [ { "answer": "Coverage, Avg. MCC and avg. +ve F1 score", "type": "extractive" }, { "answer": "strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score", "type": "extractive" } ], "q_uid": "585626d18a20d304ae7df228c2128da542d248ff", "evidence": [ { "raw_evidence": [ "Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score." ], "highlighted_evidence": [ "Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score." ] }, { "raw_evidence": [ "Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score." 
], "highlighted_evidence": [ "To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score." ] } ] }, { "question": "What are the components of the general knowledge learning engine?", "answers": [ { "answer": "Answer with content missing: (list)\nLiLi should have the following capabilities:\n1. to formulate an inference strategy for a given query that embeds processing and interactive actions.\n2. to learn interaction behaviors (deciding what to ask and when to ask the user).\n3. to leverage the acquired knowledge in the current and future inference process.\n4. to perform 1, 2 and 3 in a lifelong manner for continuous knowledge learning.", "type": "abstractive" }, { "answer": "Knowledge Store (KS) , Knowledge Graph ( INLINEFORM0 ), Relation-Entity Matrix ( INLINEFORM2 ), Task Experience Store ( INLINEFORM15 ), Incomplete Feature DB ( INLINEFORM29 )", "type": "extractive" } ], "q_uid": "bfc2dc913e7b78f3bd45e5449d71383d0aa4a890", "evidence": [ { "raw_evidence": [ "We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:" ], "highlighted_evidence": [ "LiLi should have the following capabilities:" ] }, { "raw_evidence": [ "As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see \u00a74) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). 
(iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .", "The RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.", "LiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections." ], "highlighted_evidence": [ "As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see \u00a74) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . 
(iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .\n\nThe RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.\n\nLiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections." ] } ] } ], "1809.00530": [ { "question": "What is the architecture of the model?", "answers": [ { "answer": "one-layer CNN structure from previous works BIBREF22 , BIBREF4", "type": "extractive" }, { "answer": " one-layer CNN", "type": "extractive" } ], "q_uid": "b46c0015a122ee5fb95c2a45691cb97f80de1bb6", "evidence": [ { "raw_evidence": [ "For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier. We aim to learn feature representations that are domain-invariant and at the same time discriminative on both domains, thus we simultaneously consider three factors in our objective: (1) minimize the classification error on the labeled source examples; (2) minimize the domain discrepancy; and (3) leverage unlabeled data via semi-supervised learning.", "We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. 
Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0" ], "highlighted_evidence": [ "For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier.", "We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks." ] }, { "raw_evidence": [ "We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0" ], "highlighted_evidence": [ "We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks." ] } ] }, { "question": "What are the baseline methods?", "answers": [ { "answer": "(1) Naive, (2) mSDA BIBREF7, (3) NaiveNN, (4) AuxNN BIBREF4, (5) ADAN BIBREF16, (6) MMD", "type": "extractive" }, { "answer": "non-domain-adaptive baseline with bag-of-words representations and SVM classifier, mSDA, non-domain-adaptive CNN trained on source domain, neural model that exploits auxiliary tasks, adversarial training to reduce representation difference between domains, variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized", "type": "extractive" } ], "q_uid": "5b7a4994bfdbf8882f391adf1cd2218dbc2255a0", "evidence": [ { "raw_evidence": [ "We compare with the following baselines:", "(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.", "(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.", "(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.", "(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. 
The sentence encoder used in this model is the same as ours.", "(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.", "(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel." ], "highlighted_evidence": [ "We compare with the following baselines:\n\n(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.\n\n(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.\n\n(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.\n\n(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.\n\n(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.\n\n(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel." ] }, { "raw_evidence": [ "We compare with the following baselines:", "(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.", "(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. 
We set the number of stacked layers to 3 and the corruption probability to 0.5.", "(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.", "(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.", "(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.", "(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel." ], "highlighted_evidence": [ "We compare with the following baselines:\n\n(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.\n\n(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.\n\n(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.\n\n(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.\n\n(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.\n\n(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. " ] } ] } ], "1710.07960": [ { "question": "Did they use a crowdsourcing platform for annotations?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "0ba3ea93eef5660a79ea3c26c6a270eac32dfa4c", "evidence": [ { "raw_evidence": [ "When it comes to feature selection, the most unexpected phenomenon observed in this study is low usefulness of the interpretation-based features. According to Table TABREF22 , adding interpretations of neighbouring words ( INLINEFORM0 ) yields very little improvement, while this type of information regarding replacements ( INLINEFORM1 ) even lowers the accuracy. 
This result could be attributed to two factors. Firstly, more developed replacement generation results in more occurrences, but also causes their tags to differ from the target word by gender or number. They may even not be available at all (in the case of multi-word replacements). The second reason is a difference in language: while in English a word interpretation is represented as one of several dozen part of speech identifiers, in Slavonic languages, such as Polish, we need to specify the values of several tags for each word, leading to thousands of possible interpretations. Obviously, the features based on these tags are very sparse. Finally, the morphosyntactic annotation was performed automatically, which may lead to errors, especially in the case of noisy web text." ], "highlighted_evidence": [ "Finally, the morphosyntactic annotation was performed automatically, which may lead to errors, especially in the case of noisy web text." ] } ] }, { "question": "How do they deal with unknown distribution senses?", "answers": [ { "answer": "The N\u00e4ive-Bayes classifier is corrected so it is not biased to most frequent classes", "type": "abstractive" }, { "answer": "Bayesian classifier has been modified, removing the bias towards frequent labels in the training data", "type": "extractive" } ], "q_uid": "5e324846a99a5573cd2e843d1657e87f4eb22fa6", "evidence": [ { "raw_evidence": [ "Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a na\u00efve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.", "The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a na\u00efve Bayesian classifier adjusted to a situation of unknown a priori probabilities." ], "highlighted_evidence": [ "Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a na\u00efve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data.", "Finally, the aggregated feature values corresponding to target word senses are used to build a na\u00efve Bayesian classifier adjusted to a situation of unknown a priori probabilities." 
] }, { "raw_evidence": [ "Which could be rewritten as: INLINEFORM0", "The expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.", "In this paper the limitations and improvements of unsupervised word sense disambiguation have been investigated. The main problem \u2013 insufficient number and quality of replacements has been tackled by adding new rich sources of replacements. The quality of the models has indeed improved, especially thanks to replacements based on sense ordering in plWordNet. To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data. Finally, the experiments with very large corpus have shown the sufficient amount of training data for this task, which is only 6 million documents." ], "highlighted_evidence": [ "Which could be rewritten as: INLINEFORM0\n\nThe expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.", "To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data." ] } ] } ], "1912.03804": [ { "question": "Do they report results only on English data?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "2ccc26e11df4eb26fcccdd1f446dc749aff5d572", "evidence": [ { "raw_evidence": [ "With their ability to operate freely on social media now curtailed, ISIS recruiters and propagandists increased their attentiveness to another longstanding tool\u2013English language online magazines targeting western audiences. Al Hayat, the media wing of ISIS, published multiple online magazines in different languages including English. The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017. The content of these magazines provides a valuable but underutilized resource for understanding ISIS strategies and how they appeal to recruits, specifically English-speaking audiences. They also provide a way to compare ISIS' approach with other radical groups. Ingram compared Dabiq contents with Inspire (Al Qaeda publication) and suggested that Al Qaeda heavily emphasized identity-choice, while ISIS' messages were more balanced between identity-choice and rational-choice BIBREF7. In another research paper, Wignell et al. BIBREF8 compared Dabiq and Rumiah by examining their style and what both magazine messages emphasized. Despite the volume of research on these magazines, only a few researchers used lexical analysis and mostly relied on experts' opinions. 
BIBREF9 is one exception to this approach where they used word frequency on 11 issues of Dabiq publications and compared attributes such as anger, anxiety, power, motive, etc.", "Finding useful collections of texts where ISIS targets women is a challenging task. Most of the available material are not reflecting ISIS' official point of view or they do not talk specifically about women. However, ISIS' online magazines are valuable resources for understanding how the organization attempts to appeal to western audiences, particularly women. Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with \u201c to our sisters \u201d incorporated into the title. Seven out of fifteen Dabiq issues and all thirteen issues of Rumiyah contain articles targeting women, clearly suggesting an increase in attention to women over time." ], "highlighted_evidence": [ "The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017.", "Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with \u201c to our sisters \u201d incorporated into the title." ] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What conclusions do the authors draw from their finding that the emotional appeal of ISIS and Catholic materials are similar?", "answers": [ { "answer": "both corpuses used words that aim to inspire readers while avoiding fear, actual words that lead to these effects are very different in the two contexts, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda", "type": "extractive" }, { "answer": "By comparing scores for each word calculated using Depechemood dictionary and normalize emotional score for each article, they found Catholic and ISIS materials show similar scores", "type": "abstractive" } ], "q_uid": "f318a2851d7061f05a5b32b94251f943480fbd15", "evidence": [ { "raw_evidence": [ "Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community." ], "highlighted_evidence": [ "Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. 
However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community." ] }, { "raw_evidence": [ "We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora.", "Both Catholic and ISIS related materials score the highest in \u201cinspired\u201d category. Furthermore, in both cases, being afraid has the lowest score. However, this is not the case for random news material such as the Reuters corpus, which are not that inspiring and, according to this method, seems to cause more fear in their audience. We investigate these results further by looking at the most inspiring words detected in these two corpora. Table TABREF24 presents 10 words that are among the most inspiring in both corpora. The comparison of the two lists indicate that the method picks very different words in each corpus to reach to the same conclusion. Also, we looked at separate articles in each of the issues of ISIS material addressing women. Figure FIGREF23 shows emotion scores in each of the 20 issues of ISIS propaganda. As demonstrated, in every separate article, this method gives the highest score to evoking inspirations in the reader. Also, in most of these issues the method scored \u201cbeing afraid\u201d as the lowest score in each issue.", "Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community." ], "highlighted_evidence": [ "We rely on Depechemood dictionaries to analyze emotions in both corpora.", "We aggregated the score for each word and normalized each article by emotions.", "Both Catholic and ISIS related materials score the highest in \u201cinspired\u201d category. Furthermore, in both cases, being afraid has the lowest score. ", "The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. " ] } ] }, { "question": "How id Depechemood trained?", "answers": [ { "answer": "By multiplying crowd-annotated document-emotion matrix with emotion-word matrix. 
", "type": "abstractive" }, { "answer": "researchers asked subjects to report their emotions after reading each article, multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words, Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories", "type": "extractive" } ], "q_uid": "6bbbb9933aab97ce2342200447c6322527427061", "evidence": [ { "raw_evidence": [ "Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section." ], "highlighted_evidence": [ "Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. " ] }, { "raw_evidence": [ "Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section." ], "highlighted_evidence": [ "Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. 
However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section." ] } ] }, { "question": "How are similarities and differences between the texts from violent and non-violent religious groups analyzed?", "answers": [ { "answer": "By using topic modeling and unsupervised emotion detection on ISIS materials and articles from Catholic women forum", "type": "abstractive" }, { "answer": "A comparison of common words, We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource", "type": "extractive" } ], "q_uid": "2007bfb8f66e88a235c3a8d8c0a3b3dd88734706", "evidence": [ { "raw_evidence": [ "What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women?", "As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method." ], "highlighted_evidence": [ "What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women?", "As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method." ] }, { "raw_evidence": [ "Results ::: Emotion Analysis", "We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora.", "Results ::: Content Analysis", "After pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. 
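A rough sketch of the DepecheMood construction and the per-article scoring summarized above, using toy random matrices; the normalization details and matrix dimensions are assumptions for illustration, not the released lexicon.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_emotions = 200, 500, 8   # AFRAID, AMUSED, ..., SAD

# Crowd-annotated document-emotion matrix (rows sum to 1 per document).
doc_emotion = rng.random((n_docs, n_emotions))
doc_emotion /= doc_emotion.sum(axis=1, keepdims=True)

# Word-document matrix (e.g., normalized term frequencies).
word_doc = rng.random((n_words, n_docs))

# Emotion-word matrix: product of the two, row-normalized so that each
# word ends up with scores between 0 and 1 over the 8 emotion categories.
emotion_word = word_doc @ doc_emotion
emotion_word /= emotion_word.sum(axis=1, keepdims=True)

def score_article(token_ids, emotion_word):
    # Aggregate the lexicon scores of the tokens found in the dictionary
    # and normalize per article, as in the emotion analysis above.
    scores = emotion_word[token_ids].sum(axis=0)
    return scores / scores.sum()

print(score_article([3, 17, 42], emotion_word))
```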
Figure FIGREF17 shows the most common words in each of these corpora.", "A comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. Although we used LDA, our results by using NMF outperform LDA topics, due to the nature of these corpora. Also, fewer numbers of ISIS documents might contribute to the comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood." ], "highlighted_evidence": [ "Results ::: Emotion Analysis\nWe rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library.", "Results ::: Content Analysis\nAfter pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. Figure FIGREF17 shows the most common words in each of these corpora.\n\nA comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger." ] } ] }, { "question": "How are prominent topics idenified in Dabiq and Rumiyah?", "answers": [ { "answer": "LDA, non-negative matrix factorization (NMF)", "type": "extractive" }, { "answer": "Using NMF based topic modeling and their coherence prominent topics are identified", "type": "abstractive" } ], "q_uid": "d859cc37799a508bbbe4270ed291ca6394afce2c", "evidence": [ { "raw_evidence": [ "Topic modeling methods are the more powerful technique for understanding the contents of a corpus. These methods try to discover abstract topics in a corpus and reveal hidden semantic structures in a collection of documents. The most popular topic modeling methods use probabilistic approaches such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). LDA is a generalization of pLSA where documents are considered as a mixture of topics and the distribution of topics is governed by a Dirichlet prior ($\\alpha $). Figure FIGREF12 shows plate notation of general LDA structure where $\\beta $ represents prior of word distribution per topic and $\\theta $ refers to topics distribution for documents BIBREF19. Since LDA is among the most widely utilized algorithms for topic modeling, we applied it to our data. 
However, the coherence of the topics produced by LDA is poorer than expected.", "To address this lack of coherence, we applied non-negative matrix factorization (NMF). This method decomposes the term-document matrix into two non-negative matrices as shown in Figure FIGREF13. The resulting non-negative matrices are such that their product closely approximate the original data. Mathematically speaking, given an input matrix of document-terms $V$, NMF finds two matrices by solving the following equation BIBREF20:" ], "highlighted_evidence": [ "However, the coherence of the topics produced by LDA is poorer than expected.\n\nTo address this lack of coherence, we applied non-negative matrix factorization (NMF)." ] }, { "raw_evidence": [ "A comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. Although we used LDA, our results by using NMF outperform LDA topics, due to the nature of these corpora. Also, fewer numbers of ISIS documents might contribute to the comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood." ], "highlighted_evidence": [ "Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. ", "A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. " ] } ] } ], "1912.08960": [ { "question": "Are the images from a specific domain?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "50e80cfa84200717921840fddcf3b051a9216ad8", "evidence": [ { "raw_evidence": [ "Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions." ], "highlighted_evidence": [ "Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE." ] }, { "raw_evidence": [ "In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example). We use ShapeWorld to generate training and evaluation data for two major reasons. 
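A minimal scikit-learn sketch of the NMF topic-modeling step described above: factorizing a document-term matrix into two non-negative matrices whose product approximates the original data, then reading off the top words per topic. The toy corpus, vectorizer settings, and number of components are placeholders (the passage selects 10 topics on the magazine and forum corpora).

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "mother raising children patience faith",
    "husband wife marriage home duties",
    "hijrah journey land faith reward",
]  # placeholder corpus; the papers use ISIS magazine and Catholic forum articles

vectorizer = TfidfVectorizer(stop_words="english")
V = vectorizer.fit_transform(docs)   # document-term matrix (transpose of term-document)

# V is approximated by the product of two non-negative matrices W and H.
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(V)           # document-topic weights
H = model.components_                # topic-term weights

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])
```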
ShapeWorld supports customized data generation according to user specification, which enables a variety of model inspections in terms of language construction, visual complexity and reasoning ability. Another benefit is that each training and test instance generated in ShapeWorld is returned as a triplet of $<$image, caption, world model$>$. The world model stores information about the underlying microworld used to generate an image and a descriptive caption, internally represented as a list of entities with their attributes, such as shape, color, position. During data generation, ShapeWorld randomly samples a world model from a set of available entities and attributes. The generated world model is then used to realize a corresponding instance consisting of image and caption. The world model gives the actual semantic information contained in an image, which allows evaluation of caption truthfulness." ], "highlighted_evidence": [ "In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example)." ] } ] }, { "question": "Which existing models are evaluated?", "answers": [ { "answer": "Show&Tell and LRCN1u", "type": "extractive" }, { "answer": "Show&Tell model, LRCN1u", "type": "extractive" } ], "q_uid": "63a1cbe66fd58ff0ead895a8bac1198c38c008aa", "evidence": [ { "raw_evidence": [ "We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the \u201cpredecessor word embedding\u201d to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step." ], "highlighted_evidence": [ "We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the \u201cpredecessor word embedding\u201d to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step." ] }, { "raw_evidence": [ "We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the \u201cpredecessor word embedding\u201d to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step." 
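The difference in decoder conditioning between Show&Tell and LRCN1u, as described in the evidence above, can be sketched in PyTorch-style code; the dimensions, projection layer, and loop structure are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

emb_dim, hid_dim, vocab = 256, 512, 1000
word_emb = nn.Embedding(vocab, emb_dim)
img_proj = nn.Linear(2048, emb_dim)          # CNN feature -> embedding space

# Show&Tell: the image embedding is fed once, as the "predecessor word"
# before the first real token.
st_lstm = nn.LSTMCell(emb_dim, hid_dim)

# LRCN1u: the image feature is concatenated with the previous word
# embedding at every time step.
lrcn_lstm = nn.LSTMCell(emb_dim + emb_dim, hid_dim)

def show_and_tell_step(img_feat, tokens):
    h, c = torch.zeros(1, hid_dim), torch.zeros(1, hid_dim)
    h, c = st_lstm(img_proj(img_feat), (h, c))          # image as first input
    for t in tokens:
        h, c = st_lstm(word_emb(t), (h, c))             # then words only
    return h

def lrcn1u_step(img_feat, tokens):
    h, c = torch.zeros(1, hid_dim), torch.zeros(1, hid_dim)
    v = img_proj(img_feat)
    for t in tokens:
        h, c = lrcn_lstm(torch.cat([v, word_emb(t)], dim=-1), (h, c))
    return h

img = torch.randn(1, 2048)
toks = [torch.tensor([5]), torch.tensor([42])]
print(show_and_tell_step(img, toks).shape, lrcn1u_step(img, toks).shape)
```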
], "highlighted_evidence": [ "We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1." ] } ] }, { "question": "How is diversity measured?", "answers": [ { "answer": "diversity score as the ratio of observed number versus optimal number", "type": "extractive" }, { "answer": " we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number", "type": "extractive" } ], "q_uid": "509af1f11bd6f3db59284258e18fdfebe86cae47", "evidence": [ { "raw_evidence": [ "As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number:" ], "highlighted_evidence": [ "Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number" ] }, { "raw_evidence": [ "As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number:" ], "highlighted_evidence": [ "As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number" ] } ] } ], "2002.11910": [ { "question": "What state-of-the-art deep neural network is used?", "answers": [ { "answer": "LSTM model", "type": "extractive" }, { "answer": "BIBREF15, BIBREF19, BIBREF20 ", "type": "extractive" } ], "q_uid": "23e16c1173b7def2c5cb56053b57047c9971e3bb", "evidence": [ { "raw_evidence": [ "Inspired by BIBREF12, we integrate in this paper a boundary assembling step into the state-of-the-art LSTM model for Chinese word segmentation, and feed the output into a CRF model for NER, resulting in a 2% absolute improvement on the overall F1 score over current state-of-the-art methods." 
], "highlighted_evidence": [ "Inspired by BIBREF12, we integrate in this paper a boundary assembling step into the state-of-the-art LSTM model for Chinese word segmentation, and feed the output into a CRF model for NER, resulting in a 2% absolute improvement on the overall F1 score over current state-of-the-art methods." ] }, { "raw_evidence": [ "Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score." ], "highlighted_evidence": [ "Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score." ] } ] }, { "question": "What boundary assembling method is used?", "answers": [ { "answer": "This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition.", "type": "extractive" }, { "answer": "backward greedy search over each sentence's label sequence to identify word boundaries", "type": "extractive" } ], "q_uid": "d78f7f84a76a07b777d4092cb58161528ca3803c", "evidence": [ { "raw_evidence": [ "In each sentence, Chinese characters are labeled as either Begin, Inside, End, or Singleton (BIES labeling). The likelihood of individual Chinese characters being labeled as each type is calculated by the LSTM module described in the previous section. BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria." ], "highlighted_evidence": [ "BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria." ] }, { "raw_evidence": [ "In each sentence, Chinese characters are labeled as either Begin, Inside, End, or Singleton (BIES labeling). The likelihood of individual Chinese characters being labeled as each type is calculated by the LSTM module described in the previous section. BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". 
This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria." ], "highlighted_evidence": [ "This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries." ] } ] } ], "1909.09587": [ { "question": "What model is used as a baseline? ", "answers": [ { "answer": "pre-trained multi-BERT", "type": "extractive" }, { "answer": "QANet , BIBREF14, fine-tuned a BERT model", "type": "extractive" } ], "q_uid": "009ce6f2bea67e7df911b3f93443b23467c9f4a1", "evidence": [ { "raw_evidence": [ "Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting." ], "highlighted_evidence": [ "We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting." ] }, { "raw_evidence": [ "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "Reading Comprehension (RC) has become a central task in natural language processing, with great practical value in various industries. In recent years, many large-scale RC datasets in English BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 have nourished the development of numerous powerful and diverse RC models BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. The state-of-the-art model BIBREF12 on SQuAD, one of the most widely used RC benchmarks, even surpasses human-level performance. Nonetheless, RC on languages other than English has been limited due to the absence of sufficient training data. 
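A minimal sketch of the segmentation and boundary-assembling step described in the evidence above: a backward scan over the BIES label sequence to recover word boundaries, followed by merging adjacent noun segments into a longer entity candidate. The toy input and the noun test are illustrative assumptions.

```python
def segment_backward(chars, bies_labels):
    # Backward greedy search: walk from the end of the sentence, closing a
    # word at every "E" (End) or "S" (Singleton) label.
    words, end = [], len(chars)
    for i in range(len(chars) - 1, -1, -1):
        if bies_labels[i] in ("E", "S"):
            end = i + 1
        if bies_labels[i] in ("B", "S") or i == 0:
            words.append(chars[i:end])
    return list(reversed(words))

def assemble_noun_boundaries(words, is_noun):
    # If two consecutive segmented words are both identified as nouns,
    # assemble their boundaries into a longer candidate for recognition.
    merged, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and is_noun(words[i]) and is_noun(words[i + 1]):
            merged.append(words[i] + words[i + 1])
            i += 2
        else:
            merged.append(words[i])
            i += 1
    return merged

chars = "ABCDE"
labels = ["B", "E", "S", "B", "E"]
words = segment_backward(chars, labels)
print(words)                                                       # ['AB', 'C', 'DE']
print(assemble_noun_boundaries(words, lambda w: w in ("AB", "C"))) # ['ABC', 'DE']
```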
Although some efforts have been made to create RC datasets for Chinese BIBREF13, BIBREF14 and Korean BIBREF15, it is not feasible to collect RC datasets for every language since annotation efforts to collect a new RC dataset are often far from trivial. Therefore, the setup of transfer learning, especially zero-shot learning, is of extraordinary importance." ], "highlighted_evidence": [ "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese.", "BIBREF14", " In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. " ] } ] }, { "question": "what does the model learn in zero-shot setting?", "answers": [ { "answer": "we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged", "type": "extractive" } ], "q_uid": "55569d0a4586d20c01268a80a7e31a17a18198e2", "evidence": [ { "raw_evidence": [ "We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.", "The pre-trained multi-BERT is the official released one. This multi-lingual version of BERT were pre-trained on corpus in 104 languages. Data in different languages were simply mixed in batches while pre-training, without additional effort to align between languages. When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged." ], "highlighted_evidence": [ "We have training and testing sets in three different languages: English, Chinese and Korean.", "When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged." ] } ] } ], "1802.07862": [ { "question": "Do they inspect their model to see if their model learned to associate image parts with words related to entities?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "7cd22ca9e107d2b13a7cc94252aaa9007976b338", "evidence": [ { "raw_evidence": [ "For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in prediction of a given token. In the example of \u201cdisney word essential = coffee\" with visual tags selfie, phone, person, the modality attention successfully attenuates distracting visual signals and focuses on textual modalities, consequently making correct predictions. 
The named entities in the examples of \u201cBeautiful night atop The Space Needle\" and \u201cSplash Mountain\" are challenging to predict because they are composed of common nouns (space, needle, splash, mountain), and thus they often need additional contexts to correctly predict. In the training data, visual contexts make stronger indicators for these named entities (space needle, splash mountain), and the modality attention module successfully attends more to stronger signals." ], "highlighted_evidence": [ "For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in prediction of a given token." ] }, { "raw_evidence": [ "Error Analysis: Table TABREF17 shows example cases where incorporation of visual contexts affects prediction of named entities. For example, the token `curry' in the caption \u201cThe curry's \" is polysemous and may refer to either a type of food or a famous basketball player `Stephen Curry', and the surrounding textual contexts do not provide enough information to disambiguate it. On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from \u201cNBA Championship Parade Story\"), and thus the model successfully predicts the token as a named entity. Similarly, while the text-only model erroneously predicts `Apple' in the caption \u201cGrandma w dat lit Apple Crisp\" as an organization (Apple Inc.), the visual contexts (describing objects related to food) help disambiguate the token, making the model predict it correctly as a non-named entity (a fruit). Trending entities (musicians or DJs such as `CID', `Duke Dumont', `Marshmello', etc.) are also recognized correctly with strengthened contexts from visual information (describing concert scenes) despite lack of surrounding textual contexts. A few cases where visual contexts harmed the performance mostly include visual tags that are unrelated to a token or its surrounding textual contexts." ], "highlighted_evidence": [ "On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from \u201cNBA Championship Parade Story\"), and thus the model successfully predicts the token as a named entity." ] } ] }, { "question": "Does their NER model learn NER from both text and images?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "adbf33c6144b2f5c40d0c6a328a92687a476f371", "evidence": [ { "raw_evidence": [ "(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.", "(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors.", "(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to entity LSTM." 
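A rough sketch of the modality attention module described above, which merges word, character, and visual embeddings with softmax weights before the entity LSTM/CRF; the projection sizes and the scoring function are assumptions rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    # Projects each modality into a shared space, scores it with a small
    # network, and returns the softmax-weighted sum of the projections.
    def __init__(self, dims, shared_dim=200):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, shared_dim) for d in dims)
        self.score = nn.Linear(shared_dim, 1)

    def forward(self, modality_vectors):
        projected = [torch.tanh(p(x)) for p, x in zip(self.proj, modality_vectors)]
        stacked = torch.stack(projected, dim=1)              # (batch, n_mod, shared)
        weights = torch.softmax(self.score(stacked), dim=1)  # attenuate noisy modalities
        return (weights * stacked).sum(dim=1)                # merged token representation

word_vec = torch.randn(4, 300)    # word embedding
char_vec = torch.randn(4, 100)    # char Bi-LSTM output
img_vec = torch.randn(4, 2048)    # InceptionNet visual feature
attn = ModalityAttention([300, 100, 2048])
merged = attn([word_vec, char_vec, img_vec])
print(merged.shape)               # fed into the entity LSTM/CRF
```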
], "highlighted_evidence": [ "(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.\n\n(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors.\n\n(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to entity LSTM." ] }, { "raw_evidence": [ "Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input. To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks. (2) We propose a general modality attention module that selectively chooses modalities to extract primary context from, maximizing information gain and suppressing irrelevant contexts from each modality (we treat words, characters, and images as separate modalities). (3) We show that the proposed approaches outperform the state-of-the-art NER models (both with and without using additional visual contexts) on our new MNER dataset SnapCaptions, a large collection of informal and extremely short social media posts paired with unique images." ], "highlighted_evidence": [ "Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input." ] } ] }, { "question": "Which types of named entities do they recognize?", "answers": [ { "answer": "PER, LOC, ORG, MISC", "type": "extractive" }, { "answer": "PER, LOC, ORG, MISC", "type": "extractive" } ], "q_uid": "f7a89b9cd2792f23f2cb43d50a01b8218a6fbb24", "evidence": [ { "raw_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are \u201cNew York Story\u201d or \u201cThanksgiving Story\u201d, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities." ], "highlighted_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC)." 
] }, { "raw_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are \u201cNew York Story\u201d or \u201cThanksgiving Story\u201d, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities." ], "highlighted_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). " ] } ] }, { "question": "Can named entities in SnapCaptions be discontigious?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "a0543b4afda15ea47c1e623c7f00d4aaca045be0", "evidence": [ { "raw_evidence": [ "Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . We report the performance of the following state-of-the-art NER models as baselines, as well as several configurations of our proposed approach to examine contributions of each component (W: word, C: char, V: visual)." ], "highlighted_evidence": [ "Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . " ] } ] }, { "question": "How large is their MNER SnapCaptions dataset?", "answers": [ { "answer": "10K user-generated image (snap) and textual caption pairs", "type": "extractive" }, { "answer": "10000", "type": "abstractive" } ], "q_uid": "1591068b747c94f45b948e12edafe74b5e721047", "evidence": [ { "raw_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are \u201cNew York Story\u201d or \u201cThanksgiving Story\u201d, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). 
We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities." ], "highlighted_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC)." ] }, { "raw_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are \u201cNew York Story\u201d or \u201cThanksgiving Story\u201d, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities." ], "highlighted_evidence": [ "The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). " ] } ] } ], "2004.01853": [ { "question": "What is masked document generation?", "answers": [ { "answer": "A task for seq2seq model pra-training that recovers a masked document to its original form.", "type": "abstractive" }, { "answer": "recovers a masked document to its original form", "type": "extractive" } ], "q_uid": "193ee49ae0f8827a6e67388a10da59e137e7769f", "evidence": [ { "raw_evidence": [ "Based on the above observations, we propose Step (as shorthand for Sequence-to-Sequence TransformEr Pre-training), which can be pre-trained on large scale unlabeled documents. Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). SR learns to recover a document with randomly shuffled sentences. NSG generates the next segment of a document based on its preceding segment. MDG recovers a masked document to its original form. After pre-trianing Step using the three tasks on unlabeled documents, we fine-tune it on supervised summarization datasets." 
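The three Step pre-training tasks described above (SR, NSG, MDG) amount to constructing source/target pairs from an unlabeled document; a sketch follows, with the masking rate and the segment split point chosen arbitrarily for illustration.

```python
import random

def sentence_reordering(sentences, rng):
    # SR: recover the original document from randomly shuffled sentences.
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    return " ".join(shuffled), " ".join(sentences)

def next_sentence_generation(sentences):
    # NSG: generate the next segment of a document from its preceding segment.
    half = len(sentences) // 2
    return " ".join(sentences[:half]), " ".join(sentences[half:])

def masked_document_generation(sentences, rng, mask_rate=0.15, mask="[MASK]"):
    # MDG: recover a masked document to its original form.
    tokens = " ".join(sentences).split()
    masked = [mask if rng.random() < mask_rate else t for t in tokens]
    return " ".join(masked), " ".join(tokens)

rng = random.Random(0)
doc = ["The committee met on Monday .",
       "It approved the budget .",
       "Members then adjourned ."]
for task in (lambda: sentence_reordering(doc, rng),
             lambda: next_sentence_generation(doc),
             lambda: masked_document_generation(doc, rng)):
    src, tgt = task()
    print(src, "=>", tgt)
```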
], "highlighted_evidence": [ "Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). " ] }, { "raw_evidence": [ "Based on the above observations, we propose Step (as shorthand for Sequence-to-Sequence TransformEr Pre-training), which can be pre-trained on large scale unlabeled documents. Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). SR learns to recover a document with randomly shuffled sentences. NSG generates the next segment of a document based on its preceding segment. MDG recovers a masked document to its original form. After pre-trianing Step using the three tasks on unlabeled documents, we fine-tune it on supervised summarization datasets." ], "highlighted_evidence": [ "MDG recovers a masked document to its original form. " ] } ] }, { "question": "Which of the three pretraining tasks is the most helpful?", "answers": [ { "answer": "SR", "type": "extractive" }, { "answer": "SR", "type": "extractive" } ], "q_uid": "ed2eb4e54b641b7670ab5a7060c7b16c628699ab", "evidence": [ { "raw_evidence": [ "Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG). We also tried to randomly use all the three tasks during training with 1/3 probability each (indicated as ALL). Interesting, we observed that, in general, All outperforms all three tasks when employing unlabeled documents of training splits of CNNDM or NYT, which might be due to limited number of unlabeled documents of the training splits. After adding more data (i.e., GIAG-CM) to pre-training, SR consistently achieves highest ROUGE-2 on both CNNDM and NYT. We conclude that SR is the most effective task for pre-training since sentence reordering task requires comprehensively understanding a document in a wide coverage, going beyond individual words and sentences, which is highly close to the essense of abstractive document summarization." ], "highlighted_evidence": [ "Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG)." ] }, { "raw_evidence": [ "Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG). We also tried to randomly use all the three tasks during training with 1/3 probability each (indicated as ALL). Interesting, we observed that, in general, All outperforms all three tasks when employing unlabeled documents of training splits of CNNDM or NYT, which might be due to limited number of unlabeled documents of the training splits. After adding more data (i.e., GIAG-CM) to pre-training, SR consistently achieves highest ROUGE-2 on both CNNDM and NYT. We conclude that SR is the most effective task for pre-training since sentence reordering task requires comprehensively understanding a document in a wide coverage, going beyond individual words and sentences, which is highly close to the essense of abstractive document summarization." ], "highlighted_evidence": [ "Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG)." 
] } ] } ], "1710.03348": [ { "question": "What useful information does attention capture?", "answers": [ { "answer": "it captures other information rather than only the translational equivalent in the case of verbs", "type": "extractive" }, { "answer": "Alignment points of the POS tags.", "type": "abstractive" } ], "q_uid": "beac555c4aea76c88f19db7cc901fa638765c250", "evidence": [ { "raw_evidence": [ "Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.", "To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN." ], "highlighted_evidence": [ "Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.", "One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN." ] }, { "raw_evidence": [ "One can notice that less than half of the attention is paid to alignment points for most of the POS tags. To examine how the rest of attention in each case has been distributed over the source sentence we measure the attention distribution over dependency roles in the source side. We first parse the source side of RWTH data using the ParZu parser BIBREF16 . Then we compute how the attention probability mass given to the words other than the alignment points, is distributed over dependency roles. Table TABREF33 gives the most attended roles for each POS tag. Here, we focus on POS tags discussed earlier. One can see that the most attended roles when translating to nouns include adjectives and determiners and in the case of translating to verbs, it includes auxiliary verbs, adverbs (including negation), subjects, and objects." ], "highlighted_evidence": [ "One can notice that less than half of the attention is paid to alignment points for most of the POS tags. " ] } ] }, { "question": "What datasets are used?", "answers": [ { "answer": "WMT15 German-to-English, RWTH German-English dataset", "type": "extractive" }, { "answer": "RWTH German-English dataset", "type": "extractive" } ], "q_uid": "91e326fde8b0a538bc34d419541b5990d8aae14b", "evidence": [ { "raw_evidence": [ "We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics. 
Table TABREF17 shows the BLEU scores BIBREF12 for both systems on different test sets.", "In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0" ], "highlighted_evidence": [ "We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics.", "For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments." ] }, { "raw_evidence": [ "In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0" ], "highlighted_evidence": [ "For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments." ] } ] } ], "1612.03226": [ { "question": "How do they calculate variance from the model outputs?", "answers": [ { "answer": "reducing the variance of an estimator, EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3", "type": "extractive" }, { "answer": " Fisher Information Ratio", "type": "extractive" } ], "q_uid": "f94b53db307685d572aefad52cd55f53d23769c2", "evidence": [ { "raw_evidence": [ "Intuitively, an instance can be considered informative if it results in large changes in model parameters. A natural measure of the change is gradient length, INLINEFORM0 . Motivated by this intuition, Expected Gradient Length (EGL) BIBREF3 picks the instances expected to have the largest gradient length. Since labels are unknown on INLINEFORM1 , EGL computes the expectation of the gradient norm over all possible labelings. BIBREF3 interprets EGL as \u201cexpected model change\u201d. In the following section, we formalize the intuition for EGL and show that it follows naturally from reducing the variance of an estimator.", "Eq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set. This is called Fisher Information Ratio criteria in BIBREF6 , which itself is hard to optimize. An easier surrogate is to maximize INLINEFORM2 . Substituting Eq. ( EQREF5 ) into INLINEFORM3 , we have INLINEFORM4", "A practical issue is that we do not know INLINEFORM0 in advance. We could instead substitute an estimate INLINEFORM1 from a pre-trained model, where it is reasonable to assume the INLINEFORM2 to be close to the true INLINEFORM3 . 
The batch selection then works by taking the samples that have largest gradient norms, DISPLAYFORM0", "For RNNs, the gradients for each potential label can be obtained by back-propagation. Another practical issue is that EGL marginalizes over all possible labelings, but in speech recognition, the number of labelings scales exponentially in the number of timesteps. Therefore, we only marginalize over the INLINEFORM0 most probable labelings. They are obtained by beam search decoding, as in BIBREF7 . The EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3 ." ], "highlighted_evidence": [ "Since labels are unknown on INLINEFORM1 , EGL computes the expectation of the gradient norm over all possible labelings. BIBREF3 interprets EGL as \u201cexpected model change\u201d. In the following section, we formalize the intuition for EGL and show that it follows naturally from reducing the variance of an estimator.", "Eq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set.", "A practical issue is that we do not know INLINEFORM0 in advance. We could instead substitute an estimate INLINEFORM1 from a pre-trained model, where it is reasonable to assume the INLINEFORM2 to be close to the true INLINEFORM3 . The batch selection then works by taking the samples that have largest gradient norms, DISPLAYFORM0\n\nFor RNNs, the gradients for each potential label can be obtained by back-propagation. Another practical issue is that EGL marginalizes over all possible labelings, but in speech recognition, the number of labelings scales exponentially in the number of timesteps. Therefore, we only marginalize over the INLINEFORM0 most probable labelings. They are obtained by beam search decoding, as in BIBREF7 . The EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3 ." ] }, { "raw_evidence": [ "Statistical signal processing theory BIBREF5 states the following asymptotic distribution of INLINEFORM0 , DISPLAYFORM0", "where INLINEFORM0 is the Fisher Information Matrix with respect to INLINEFORM1 . Using first order approximation at INLINEFORM2 , we have asymptotically, DISPLAYFORM0", "Eq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set. This is called Fisher Information Ratio criteria in BIBREF6 , which itself is hard to optimize. An easier surrogate is to maximize INLINEFORM2 . Substituting Eq. ( EQREF5 ) into INLINEFORM3 , we have INLINEFORM4", "which is equivalent to INLINEFORM0" ], "highlighted_evidence": [ "Statistical signal processing theory BIBREF5 states the following asymptotic distribution of INLINEFORM0 , DISPLAYFORM0\n\nwhere INLINEFORM0 is the Fisher Information Matrix with respect to INLINEFORM1 . Using first order approximation at INLINEFORM2 , we have asymptotically, DISPLAYFORM0\n\nEq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set. This is called Fisher Information Ratio criteria in BIBREF6 , which itself is hard to optimize. An easier surrogate is to maximize INLINEFORM2 . Substituting Eq. 
( EQREF5 ) into INLINEFORM3 , we have INLINEFORM4\n\nwhich is equivalent to INLINEFORM0" ] } ] }, { "question": "How much data samples do they start with before obtaining the initial model labels?", "answers": [ { "answer": "1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset", "type": "extractive" }, { "answer": "INLINEFORM2 is queried for the \u201cmost informative\u201d instance(s) INLINEFORM3", "type": "extractive" } ], "q_uid": "aa7d327ef98f9f9847b447d4def04889b4508d7a", "evidence": [ { "raw_evidence": [ "A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER)." ], "highlighted_evidence": [ "Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset." ] }, { "raw_evidence": [ "Active learning seeks to augment the training set with a new set of utterances and labels INLINEFORM0 in order to achieve good generalization on a held-out test dataset. In many applications, there is an unlabeled pool INLINEFORM1 which is costly to label in its entirety. INLINEFORM2 is queried for the \u201cmost informative\u201d instance(s) INLINEFORM3 , for which the label(s) INLINEFORM4 are then obtained. We discuss several such query strategies below." ], "highlighted_evidence": [ "Active learning seeks to augment the training set with a new set of utterances and labels INLINEFORM0 in order to achieve good generalization on a held-out test dataset. In many applications, there is an unlabeled pool INLINEFORM1 which is costly to label in its entirety. INLINEFORM2 is queried for the \u201cmost informative\u201d instance(s) INLINEFORM3 , for which the label(s) INLINEFORM4 are then obtained. We discuss several such query strategies below." ] } ] }, { "question": "Which model do they use for end-to-end speech recognition?", "answers": [ { "answer": "RNN", "type": "extractive" }, { "answer": " Recurrent Neural Network (RNN)", "type": "extractive" } ], "q_uid": "b8d7d055ddb94f5826a9aad7479b4a92a9c8a2f0", "evidence": [ { "raw_evidence": [ "We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities." ], "highlighted_evidence": [ "In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities." ] }, { "raw_evidence": [ "Denote INLINEFORM0 as an utterance and INLINEFORM1 the corresponding label (transcription). 
A speech recognition system models the conditional distribution INLINEFORM2 , where INLINEFORM3 are the parameters in the model, and INLINEFORM4 is typically implemented by a Recurrent Neural Network (RNN). A training set is a collection of INLINEFORM5 pairs, denoted as INLINEFORM6 . The parameters of the model are estimated by minimizing the negative log-likelihood on the training set: DISPLAYFORM0", "We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities." ], "highlighted_evidence": [ "A speech recognition system models the conditional distribution INLINEFORM2 , where INLINEFORM3 are the parameters in the model, and INLINEFORM4 is typically implemented by a Recurrent Neural Network (RNN). A training set is a collection of INLINEFORM5 pairs, denoted as INLINEFORM6 . The parameters of the model are estimated by minimizing the negative log-likelihood on the training set: DISPLAYFORM0", "We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities." ] } ] }, { "question": "Which dataset do they use?", "answers": [ { "answer": "190 hours ( INLINEFORM1 100K instances)", "type": "extractive" }, { "answer": "trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data, selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset", "type": "extractive" } ], "q_uid": "551457ed34ca7fc0878c85bc664b135c21059b58", "evidence": [ { "raw_evidence": [ "A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER)." ], "highlighted_evidence": [ "A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER)." ] }, { "raw_evidence": [ "A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. 
We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER)." ], "highlighted_evidence": [ "A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER)." ] } ] } ], "1809.01202": [ { "question": "What types of social media did they consider?", "answers": [ { "answer": "Facebook status update messages", "type": "extractive" }, { "answer": "Facebook status update messages", "type": "extractive" } ], "q_uid": "a4d115220438c0ded06a91ad62337061389a6747", "evidence": [ { "raw_evidence": [ "We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations." ], "highlighted_evidence": [ "We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages." ] }, { "raw_evidence": [ "We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations." ], "highlighted_evidence": [ "We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages." ] } ] } ], "1909.02027": [ { "question": "How was the dataset annotated?", "answers": [ { "answer": "intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task", "type": "abstractive" }, { "answer": "manually ", "type": "extractive" } ], "q_uid": "2c7e94a65f5f532aa31d3e538dcab0468a43b264", "evidence": [ { "raw_evidence": [ "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent." 
], "highlighted_evidence": [ "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. " ] }, { "raw_evidence": [ "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent." ], "highlighted_evidence": [ " We manually grouped data generated by scoping tasks into intents. " ] } ] }, { "question": "Which classifiers are evaluated?", "answers": [ { "answer": "SVM, MLP, FastText, CNN, BERT, Google's DialogFlow, Rasa NLU", "type": "extractive" }, { "answer": "SVM, MLP, FastText, CNN, BERT, DialogFlow, Rasa NLU", "type": "extractive" } ], "q_uid": "149da739b1c19a157880d9d4827f0b692006aa2c", "evidence": [ { "raw_evidence": [ "Benchmark Evaluation ::: Classifier Models", "SVM: A linear support vector machine with bag-of-words sentence representations.", "MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.", "FastText: A shallow neural network that averages embeddings of n-grams BIBREF5.", "CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.", "BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.", "Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn." ], "highlighted_evidence": [ "Benchmark Evaluation ::: Classifier Models\nSVM: A linear support vector machine with bag-of-words sentence representations.\n\nMLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.\n\nFastText: A shallow neural network that averages embeddings of n-grams BIBREF5.\n\nCNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.\n\nBERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.\n\nPlatforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn." ] }, { "raw_evidence": [ "Benchmark Evaluation ::: Classifier Models", "SVM: A linear support vector machine with bag-of-words sentence representations.", "MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.", "FastText: A shallow neural network that averages embeddings of n-grams BIBREF5.", "CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.", "BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.", "Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn." 
], "highlighted_evidence": [ " Classifier Models\nSVM: A linear support vector machine with bag-of-words sentence representations.\n\nMLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.\n\nFastText: A shallow neural network that averages embeddings of n-grams BIBREF5.\n\nCNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.\n\nBERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.\n\nPlatforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn." ] } ] }, { "question": "What is the size of this dataset?", "answers": [ { "answer": "23,700 ", "type": "extractive" }, { "answer": " 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains and 1,200 out-of-scope queries.", "type": "abstractive" } ], "q_uid": "27de1d499348e17fec324d0ef00361a490659988", "evidence": [ { "raw_evidence": [ "This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents." ], "highlighted_evidence": [ "To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. " ] }, { "raw_evidence": [ "We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data." ], "highlighted_evidence": [ "We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries." ] } ] }, { "question": "Where does the data come from?", "answers": [ { "answer": "crowsourcing platform", "type": "abstractive" }, { "answer": "For ins scope data collection:crowd workers which provide questions and commands related to topic domains and additional data the rephrase and scenario crowdsourcing tasks proposed by BIBREF2 is used. \nFor out of scope data collection: from workers mistakes-queries written for one of the 150 intents that did not actually match any of the intents and using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere.", "type": "abstractive" } ], "q_uid": "cfcdd73e712caf552ba44d0aa264d8dace65a589", "evidence": [ { "raw_evidence": [ "We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data." ], "highlighted_evidence": [ "We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. 
" ] }, { "raw_evidence": [ "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.", "Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts." ], "highlighted_evidence": [ "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. ", "Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere." ] } ] } ], "1911.02855": [ { "question": "What are method improvements of F1 for paraphrase identification?", "answers": [ { "answer": "Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP", "type": "extractive" }, { "answer": "+0.58", "type": "extractive" } ], "q_uid": "23b2901264bda91045258b5d4120879ae292e950", "evidence": [ { "raw_evidence": [ "Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP." ], "highlighted_evidence": [ "Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP." ] }, { "raw_evidence": [ "Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset.", "Experiments ::: Paraphrase Identification ::: Results", "Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP." 
], "highlighted_evidence": [ "Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset.", "Experiments ::: Paraphrase Identification ::: Results\nTable shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP." ] } ] }, { "question": "What are method's improvements of F1 for NER task for English and Chinese datasets?", "answers": [ { "answer": "English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively, Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively", "type": "extractive" }, { "answer": "For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively., huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively", "type": "extractive" } ], "q_uid": "b5bc34e1e381dbf972d0b594fe8c66ff75305d71", "evidence": [ { "raw_evidence": [ "For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37.", "Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets." ], "highlighted_evidence": [ "For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37.", "Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets." ] }, { "raw_evidence": [ "Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets." ], "highlighted_evidence": [ "For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively." 
] } ] }, { "question": "What are method's improvements of F1 w.r.t. baseline BERT tagger for Chinese POS datasets?", "answers": [ { "answer": "+1.86 in terms of F1 score on CTB5, +1.80 on CTB6, +2.19 on UD1.4", "type": "extractive" }, { "answer": " +1.86", "type": "extractive" } ], "q_uid": "72f7ef55e150e16dcf97fe443aff9971a32414ef", "evidence": [ { "raw_evidence": [ "Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets." ], "highlighted_evidence": [ "As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4." ] }, { "raw_evidence": [ "Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets." ], "highlighted_evidence": [ "Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets." ] } ] }, { "question": "How are weights dynamically adjusted?", "answers": [ { "answer": "One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. 
Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.", "type": "extractive" }, { "answer": "associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds", "type": "extractive" } ], "q_uid": "20e38438471266ce021817c6364f6a46d01564f2", "evidence": [ { "raw_evidence": [ "Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance.", "To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form:", "One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.", "A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\\beta }$ factor, leading the final loss to be $(1-p)^{\\beta }\\log p$." ], "highlighted_evidence": [ "Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance.\n\nTo address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form:\n\nOne can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. 
Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.\n\nA close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\\beta }$ factor, leading the final loss to be $(1-p)^{\\beta }\\log p$." ] }, { "raw_evidence": [ "Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples." ], "highlighted_evidence": [ "Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples." ] } ] } ], "1906.01081": [ { "question": "Ngrams of which length are aligned using PARENT?", "answers": [ { "answer": "Answer with content missing: (Parent subsections) combine precisions for n-gram orders 1-4", "type": "abstractive" } ], "q_uid": "28067da818e3f61f8b5152c0d42a531bf0f987d4", "evidence": [ { "raw_evidence": [ "We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (\u00a7 SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (\u00a7 SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. 
When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are:", "PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 ." ], "highlighted_evidence": [ "PARENT\nPARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 ." ] } ] }, { "question": "How many people participated in their evaluation study of table-to-text models?", "answers": [ { "answer": "about 500", "type": "abstractive" } ], "q_uid": "bf3b27a4f4be1f9ae31319877fd0c75c03126fd5", "evidence": [ { "raw_evidence": [ "The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio." ], "highlighted_evidence": [ "In total around 500 different workers were involved in the annotation.", "about 500" ] } ] } ], "1611.01576": [ { "question": "What languages pairs are used in machine translation?", "answers": [ { "answer": "German\u2013English", "type": "extractive" }, { "answer": "German\u2013English", "type": "extractive" } ], "q_uid": "2f901dab6b757e12763b23ae8b37ae2e517a2271", "evidence": [ { "raw_evidence": [ "We evaluate the sequence-to-sequence QRNN architecture described in SECREF5 on a challenging neural machine translation task, IWSLT German\u2013English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points." ], "highlighted_evidence": [ "We evaluate the sequence-to-sequence QRNN architecture described in SECREF5 on a challenging neural machine translation task, IWSLT German\u2013English spoken-domain translation, applying fully character-level segmentation. " ] }, { "raw_evidence": [ "We evaluate the sequence-to-sequence QRNN architecture described in SECREF5 on a challenging neural machine translation task, IWSLT German\u2013English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points." 
], "highlighted_evidence": [ "We evaluate the sequence-to-sequence QRNN architecture described in SECREF5 on a challenging neural machine translation task, IWSLT German\u2013English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points." ] } ] }, { "question": "What sentiment classification dataset is used?", "answers": [ { "answer": "the IMDb movie review dataset BIBREF17", "type": "extractive" }, { "answer": "IMDb movie review", "type": "extractive" } ], "q_uid": "b591853e938984e6069d738371500ebdec50d256", "evidence": [ { "raw_evidence": [ "We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset BIBREF17 . The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words BIBREF18 . We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., BIBREF19 )." ], "highlighted_evidence": [ "We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset BIBREF17 . The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words BIBREF18 . We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., BIBREF19 )." ] }, { "raw_evidence": [ "We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset BIBREF17 . The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words BIBREF18 . We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., BIBREF19 )." ], "highlighted_evidence": [ "We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset BIBREF17 . " ] } ] }, { "question": "What pooling function is used?", "answers": [ { "answer": "dynamic average pooling", "type": "extractive" }, { "answer": " f-pooling, fo-pooling, and ifo-pooling ", "type": "extractive" } ], "q_uid": "a130306c6662ff489df13fb3f8faa7cba8c52a21", "evidence": [ { "raw_evidence": [ "Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which BIBREF12 term \u201cdynamic average pooling\u201d, uses only a forget gate: DISPLAYFORM0" ], "highlighted_evidence": [ "We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. 
The simplest option, which BIBREF12 term \u201cdynamic average pooling\u201d, uses only a forget gate: DISPLAYFORM0" ] }, { "raw_evidence": [ "We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize INLINEFORM0 or INLINEFORM1 to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time." ], "highlighted_evidence": [ "We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize INLINEFORM0 or INLINEFORM1 to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time." ] } ] } ], "1904.09535": [ { "question": "Do they report results only on English?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "b1cf5739467ba90059add58d11b73d075a11ec86", "evidence": [ { "raw_evidence": [ "To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 . The experimental results showed that the models built with NeuronBlocks can achieve reliable and competitive results on various tasks, with productivity greatly improved.", "For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely, PERSON, LOCATION, ORGANIZATION, and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated meaningful improvement with BIOES scheme BIBREF16 , BIBREF17 . Table TABREF28 shows the results on CoNLL-2003 Englist testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of original models, with comparative or slightly better performance." ], "highlighted_evidence": [ "To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 .", "For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task." ] } ] }, { "question": "What neural network modules are included in NeuronBlocks?", "answers": [ { "answer": "Embedding Layer, Neural Network Layers, Loss Function, Metrics", "type": "extractive" }, { "answer": "Embedding Layer, Neural Network Layers, Loss Function, Metrics", "type": "extractive" } ], "q_uid": "2ea4347f1992b0b3958c4844681ff0fe4d0dd1dd", "evidence": [ { "raw_evidence": [ "We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. 
The Block Zoo is an open framework, and more modules can be added in the future.", "Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.", "Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.", "Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .", "Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported." ], "highlighted_evidence": [ "The Block Zoo is an open framework, and more modules can be added in the future.", "Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.", "Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.", "Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .\n\nMetrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported." ] }, { "raw_evidence": [ "The Neuronblocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16 . It consists of two layers: the Block Zoo and the Model Zoo. In Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface. These blocks serve as basic and exchangeable units to construct complex network architectures for different NLP tasks. In Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate. The whole task can be completed without any coding efforts.", "We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. 
The Block Zoo is an open framework, and more modules can be added in the future.", "Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.", "Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.", "Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .", "Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported." ], "highlighted_evidence": [ "The Neuronblocks is built on PyTorch.", "It consists of two layers: the Block Zoo and the Model Zoo.", "Block Zoo\nWe recognize the following major functional categories of neural network components.", "Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.", "Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.", "Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .\n\n", "Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported." ] } ] }, { "question": "How do the authors evidence the claim that many engineers find it a big overhead to choose from multiple frameworks, models and optimization techniques?", "answers": [ { "answer": "By conducting a survey among engineers", "type": "abstractive" } ], "q_uid": "4f253dfced6a749bf57a1b4984dc962ce9550184", "evidence": [ { "raw_evidence": [ "The above challenges often hinder the productivity of engineers, and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models, which facilitates engineers to develop DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas." ], "highlighted_evidence": [ "Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas." 
] } ] } ], "1911.03059": [ { "question": "what datasets did they use?", "answers": [ { "answer": "Dataset of total 3500 questions from the Internet and other sources such as books of general knowledge questions, history, etc.", "type": "abstractive" }, { "answer": "3500 questions collected from the internet and books.", "type": "abstractive" } ], "q_uid": "dc1cec824507fc85ac1ba87882fe1e422ff6cffb", "evidence": [ { "raw_evidence": [ "Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to." ], "highlighted_evidence": [ "We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to." ] }, { "raw_evidence": [ "Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to." ], "highlighted_evidence": [ "We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. " ] } ] }, { "question": "what ml based approaches were compared?", "answers": [ { "answer": "Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)", "type": "extractive" }, { "answer": "Multi-Layer Perceptron, Naive Bayes Classifier, Support Vector Machine, Gradient Boosting Classifier, Stochastic Gradient Descent, K Nearest Neighbour, Random Forest", "type": "extractive" } ], "q_uid": "f428618ca9c017e0c9c2a23515dab30a7660f65f", "evidence": [ { "raw_evidence": [ "In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions have flexible inquiring ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions in Bengali Language available, collecting questions is an additional challenge. Different difficulties in building a QA System are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0 that employ the Stochastic Gradient Descent (SGD)." 
], "highlighted_evidence": [ "In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers." ] }, { "raw_evidence": [ "In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions have flexible inquiring ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions in Bengali Language available, collecting questions is an additional challenge. Different difficulties in building a QA System are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0 that employ the Stochastic Gradient Descent (SGD)." ], "highlighted_evidence": [ "In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. " ] } ] } ], "1706.08198": [ { "question": "Is pre-training effective in their evaluation?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "8ce11515634236165cdb06ba80b9a36a8b9099a2", "evidence": [ { "raw_evidence": [ "In this paper, we evaluated the encoder-decoder-reconstructor on English-Japanese and Japanese-English translation tasks. In addition, we evaluate the effectiveness of pre-training by comparing it with a jointly-trained model of forward translation and back-translation. Experimental results show that the encoder-decoder-reconstructor offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on English-Japanese translation task, and the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training." ], "highlighted_evidence": [ "In addition, we evaluate the effectiveness of pre-training by comparing it with a jointly-trained model of forward translation and back-translation. Experimental results show that the encoder-decoder-reconstructor offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on English-Japanese translation task, and the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training." 
] }, { "raw_evidence": [ "In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training." ], "highlighted_evidence": [ "In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training." ] } ] }, { "question": "What parallel corpus did they use?", "answers": [ { "answer": "Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0, NTCIR PatentMT Parallel Corpus BIBREF1", "type": "extractive" }, { "answer": "Asian Scientific Paper Excerpt Corpus, NTCIR PatentMT Parallel Corpus ", "type": "extractive" } ], "q_uid": "6024039bbd1118c5dab86c41cce1175d99f10a25", "evidence": [ { "raw_evidence": [ "We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 . Regarding the training data of ASPEC, we used only the first 1 million sentences sorted by sentence-alignment similarity. Japanese sentences were segmented by the morphological analyzer MeCab (version 0.996, IPADIC), and English sentences were tokenized by tokenizer.perl of Moses. Table TABREF14 shows the numbers of the sentences in each corpus. Note that sentences with more than 40 words were excluded from the training data." ], "highlighted_evidence": [ "We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 ." ] }, { "raw_evidence": [ "We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 . Regarding the training data of ASPEC, we used only the first 1 million sentences sorted by sentence-alignment similarity. Japanese sentences were segmented by the morphological analyzer MeCab (version 0.996, IPADIC), and English sentences were tokenized by tokenizer.perl of Moses. Table TABREF14 shows the numbers of the sentences in each corpus. Note that sentences with more than 40 words were excluded from the training data." ], "highlighted_evidence": [ "We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 ." 
] } ] } ], "1909.08089": [ { "question": "What do they mean by global and local context?", "answers": [ { "answer": "global (the whole document), local context (e.g., the section/topic)", "type": "extractive" }, { "answer": "global (the whole document) and the local context (e.g., the section/topic) ", "type": "extractive" } ], "q_uid": "b66c9a4021b6c8529cac1a2b54dacd8ec79afa5f", "evidence": [ { "raw_evidence": [ "In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary" ], "highlighted_evidence": [ "In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary" ] }, { "raw_evidence": [ "In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary" ], "highlighted_evidence": [ "In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary" ] } ] } ], "1910.09982": [ { "question": "What are the 18 propaganda techniques?", "answers": [ { "answer": "Loaded language, Name calling or labeling, Repetition, Exaggeration or minimization, Doubt, Appeal to fear/prejudice, Flag-waving, Causal oversimplification, Slogans, Appeal to authority, Black-and-white fallacy, dictatorship, Thought-terminating clich\u00e9, Whataboutism, Reductio ad Hitlerum, Red herring, Bandwagon, Obfuscation, intentional vagueness, confusion, Straw man", "type": "extractive" }, { "answer": "1. Loaded language, 2. Name calling or labeling, 3. Repetition, 4. Exaggeration or minimization, 5. Doubt, 6. Appeal to fear/prejudice, 7. Flag-waving, 8. Causal oversimplification, 9. Slogans, 10. Appeal to authority, 11. Black-and-white fallacy, dictatorship, 12. Thought-terminating clich\u00e9, 13. Whataboutism, 14. Reductio ad Hitlerum, 15. Red herring, 16. Bandwagon, 17. Obfuscation, intentional vagueness, confusion, 18. Straw man", "type": "extractive" } ], "q_uid": "6bfba3ddca5101ed15256fca75fcdc95a53cece7", "evidence": [ { "raw_evidence": [ "Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:", "Propaganda Techniques ::: 1. Loaded language.", "Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.", "Propaganda Techniques ::: 2. Name calling or labeling.", "Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.", "Propaganda Techniques ::: 3. 
Repetition.", "Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.", "Propaganda Techniques ::: 4. Exaggeration or minimization.", "Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.", "Propaganda Techniques ::: 5. Doubt.", "Questioning the credibility of someone or something.", "Propaganda Techniques ::: 6. Appeal to fear/prejudice.", "Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.", "Propaganda Techniques ::: 7. Flag-waving.", "Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.", "Propaganda Techniques ::: 8. Causal oversimplification.", "Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.", "Propaganda Techniques ::: 9. Slogans.", "A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.", "Propaganda Techniques ::: 10. Appeal to authority.", "Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.", "Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.", "Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).", "Propaganda Techniques ::: 12. Thought-terminating clich\u00e9.", "Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.", "Propaganda Techniques ::: 13. Whataboutism.", "Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.", "Propaganda Techniques ::: 14. Reductio ad Hitlerum.", "Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.", "Propaganda Techniques ::: 15. Red herring.", "Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.", "Propaganda Techniques ::: 16. Bandwagon.", "Attempting to persuade the target audience to join in and take the course of action because \u201ceveryone else is taking the same action\u201d BIBREF15.", "Propaganda Techniques ::: 17. 
Obfuscation, intentional vagueness, confusion.", "Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.", "Propaganda Techniques ::: 18. Straw man.", "When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22." ], "highlighted_evidence": [ " We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:\n\nPropaganda Techniques ::: 1. Loaded language.\nUsing words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.\n\nPropaganda Techniques ::: 2. Name calling or labeling.\nLabeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.\n\nPropaganda Techniques ::: 3. Repetition.\nRepeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.\n\nPropaganda Techniques ::: 4. Exaggeration or minimization.\nEither representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.\n\nPropaganda Techniques ::: 5. Doubt.\nQuestioning the credibility of someone or something.\n\nPropaganda Techniques ::: 6. Appeal to fear/prejudice.\nSeeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.\n\nPropaganda Techniques ::: 7. Flag-waving.\nPlaying on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.\n\nPropaganda Techniques ::: 8. Causal oversimplification.\nAssuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.\n\nPropaganda Techniques ::: 9. Slogans.\nA brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.\n\nPropaganda Techniques ::: 10. Appeal to authority.\nStating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.\n\nPropaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.\nPresenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).\n\nPropaganda Techniques ::: 12. Thought-terminating clich\u00e9.\nWords or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.\n\nPropaganda Techniques ::: 13. 
Whataboutism.\nDiscredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.\n\nPropaganda Techniques ::: 14. Reductio ad Hitlerum.\nPersuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.\n\nPropaganda Techniques ::: 15. Red herring.\nIntroducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.\n\nPropaganda Techniques ::: 16. Bandwagon.\nAttempting to persuade the target audience to join in and take the course of action because \u201ceveryone else is taking the same action\u201d BIBREF15.\n\nPropaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.\nUsing deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.\n\nPropaganda Techniques ::: 18. Straw man.\nWhen an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22." ] }, { "raw_evidence": [ "Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:", "Propaganda Techniques ::: 1. Loaded language.", "Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.", "Propaganda Techniques ::: 2. Name calling or labeling.", "Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.", "Propaganda Techniques ::: 3. Repetition.", "Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.", "Propaganda Techniques ::: 4. Exaggeration or minimization.", "Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.", "Propaganda Techniques ::: 5. Doubt.", "Questioning the credibility of someone or something.", "Propaganda Techniques ::: 6. Appeal to fear/prejudice.", "Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.", "Propaganda Techniques ::: 7. Flag-waving.", "Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.", "Propaganda Techniques ::: 8. Causal oversimplification.", "Assuming one cause when there are multiple causes behind an issue. 
We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.", "Propaganda Techniques ::: 9. Slogans.", "A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.", "Propaganda Techniques ::: 10. Appeal to authority.", "Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.", "Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.", "Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).", "Propaganda Techniques ::: 12. Thought-terminating clich\u00e9.", "Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.", "Propaganda Techniques ::: 13. Whataboutism.", "Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.", "Propaganda Techniques ::: 14. Reductio ad Hitlerum.", "Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.", "Propaganda Techniques ::: 15. Red herring.", "Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.", "Propaganda Techniques ::: 16. Bandwagon.", "Attempting to persuade the target audience to join in and take the course of action because \u201ceveryone else is taking the same action\u201d BIBREF15.", "Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.", "Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.", "Propaganda Techniques ::: 18. Straw man.", "When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22." ], "highlighted_evidence": [ "We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:\n\nPropaganda Techniques ::: 1. Loaded language.\nUsing words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.\n\nPropaganda Techniques ::: 2. Name calling or labeling.\nLabeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.\n\nPropaganda Techniques ::: 3. 
Repetition.\nRepeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.\n\nPropaganda Techniques ::: 4. Exaggeration or minimization.\nEither representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.\n\nPropaganda Techniques ::: 5. Doubt.\nQuestioning the credibility of someone or something.\n\nPropaganda Techniques ::: 6. Appeal to fear/prejudice.\nSeeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.\n\nPropaganda Techniques ::: 7. Flag-waving.\nPlaying on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.\n\nPropaganda Techniques ::: 8. Causal oversimplification.\nAssuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.\n\nPropaganda Techniques ::: 9. Slogans.\nA brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.\n\nPropaganda Techniques ::: 10. Appeal to authority.\nStating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.\n\nPropaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.\nPresenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).\n\nPropaganda Techniques ::: 12. Thought-terminating clich\u00e9.\nWords or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.\n\nPropaganda Techniques ::: 13. Whataboutism.\nDiscredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.\n\nPropaganda Techniques ::: 14. Reductio ad Hitlerum.\nPersuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.\n\nPropaganda Techniques ::: 15. Red herring.\nIntroducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.\n\nPropaganda Techniques ::: 16. Bandwagon.\nAttempting to persuade the target audience to join in and take the course of action because \u201ceveryone else is taking the same action\u201d BIBREF15.\n\nPropaganda Techniques ::: 17. 
Obfuscation, intentional vagueness, confusion.\nUsing deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.\n\nPropaganda Techniques ::: 18. Straw man.\nWhen an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22." ] } ] }, { "question": "What dataset was used?", "answers": [ { "answer": " news articles in free-text format", "type": "extractive" }, { "answer": "collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators", "type": "extractive" } ], "q_uid": "df5a4505edccc0ee11349ed6e7958cf6b84c9ed4", "evidence": [ { "raw_evidence": [ "The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators." ], "highlighted_evidence": [ "The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. " ] }, { "raw_evidence": [ "The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators.", "The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively. Figure FIGREF15 shows an annotated example, which contains several propaganda techniques. For example, the fragment babies on line 1 is an instance of both Name_Calling and Labeling. Note that the fragment not looking as though Trump killed his grandma on line 4 is an instance of Exaggeration_or_Minimisation and it overlaps with the fragment killed his grandma, which is an instance of Loaded_Language." ], "highlighted_evidence": [ "The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators.", "The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively." 
] } ] }, { "question": "What was the baseline for this task?", "answers": [ { "answer": "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly.", "type": "abstractive" }, { "answer": "SLC task is a very simple logistic regression classifier, FLC task generates spans and selects one of the 18 techniques randomly", "type": "extractive" } ], "q_uid": "fd753ab5177d7bd27db0e0afc12411876ee607df", "evidence": [ { "raw_evidence": [ "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34.", "The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41." ], "highlighted_evidence": [ "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. ", "The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. " ] }, { "raw_evidence": [ "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34.", "The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41." ], "highlighted_evidence": [ "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence.", "The baseline for the FLC task generates spans and selects one of the 18 techniques randomly." ] } ] } ], "1609.00559": [ { "question": "What is a second order co-ocurrence matrix?", "answers": [ { "answer": "frequencies of the other words which occur with both of them (i.e., second order co\u2013occurrences)", "type": "extractive" }, { "answer": "The matrix containing co-occurrences of the words which occur with the both words of every given pair of words.", "type": "abstractive" } ], "q_uid": "88e62ea7a4d1d2921624b8480b5c6b50cfa5ad42", "evidence": [ { "raw_evidence": [ "However, despite these successes distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second\u2013order co\u2013occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co\u2013occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co\u2013occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second\u2013order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts." 
], "highlighted_evidence": [ "In this approach the similarity between two words is not strictly based on their co\u2013occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co\u2013occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 ." ] }, { "raw_evidence": [ "However, despite these successes distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second\u2013order co\u2013occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co\u2013occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co\u2013occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second\u2013order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts." ], "highlighted_evidence": [ "In this approach the similarity between two words is not strictly based on their co\u2013occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co\u2013occurrences)." ] } ] }, { "question": "How many humans participated?", "answers": [ { "answer": "16", "type": "abstractive" } ], "q_uid": "4dcf67b5e7bd1422e7e70c657f6eacccd8de06d3", "evidence": [ { "raw_evidence": [ "MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. The relatedness of each term pair was assessed based on a four point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related and (1.0) unrelated. MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter\u2013annotator agreement was achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. We evaluate our method on the mean of the physician scores, and the mean of the coders scores in this subset in the same manner as reported by PedersenPPC07.", "UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch a bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness. The Intraclass Correlation Coefficient (ICC) for the reference standard tagged for similarity was 0.47, and 0.50 for relatedness. Therefore, as suggested by Pakhomov and colleagues,we use a subset of the ratings consisting of 401 pairs for the similarity set and 430 pairs for the relatedness set which each have an ICC of 0.73." ], "highlighted_evidence": [ "MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. 
", "UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. " ] } ] } ], "1604.00727": [ { "question": "Do the authors also try the model on other datasets?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "784ce5a983c5f2cc95a2c60ce66f2a8a50f3636f", "evidence": [ { "raw_evidence": [ "In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set." ], "highlighted_evidence": [ "In contrast, our models are trained only on the 76K questions in the training set." ] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What word level and character level model baselines are used?", "answers": [ { "answer": "None", "type": "abstractive" }, { "answer": "Word-level Memory Neural Networks (MemNNs) proposed in Bordes et al. (2015)", "type": "abstractive" } ], "q_uid": "7705dd04acedaefee30d8b2c9978537afb2040dc", "evidence": [ { "raw_evidence": [ "In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set." ], "highlighted_evidence": [ "In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines." ] }, { "raw_evidence": [ "In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set.", "We evaluate the proposed model on the SimpleQuestions dataset BIBREF0 . The dataset consists of 108,442 single-relation questions and their corresponding (topic entity, predicate, answer entity) triples from Freebase. It is split into 75,910 train, 10,845 validation, and 21,687 test questions. Only 10,843 of the 45,335 unique words in entity aliases and 886 out of 1,034 unique predicates in the test set were present in the train set. For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities." ], "highlighted_evidence": [ "In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. 
For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. ", "For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities." ] } ] } ], "1612.02482": [ { "question": "How were the human judgements assembled?", "answers": [ { "answer": "50 human annotators ranked a random sample of 100 translations by Adequacy, Fluency and overall ranking on a 5-point scale.", "type": "abstractive" }, { "answer": "adequacy, precision and ranking values", "type": "extractive" } ], "q_uid": "0ee73909ac638903da4a0e5565c8571fc794ab96", "evidence": [ { "raw_evidence": [ "To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models\u2019 sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 ." ], "highlighted_evidence": [ "A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models\u2019 sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking." ] }, { "raw_evidence": [ "To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . 
A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models\u2019 sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 ." ], "highlighted_evidence": [ "To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive)." ] } ] } ], "1608.01084": [ { "question": "Did they only experiment with one language pair?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "1f07e837574519f2b696f3d6fa3230af0b931e5d", "evidence": [ { "raw_evidence": [ "We built a phrase-based Chinese-to-English SMT system by using Moses BIBREF18 . Our parallel training text is a collection of parallel corpora from LDC, which we divide into older corpora and newer corpora. Due to the dominant older data, we duplicate the newer corpora of various domains by 10 times to achieve better domain balance. To reduce the possibility of alignment errors, parallel sentences in the corpora that are longer than 85 words in either Chinese (after word segmentation) or English are discarded. In the end, the final parallel text consists of around 8.8M sentence pairs, 228M Chinese tokens, and 254M English tokens (a token can be a word or punctuation symbol). We also added two dictionaries by concatenating them to our training parallel text. The total number of words in these two corpora is 1.81M for Chinese and 2.03M for English." ], "highlighted_evidence": [ "We built a phrase-based Chinese-to-English SMT system by using Moses BIBREF18 . ", "In the end, the final parallel text consists of around 8.8M sentence pairs, 228M Chinese tokens, and 254M English tokens (a token can be a word or punctuation symbol). ", "The total number of words in these two corpora is 1.81M for Chinese and 2.03M for English." 
] }, { "raw_evidence": [ "We built a phrase-based Chinese-to-English SMT system by using Moses BIBREF18 . Our parallel training text is a collection of parallel corpora from LDC, which we divide into older corpora and newer corpora. Due to the dominant older data, we duplicate the newer corpora of various domains by 10 times to achieve better domain balance. To reduce the possibility of alignment errors, parallel sentences in the corpora that are longer than 85 words in either Chinese (after word segmentation) or English are discarded. In the end, the final parallel text consists of around 8.8M sentence pairs, 228M Chinese tokens, and 254M English tokens (a token can be a word or punctuation symbol). We also added two dictionaries by concatenating them to our training parallel text. The total number of words in these two corpora is 1.81M for Chinese and 2.03M for English." ], "highlighted_evidence": [ "We built a phrase-based Chinese-to-English SMT system by using Moses BIBREF18 ." ] } ] } ], "1904.10503": [ { "question": "What results do they achieve using their proposed approach?", "answers": [ { "answer": "F-1 score on the OntoNotes is 88%, and it is 53% on Wiki (gold).", "type": "abstractive" }, { "answer": " total F-1 score on the OntoNotes dataset is 88%, total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%", "type": "extractive" } ], "q_uid": "729694a9fe1e05d329b7a4078a596fe606bc5a95", "evidence": [ { "raw_evidence": [ "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)" ], "highlighted_evidence": [ "The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. " ] }, { "raw_evidence": [ "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. 
The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)" ], "highlighted_evidence": [ "The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%." ] } ] }, { "question": "How do they combine a deep learning model with a knowledge base?", "answers": [ { "answer": "Entities from a deep learning model are linked to the related entities from a knowledge base by a lookup.", "type": "abstractive" }, { "answer": "ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token", "type": "extractive" } ], "q_uid": "1c997c268c68149ae6fb43d83ffcd53f0e7fe57e", "evidence": [ { "raw_evidence": [ "While these knowledge bases provide semantically rich and fine-granular classes and relationship types, the task of entity classification often requires associating coarse-grained classes with discovered surface forms of entities. Most existing studies consider NER and entity linking as two separate tasks, whereas we try to combine the two. It has been shown that one can significantly increase the semantic information carried by a NER system when we successfully linking entities from a deep learning method to the related entities from a knowledge base BIBREF26 , BIBREF27 .", "Redirection: For the Wikidata linking element, we recognize that the lookup will be constrained by the most common lookup name for each entity. Consider the utterance (referring to the NBA basketball player) from Figure FIGREF12 \u201cMichael Jeffrey Jordan in San Jose\u201d as an example. The lookup for this entity in Wikidata is \u201cMichael Jordan\u201d and consequently will not be picked up if we were to use an exact string match. A simple method to circumvent such a problem is the usage of a redirection list. Such a list is provided on an entity by entity basis in the \u201cAlso known as\u201d section in Wikidata. Using this redirection list, when we do not find an exact string match improves the recall of our model by 5-10%. Moreover, with the example of Michael Jordan (person), using our current framework, we will always refer to the retired basketball player (Q41421). We will never, for instance, pick up Michael Jordan (Q27069141) the American football cornerback. Or in fact any other Michael Jordan, famous or otherwise. One possible method to overcome this is to add a disambiguation layer, which seeks to use context from earlier parts of the text. This is, however, work for future improvement and we only consider the most common version of that entity." ], "highlighted_evidence": [ "It has been shown that one can significantly increase the semantic information carried by a NER system when we successfully linking entities from a deep learning method to the related entities from a knowledge base BIBREF26 , BIBREF27 .", "Redirection: For the Wikidata linking element, we recognize that the lookup will be constrained by the most common lookup name for each entity. " ] }, { "raw_evidence": [ "The architecture of our proposed model is shown in Figure FIGREF12 . The input is a list of tokens and the output are the predicted entity types. The ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token. We then pass this to a softmax layer as a tag decoder to predict the entity types." 
], "highlighted_evidence": [ "The input is a list of tokens and the output are the predicted entity types. The ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token. We then pass this to a softmax layer as a tag decoder to predict the entity types." ] } ] } ], "1912.01772": [ { "question": "What are the models used for the baseline of the three NLP tasks?", "answers": [ { "answer": "state-of-the-art Transformer architecture, Kaldi, speech clustergen statistical speech synthesizer", "type": "extractive" }, { "answer": "For speech synthesis, they build a speech clustergen statistical speech synthesizer BIBREF9. For speech recognition, they use Kaldi BIBREF11. For Machine Translation, they use a Transformer architecture from BIBREF15.", "type": "abstractive" } ], "q_uid": "5cc2daca2a84ddccba9cdd9449e51bb3f64b3dde", "evidence": [ { "raw_evidence": [ "Baseline Results ::: Speech Synthesis", "In our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn/sentence sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above.", "For the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on occurrence of F0). This gave us 13 hours and 48 minutes of speech.", "We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion, a standard method for automatically measuring quality of speech generation BIBREF10. We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming the better scores corresponded to better alignments, following the techniques of BIBREF7.", "For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.", "We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs." 
], "highlighted_evidence": [ "Baseline Results ::: Speech Synthesis\nIn our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn/sentence sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above.\n\nFor the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on occurrence of F0). This gave us 13 hours and 48 minutes of speech.\n\nWe phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data.", "For speech recognition (ASR) we used Kaldi BIBREF11.", "We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15." ] }, { "raw_evidence": [ "We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion, a standard method for automatically measuring quality of speech generation BIBREF10. We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming the better scores corresponded to better alignments, following the techniques of BIBREF7.", "For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.", "We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs." ], "highlighted_evidence": [ "We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data.", "For speech recognition (ASR) we used Kaldi BIBREF11", "We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15." 
] } ] } ], "1603.01987": [ { "question": "Is it valid to presume a bad medical wikipedia article should not contain much domain-specific jargon?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "6a633811019e9323dc8549ad540550d27aa6d972", "evidence": [ { "raw_evidence": [ "The idea of considering infoboxes is not novel: for example, in BIBREF7 the authors noticed that the presence of an infobox is a characteristic featured by good articles. However, in the specific case of the Medicine Portal, the presence of an infobox does not seem strictly related to the quality class the article belongs to (according to the manual labelling). Indeed, it is recurrent that articles, spanning all classes, have an infobox, containing a schematic synthesis of the article. In particular, pages with descriptions of diseases usually have an infobox with the medical standard code of the disease (i.e., IDC-9 and IDC-10), as in Figure 2 ." ], "highlighted_evidence": [ "The idea of considering infoboxes is not novel: for example, in BIBREF7 the authors noticed that the presence of an infobox is a characteristic featured by good articles. However, in the specific case of the Medicine Portal, the presence of an infobox does not seem strictly related to the quality class the article belongs to (according to the manual labelling). Indeed, it is recurrent that articles, spanning all classes, have an infobox, containing a schematic synthesis of the article. In particular, pages with descriptions of diseases usually have an infobox with the medical standard code of the disease (i.e., IDC-9 and IDC-10)" ] } ] } ], "1908.06941": [ { "question": "What novel PMI variants are introduced?", "answers": [ { "answer": "clipped PMI; NNEGPMI", "type": "abstractive" }, { "answer": "clipped $\\mathit {PMI}$, $\\mathit {NNEGPMI}$", "type": "extractive" } ], "q_uid": "6b9b9e5d154cb963f6d921093539490daa5ebbae", "evidence": [ { "raw_evidence": [ "where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,", "which is equivalent to $\\mathit {PPMI}$ when $z = 0$.", "such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:" ], "highlighted_evidence": [ "To deal with negative values, we propose clipped $\\mathit {PMI}$,\n\nwhich is equivalent to $\\mathit {PPMI}$ when $z = 0$.", "In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:" ] }, { "raw_evidence": [ "where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,", "which is equivalent to $\\mathit {PPMI}$ when $z = 0$.", "Normalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:", "such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. 
In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:" ], "highlighted_evidence": [ "To deal with negative values, we propose clipped $\\mathit {PMI}$,\n\nwhich is equivalent to $\\mathit {PPMI}$ when $z = 0$.", "Normalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:\n\nsuch that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:" ] } ] }, { "question": "What semantic and syntactic tasks are used as probes?", "answers": [ { "answer": "Word Content (WC) probing task, Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks", "type": "extractive" }, { "answer": "SimLex, Rare Word, Google Semantic, Semantic Textual Similarity, Word Content (WC) probing, Google Syntactic analogies, Depth, Top Constituent, part-of-speech (POS) tagging", "type": "extractive" } ], "q_uid": "bc4dca3e1e83f3b4bbb53a31557fc5d8971603b2", "evidence": [ { "raw_evidence": [ "Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.", "Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24." ], "highlighted_evidence": [ "We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.", "Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax." ] }, { "raw_evidence": [ "Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. 
We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.", "Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24." ], "highlighted_evidence": [ "Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.", "Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax.", "Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24." ] } ] }, { "question": "What are the disadvantages to clipping negative PMI?", "answers": [ { "answer": "It may lead to poor rare word representations and word analogies.", "type": "abstractive" } ], "q_uid": "d46c0ea1ba68c649cc64d2ebb6af20202a74a3c7", "evidence": [ { "raw_evidence": [ "Why incorporate -PMI? $\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\mathit {\\texttt {-}PMI}$ only for rare words (where it is most needed)." ], "highlighted_evidence": [ "$\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies." 
] } ] }, { "question": "Why are statistics from finite corpora unreliable?", "answers": [ { "answer": "$\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus", "type": "extractive" }, { "answer": "A finite corpora may entirely omit rare word combinations", "type": "abstractive" } ], "q_uid": "6844683935d0d8f588fa06530f5068bf3e1ed0c0", "evidence": [ { "raw_evidence": [ "Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: \u201cWhat is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?\u201d. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance." ], "highlighted_evidence": [ "Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. " ] }, { "raw_evidence": [ "Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: \u201cWhat is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?\u201d. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance." ], "highlighted_evidence": [ "Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus." ] } ] } ], "1702.03856": [ { "question": "what is the domain of the corpus?", "answers": [ { "answer": "telephone calls", "type": "extractive" } ], "q_uid": "8acab64ba72831633e8cc174d5469afecccf3ae9", "evidence": [ { "raw_evidence": [ "Our simple system (\u00a7 SECREF2 ) builds on unsupervised speech processing BIBREF5 , BIBREF6 , BIBREF7 , and in particular on unsupervised term discovery (UTD), which creates hard clusters of repeated word-like units in raw speech BIBREF8 , BIBREF9 . The clusters do not account for all of the audio, but we can use them to simulate a partial, noisy transcription, or pseudotext, which we pair with translations to learn a bag-of-words translation model. We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (\u00a7 SECREF3 ). Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (\u00a7 SECREF4 ). Despite these difficulties, we demonstrate that the system learns to translate some content words (\u00a7 SECREF5 )." 
], "highlighted_evidence": [ "We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (\u00a7 SECREF3 ). " ] } ] }, { "question": "what challenges are identified?", "answers": [ { "answer": "Assigning wrong words to a cluster, Splitting words across different clusters, sparse, giving low coverage", "type": "extractive" }, { "answer": "low coverage of audio, difficulty in cross-speaker clustering", "type": "extractive" } ], "q_uid": "53aa07cc4cc4e7107789ae637dbda8c9f6c1e6aa", "evidence": [ { "raw_evidence": [ "Analysis of challenges from UTD", "Our system relies on the pseudotext produced by ZRTools (the only freely available UTD system we are aware of), which presents several challenges for MT. We used the default ZRTools parameters, and it might be possible to tune them to our task, but we leave this to future work.", "Assigning wrong words to a cluster", "Since UTD is unsupervised, the discovered clusters are noisy. Fig. FIGREF4 shows an example of an incorrect match between the acoustically similar \u201cqu\u00e9 tal vas con\u201d and \u201cte trabajo y\u201d in utterances B and C, leading to a common assignment to c2. Such inconsistencies in turn affect the translation distribution conditioned on c2.", "Splitting words across different clusters", "Although most UTD matches are across speakers, recall of cross-speaker matches is lower than for same-speaker matches. As a result, the same word from different speakers often appears in multiple clusters, preventing the model from learning good translations. ZRTools discovers 15,089 clusters in our data, though there are only 10,674 word types. Only 1,614 of the clusters map one-to-one to a unique word type, while a many-to-one mapping of the rest covers only 1,819 gold types (leaving 7,241 gold types with no corresponding cluster).", "UTD is sparse, giving low coverage", "UTD is most reliable on long and frequently-repeated patterns, so many spoken words are not represented in the pseudotext, as in Fig. FIGREF4 . We found that the patterns discovered by ZRTools match only 28% of the audio. This low coverage reduces training data size, affects alignment quality, and adversely affects translation, which is only possible when pseudoterms are present. For almost half the utterances, UTD fails to produce any pseudoterm at all." ], "highlighted_evidence": [ "Analysis of challenges from UTD\nOur system relies on the pseudotext produced by ZRTools (the only freely available UTD system we are aware of), which presents several challenges for MT. ", "Assigning wrong words to a cluster\nSince UTD is unsupervised, the discovered clusters are noisy. ", "Splitting words across different clusters\nAlthough most UTD matches are across speakers, recall of cross-speaker matches is lower than for same-speaker matches. As a result, the same word from different speakers often appears in multiple clusters, preventing the model from learning good translations.", "UTD is sparse, giving low coverage", "We found that the patterns discovered by ZRTools match only 28% of the audio. This low coverage reduces training data size, affects alignment quality, and adversely affects translation, which is only possible when pseudoterms are present." 
] }, { "raw_evidence": [ "Our simple system (\u00a7 SECREF2 ) builds on unsupervised speech processing BIBREF5 , BIBREF6 , BIBREF7 , and in particular on unsupervised term discovery (UTD), which creates hard clusters of repeated word-like units in raw speech BIBREF8 , BIBREF9 . The clusters do not account for all of the audio, but we can use them to simulate a partial, noisy transcription, or pseudotext, which we pair with translations to learn a bag-of-words translation model. We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (\u00a7 SECREF3 ). Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (\u00a7 SECREF4 ). Despite these difficulties, we demonstrate that the system learns to translate some content words (\u00a7 SECREF5 )." ], "highlighted_evidence": [ "Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (\u00a7 SECREF4 )." ] } ] }, { "question": "what is the size of the speech corpus?", "answers": [ { "answer": "104 telephone calls, transcripts contain 168,195 Spanish word tokens, translations contain 159,777 English word tokens", "type": "extractive" }, { "answer": "104 telephone calls, which pair 11 hours of audio", "type": "extractive" } ], "q_uid": "72755c2d79210857cfff60bfbcb55f83c71ada51", "evidence": [ { "raw_evidence": [ "Although we did not have access to a low-resource dataset, there is a corpus of noisy multi-speaker speech that simulates many of the conditions we expect to find in our motivating applications: the CALLHOME Spanish\u2013English speech translation dataset (LDC2014T23; Post el al., 2013). We ran UTD over all 104 telephone calls, which pair 11 hours of audio with Spanish transcripts and their crowdsourced English translations. The transcripts contain 168,195 Spanish word tokens (10,674 types), and the translations contain 159,777 English word tokens (6,723 types). Though our system does not require Spanish transcripts, we use them to evaluate UTD and to simulate a perfect UTD system, called the oracle." ], "highlighted_evidence": [ "We ran UTD over all 104 telephone calls, which pair 11 hours of audio with Spanish transcripts and their crowdsourced English translations. The transcripts contain 168,195 Spanish word tokens (10,674 types), and the translations contain 159,777 English word tokens (6,723 types)." ] }, { "raw_evidence": [ "Although we did not have access to a low-resource dataset, there is a corpus of noisy multi-speaker speech that simulates many of the conditions we expect to find in our motivating applications: the CALLHOME Spanish\u2013English speech translation dataset (LDC2014T23; Post el al., 2013). We ran UTD over all 104 telephone calls, which pair 11 hours of audio with Spanish transcripts and their crowdsourced English translations. The transcripts contain 168,195 Spanish word tokens (10,674 types), and the translations contain 159,777 English word tokens (6,723 types). Though our system does not require Spanish transcripts, we use them to evaluate UTD and to simulate a perfect UTD system, called the oracle." 
], "highlighted_evidence": [ "We ran UTD over all 104 telephone calls, which pair 11 hours of audio with Spanish transcripts and their crowdsourced English translations. " ] } ] } ], "1904.01548": [ { "question": "Which two pairs of ERPs from the literature benefit from joint training?", "answers": [ { "answer": "Answer with content missing: (Whole Method and Results sections) Self-paced reading times widely benefit ERP prediction, while eye-tracking data seems to have more limited benefit to just the ELAN, LAN, and PNP ERP components.\nSelect:\n- ELAN, LAN\n- PNP ERP", "type": "abstractive" } ], "q_uid": "7d2f812cb345bb3ab91eb8cbbdeefd4b58f65569", "evidence": [ { "raw_evidence": [ "This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call \u201cdistance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer\u2019s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions.", "Discussion" ], "highlighted_evidence": [ "In future work we also intend to add these types of studies to the ERP predictions.\n\nDiscussion" ] } ] }, { "question": "What datasets are used?", "answers": [ { "answer": "Answer with content missing: (Whole Method and Results sections) The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. 
(2013) which were collected on the same set of 205 sentences.\nSelect:\n- ERP data collected and computed by Frank et al. (2015)\n- behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013)", "type": "abstractive" }, { "answer": "the ERP data: BIBREF0", "type": "extractive" } ], "q_uid": "bd6dc38a9ac8d329114172194b0820766458dacc", "evidence": [ { "raw_evidence": [ "This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call \u201cdistance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer\u2019s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions.", "Discussion" ], "highlighted_evidence": [ "In future work we also intend to add these types of studies to the ERP predictions.\n\nDiscussion" ] }, { "raw_evidence": [ "This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. 
The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call \u201cdistance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer\u2019s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions." ], "highlighted_evidence": [ "This work is most closely related to the paper from which we get the ERP data: BIBREF0 . " ] } ] } ], "1606.03676": [ { "question": "which datasets did they experiment with?", "answers": [ { "answer": "Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish", "type": "abstractive" }, { "answer": "Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2", "type": "extractive" } ], "q_uid": "3ddff6b707767c3dd54d7104fe88b628765cae58", "evidence": [ { "raw_evidence": [ "We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .", "As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse." 
], "highlighted_evidence": [ "We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. ", "We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. " ] }, { "raw_evidence": [ "We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 ." ], "highlighted_evidence": [ "We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. " ] } ] }, { "question": "which languages are explored?", "answers": [ { "answer": "Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish", "type": "extractive" }, { "answer": "Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish , Swedish", "type": "extractive" } ], "q_uid": "0a5ffe4697913a57fda1fd5a188cd5ed59bdc5c7", "evidence": [ { "raw_evidence": [ "As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse." ], "highlighted_evidence": [ "We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish." ] }, { "raw_evidence": [ "As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse." 
], "highlighted_evidence": [ " We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish." ] } ] } ], "1911.03343": [ { "question": "How did they extend LAMA evaluation framework to focus on negation?", "answers": [ { "answer": "To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., \u201cnot\u201d) in LAMA cloze statement", "type": "extractive" }, { "answer": "Create the negated LAMA dataset and query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions.", "type": "abstractive" } ], "q_uid": "78292bc57ee68fdb93ed45430d80acca25a9e916", "evidence": [ { "raw_evidence": [ "This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., \u201cnot\u201d) in LAMA cloze statement (e.g., \u201cThe theory of relativity was not developed by [MASK].\u201d). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions. We find that the predicted filler words often have high overlap. Thus, negating a cloze statement does not change the predictions in many cases \u2013 but of course it should as our example \u201cbirds can fly\u201d vs. \u201cbirds cannot fly\u201d shows. We identify and analyze a subset of cloze statements where predictions are different. We find that BERT handles negation best among pretrained language models, but it still fails badly on most negated statements." ], "highlighted_evidence": [ "This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., \u201cnot\u201d) in LAMA cloze statement (e.g., \u201cThe theory of relativity was not developed by [MASK].\u201d)." ] }, { "raw_evidence": [ "This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., \u201cnot\u201d) in LAMA cloze statement (e.g., \u201cThe theory of relativity was not developed by [MASK].\u201d). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions. We find that the predicted filler words often have high overlap. Thus, negating a cloze statement does not change the predictions in many cases \u2013 but of course it should as our example \u201cbirds can fly\u201d vs. \u201cbirds cannot fly\u201d shows. We identify and analyze a subset of cloze statements where predictions are different. We find that BERT handles negation best among pretrained language models, but it still fails badly on most negated statements." ], "highlighted_evidence": [ ". To this end, we introduce the negated LAMA dataset. 
We construct it by simply inserting negation elements (e.g., \u201cnot\u201d) in LAMA cloze statement (e.g., \u201cThe theory of relativity was not developed by [MASK].\u201d). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions." ] } ] } ], "1712.00991": [ { "question": "What evaluation metrics were used for the summarization task?", "answers": [ { "answer": "ROUGE BIBREF22 unigram score", "type": "extractive" }, { "answer": "ROUGE", "type": "extractive" } ], "q_uid": "aa6d956c2860f58fc9baea74c353c9d985b05605", "evidence": [ { "raw_evidence": [ "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries." ], "highlighted_evidence": [ "The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score." ] }, { "raw_evidence": [ "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries." 
], "highlighted_evidence": [ "The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. " ] } ] }, { "question": "What clustering algorithms were used?", "answers": [ { "answer": "CLUTO, Carrot2 Lingo", "type": "extractive" }, { "answer": "simple clustering algorithm which uses the cosine similarity between word embeddings", "type": "extractive" } ], "q_uid": "4c18081ae3b676cc7831403d11bc070c10120f8e", "evidence": [ { "raw_evidence": [ "After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters." ], "highlighted_evidence": [ "We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. " ] }, { "raw_evidence": [ "After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. 
the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters." ], "highlighted_evidence": [ "From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns." ] } ] }, { "question": "What is the average length of the sentences?", "answers": [ { "answer": "15.5", "type": "extractive" }, { "answer": "average:15.5", "type": "extractive" } ], "q_uid": "e025061e199b121f2ac8f3d9637d9bf987d65cd5", "evidence": [ { "raw_evidence": [ "In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ], "highlighted_evidence": [ "The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ] }, { "raw_evidence": [ "In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ], "highlighted_evidence": [ "The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ] } ] }, { "question": "What is the size of the real-life dataset?", "answers": [ { "answer": "26972", "type": "extractive" }, { "answer": "26972 sentences", "type": "extractive" } ], "q_uid": "61652a3da85196564401d616d251084a25ab4596", "evidence": [ { "raw_evidence": [ "In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ], "highlighted_evidence": [ "The corpus of supervisor assessment has 26972 sentences. " ] }, { "raw_evidence": [ "In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19." ], "highlighted_evidence": [ "The corpus of supervisor assessment has 26972 sentences." ] } ] } ], "1805.08241": [ { "question": "What are the language pairs explored in this paper?", "answers": [ { "answer": "De-En, Ja-En, Ro-En", "type": "extractive" }, { "answer": "De-En, Ja-En, Ro-En", "type": "extractive" } ], "q_uid": "14b74ad5a6f5b0506511c9b454e9c464371ef8c4", "evidence": [ { "raw_evidence": [ "We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. 
We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters . We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices." ], "highlighted_evidence": [ "We evaluated our attention transformations on three language pairs.", "We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En." ] }, { "raw_evidence": [ "We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters . We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices." ], "highlighted_evidence": [ "We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. " ] } ] } ], "1907.04433": [ { "question": "Do they experiment with the toolkits?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "5f2bade0881c719ab026bc2e2962e2ada96cdb25", "evidence": [ { "raw_evidence": [ "We demonstrate the performance of GluonCV/NLP models in various computer vision and natural language processing tasks. Specifically, we evaluate popular or state-of-the-art models on standard benchmark data sets. In the experiments, we compare model performance between GluonCV/NLP and other open source implementations with Caffe, Caffe2, Theano, and TensorFlow, including ResNet BIBREF8 and MobileNet BIBREF9 for image classification (ImageNet), Faster R-CNN BIBREF10 for object detection (COCO), Mask R-CNN BIBREF11 for instance segmentation, Simple Pose BIBREF12 for pose estimation (COCO), textCNN BIBREF13 for sentiment analysis (TREC), and BERT BIBREF14 for question answering (SQuAD 1.1), sentiment analysis (SST-2), natural langauge inference (MNLI-m), and paraphrasing (MRPC). 
Table TABREF5 shows that the GluonCV/GluonNLP implementation matches or outperforms the compared open source implementation for the same model evaluated on the same data set." ], "highlighted_evidence": [ "In the experiments, we compare model performance between GluonCV/NLP and other open source implementations with Caffe, Caffe2, Theano, and TensorFlow, including ResNet BIBREF8 and MobileNet BIBREF9 for image classification (ImageNet), Faster R-CNN BIBREF10 for object detection (COCO), Mask R-CNN BIBREF11 for instance segmentation, Simple Pose BIBREF12 for pose estimation (COCO), textCNN BIBREF13 for sentiment analysis (TREC), and BERT BIBREF14 for question answering (SQuAD 1.1), sentiment analysis (SST-2), natural langauge inference (MNLI-m), and paraphrasing (MRPC)." ] }, { "raw_evidence": [ "We demonstrate the performance of GluonCV/NLP models in various computer vision and natural language processing tasks. Specifically, we evaluate popular or state-of-the-art models on standard benchmark data sets. In the experiments, we compare model performance between GluonCV/NLP and other open source implementations with Caffe, Caffe2, Theano, and TensorFlow, including ResNet BIBREF8 and MobileNet BIBREF9 for image classification (ImageNet), Faster R-CNN BIBREF10 for object detection (COCO), Mask R-CNN BIBREF11 for instance segmentation, Simple Pose BIBREF12 for pose estimation (COCO), textCNN BIBREF13 for sentiment analysis (TREC), and BERT BIBREF14 for question answering (SQuAD 1.1), sentiment analysis (SST-2), natural langauge inference (MNLI-m), and paraphrasing (MRPC). Table TABREF5 shows that the GluonCV/GluonNLP implementation matches or outperforms the compared open source implementation for the same model evaluated on the same data set." ], "highlighted_evidence": [ "In the experiments, we compare model performance between GluonCV/NLP and other open source implementations with Caffe, Caffe2, Theano, and TensorFlow, including ResNet BIBREF8 and MobileNet BIBREF9 for image classification (ImageNet), Faster R-CNN BIBREF10 for object detection (COCO), Mask R-CNN BIBREF11 for instance segmentation, Simple Pose BIBREF12 for pose estimation (COCO), textCNN BIBREF13 for sentiment analysis (TREC), and BERT BIBREF14 for question answering (SQuAD 1.1), sentiment analysis (SST-2), natural langauge inference (MNLI-m), and paraphrasing (MRPC)." ] } ] } ], "2003.04642": [ { "question": "Have they made any attempt to correct MRC gold standards according to their findings? ", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "5c88d601e8fca96bffebfa9ef22331ecf31c6d75", "evidence": [ { "raw_evidence": [ "In this paper, we introduce a novel framework to characterise machine reading comprehension gold standards. This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach." ], "highlighted_evidence": [ "This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach." 
] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What features are absent from MRC gold standards that can result in potential lexical ambiguity?", "answers": [ { "answer": "Restrictivity , Factivity , Coreference ", "type": "extractive" }, { "answer": "semantics-altering grammatical modifiers", "type": "extractive" } ], "q_uid": "71bd5db79635d48a0730163a9f2e8ef19a86cd66", "evidence": [ { "raw_evidence": [ "We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions." ], "highlighted_evidence": [ "We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions.", "We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions." ] }, { "raw_evidence": [ "Furthermore we applied the framework to analyse popular state-of-the-art gold standards for machine reading comprehension: We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards. Studying how to introduce those modifiers into gold standards and observing whether state-of-the-art MRC models are capable of performing reading comprehension on text containing them, is a future research goal." ], "highlighted_evidence": [ "We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards." 
] } ] } ], "2001.09215": [ { "question": "How many tweets were collected?", "answers": [ { "answer": "$19,300$, added 2500 randomly sampled tweets", "type": "extractive" }, { "answer": "$19,300$ tweets", "type": "extractive" } ], "q_uid": "bcc0cd4e262f2db4270429ab520971bcf39414cf", "evidence": [ { "raw_evidence": [ "We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues." ], "highlighted_evidence": [ "Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset." ] }, { "raw_evidence": [ "We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues." ], "highlighted_evidence": [ "We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues." ] } ] }, { "question": "What language is explored in this paper?", "answers": [ { "answer": "English language", "type": "abstractive" } ], "q_uid": "f641f561ad2ea2794a52e4e4bdd62e1f353ab797", "evidence": [ { "raw_evidence": [ "Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!." ], "highlighted_evidence": [ "An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!." 
] } ] } ], "1909.07575": [ { "question": "What are the baselines?", "answers": [ { "answer": "Vanilla ST baseline, encoder pre-training, in which the ST encoder is initialized from an ASR model, decoder pre-training, in which the ST decoder is initialized from an MT model, encoder-decoder pre-training, where both the encoder and decoder are pre-trained, many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models, Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation", "type": "extractive" }, { "answer": "Vanilla ST baseline, Pre-training baselines, Multi-task baselines, Many-to-many+pre-training, Triangle+pre-train", "type": "extractive" }, { "answer": "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.\n\nPre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.\n\nMulti-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.\n\nMany-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. ", "type": "extractive" } ], "q_uid": "af34051bf3e628c1e2a00b110bb84e5f018b419f", "evidence": [ { "raw_evidence": [ "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.", "Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.", "Multi-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. 
For MT task, we use only in-domain data.", "Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner." ], "highlighted_evidence": [ "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.\n\nPre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.\n\nMulti-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.\n\nMany-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner." ] }, { "raw_evidence": [ "We compare our method with following baselines.", "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.", "Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. 
It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.", "Multi-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.", "Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner." ], "highlighted_evidence": [ "We compare our method with following baselines.\n\n", "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.", "Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.", "Multi-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.", "Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18", "Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner." ] }, { "raw_evidence": [ "We compare our method with following baselines.", "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. 
It is trained from scratch on the ST-TED corpus.", "Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.", "Multi-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.", "Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner." ], "highlighted_evidence": [ "We compare our method with following baselines.\n\nVanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.\n\nPre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.\n\nMulti-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.\n\nMany-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. 
" ] } ] }, { "question": "What is the attention module pretrained on?", "answers": [ { "answer": "the model is pre-trained on CTC-based ASR task and MT task in the pre-training stage.", "type": "extractive" } ], "q_uid": "022c365a14fdec406c7a945a1a18e7e79df37f08", "evidence": [ { "raw_evidence": [ "To sufficiently utilize the large dataset $\\mathcal {A}$ and $\\mathcal {M}$, the model is pre-trained on CTC-based ASR task and MT task in the pre-training stage." ], "highlighted_evidence": [ "o sufficiently utilize the large dataset $\\mathcal {A}$ and $\\mathcal {M}$, the model is pre-trained on CTC-based ASR task and MT task in the pre-training stage." ] } ] } ], "1701.04056": [ { "question": "How long of dialog history is captured?", "answers": [ { "answer": "two previous turns", "type": "abstractive" }, { "answer": "160", "type": "extractive" } ], "q_uid": "5260cb56b7d127772425583c5c28958c37cb9bea", "evidence": [ { "raw_evidence": [ "The previously proposed contextual language models, such as DRNNLM and CCDCLM, treat dialog history as a sequence of inputs, without modeling dialog interactions. A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ . The last RNN state of turn $k-1$ serves as the context vector to turn $k$ , which is fed to turn $k$ 's RNN hidden state at each time step together with the word input. The model architecture is as shown in Figure 2 . The context vector $c$ and the initial RNN hidden state for the $k$ th turn $h^{\\mathbf {U}_k}_{0}$ are defined as:" ], "highlighted_evidence": [ " A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ ." ] }, { "raw_evidence": [ "We use the Switchboard Dialog Act Corpus (SwDA) in evaluating our contextual langauge models. The SwDA corpus extends the Switchboard-1 Telephone Speech Corpus with turn and utterance-level dialog act tags. The utterances are also tagged with part-of-speech (POS) tags. We split the data in folder sw00 to sw09 as training set, folder sw10 as test set, and folder sw11 to sw13 as validation set. The training, validation, and test sets contain 98.7K turns (190.0K utterances), 5.7K turns (11.3K utterances), and 11.9K turns (22.2K utterances) respectively. Maximum turn length is set to 160. The vocabulary is defined with the top frequent 10K words." ], "highlighted_evidence": [ "Maximum turn length is set to 160" ] } ] } ], "1904.07904": [ { "question": "What evaluation metrics were used?", "answers": [ { "answer": "Exact Match (EM), Macro-averaged F1 scores (F1)", "type": "extractive" }, { "answer": "Exact Match (EM) and Macro-averaged F1 scores (F1) ", "type": "extractive" } ], "q_uid": "9b97805a0c093df405391a85e4d3ab447671c86a", "evidence": [ { "raw_evidence": [ "The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. 
We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance." ], "highlighted_evidence": [ "The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer." ] }, { "raw_evidence": [ "The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance." ], "highlighted_evidence": [ "The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer." ] } ] }, { "question": "What was the previous best model?", "answers": [ { "answer": "(c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 ", "type": "extractive" } ], "q_uid": "7ee5c45b127fb284a4a9e72bb9b980a602f7445a", "evidence": [ { "raw_evidence": [ "To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator.", "Row (e) is the model that the weights of all layers are tied between the source domain and the target domain. Row (f) uses the same architecture as row (e) with an additional domain discriminator applied to the embedding encoder. It can be found that row (f) outperforms row (e), indicating that the proposed domain adversarial learning is helpful. Therefore, our following experiments contain domain adversarial learning. The proposed approach (row (f)) outperforms previous best model (row (c)) by 2% EM score and over 1.5% F1 score. We also show the results of applying the domain discriminator to the top of context query attention layer in row (g), which obtains poor performance. To sum it up, incorporating adversarial learning by applying the domain discriminator on top of the embedding encoder layer is effective." ], "highlighted_evidence": [ "The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 .", "The proposed approach (row (f)) outperforms previous best model (row (c)) by 2% EM score and over 1.5% F1 score." ] } ] }, { "question": "Which datasets did they use for evaluation?", "answers": [ { "answer": "Spoken-SQuAD testing set", "type": "extractive" }, { "answer": "Spoken-SQuAD", "type": "extractive" } ], "q_uid": "ddf5e1f600b9ce2e8f63213982ef4209bab01fd8", "evidence": [ { "raw_evidence": [ "Spoken-SQuAD is chosen as the target domain data for training and testing. 
Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.", "The original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set." ], "highlighted_evidence": [ "Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.\n\nThe original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set." ] }, { "raw_evidence": [ "Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.", "The original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set." ], "highlighted_evidence": [ "Spoken-SQuAD is chosen as the target domain data for training and testing.", "While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set." ] } ] } ], "2003.11645": [ { "question": "What Named Entity Recognition dataset is used?", "answers": [ { "answer": "Groningen Meaning Bank", "type": "extractive" }, { "answer": "Groningen Meaning Bank (GMB)", "type": "extractive" } ], "q_uid": "ef3567ce7301b28e34377e7b62c4ec9b496f00bf", "evidence": [ { "raw_evidence": [ "It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. 
The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets." ], "highlighted_evidence": [ "The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. " ] }, { "raw_evidence": [ "It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets." ], "highlighted_evidence": [ "The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples." ] } ] }, { "question": "What sentiment analysis dataset is used?", "answers": [ { "answer": "IMDb dataset of movie reviews", "type": "extractive" }, { "answer": "IMDb", "type": "extractive" } ], "q_uid": "7595260c5747aede0b32b7414e13899869209506", "evidence": [ { "raw_evidence": [ "It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. 
Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets." ], "highlighted_evidence": [ "The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples." ] }, { "raw_evidence": [ "It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets." ], "highlighted_evidence": [ "The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. " ] } ] } ], "1909.02304": [ { "question": "What is the state-of-the-art model for the task?", "answers": [ { "answer": "OpATT BIBREF6, Neural Content Planning with conditional copy (NCP+CC) BIBREF4", "type": "extractive" } ], "q_uid": "5a22293b055f5775081d6acdc0450f7bd5f5de04", "evidence": [ { "raw_evidence": [ "Table TABREF23 displays the automatic evaluation results on both development and test set. We chose Conditional Copy (CC) model as our baseline, which is the best model in Wiseman. We included reported scores with updated IE model by Puduppully and our implementation's result on CC in this paper. 
Also, we compared our models with other existing works on this dataset including OpATT BIBREF6 and Neural Content Planning with conditional copy (NCP+CC) BIBREF4. In addition, we implemented three other hierarchical encoders that encoded tables' row dimension information in both record-level and row-level to compare with the hierarchical structure of encoder in our model. The decoder was equipped with dual attention BIBREF9. The one with LSTM cell is similar to the one in N18-2097 with 1 layer from {1,2,3}. The one with CNN cell BIBREF10 has kernel width 3 from {3, 5} and 10 layer from {5,10,15,20}. The one with transformer-style encoder (MHSA) BIBREF11 has 8 head from {8, 10} and 5 layer from {2,3,4,5,6}. The heads and layers mentioned above were for both record-level encoder and row-level encoder respectively. The self-attention (SA) cell we used, as described in Section SECREF3, achieved better overall performance in terms of F1% of CS, CO and BLEU among the hierarchical encoders. Also we implemented a template system same as the one used in Wiseman which outputted eight sentences: an introductory sentence (two teams' points and who win), six top players' statistics (ranked by their points) and a conclusion sentence. We refer the readers to Wiseman's paper for more detailed information on templates. The gold reference's result is also included in Table TABREF23. Overall, our model performs better than other neural models on both development and test set in terms of RG's P%, F1% score of CS, CO and BLEU, indicating our model's clear improvement on generating high-fidelity, informative and fluent texts. Also, our model with three dimension representations outperforms hierarchical encoders with only row dimension representation on development set. This indicates that cell and time dimension representation are important in representing the tables. Compared to reported baseline result in Wiseman, we achieved improvement of $22.27\\%$ in terms of RG, $26.84\\%$ in terms of CS F1%, $35.28\\%$ in terms of CO and $18.75\\%$ in terms of BLEU on test set. Unsurprisingly, template system achieves best on RG P% and CS R% due to the included domain knowledge. Also, the high RG # and low CS P% indicates that template will include vast information while many of them are deemed redundant. In addition, the low CO and low BLEU indicates that the rigid structure of the template will produce texts that aren't as adaptive to the given tables and natural as those produced by neural models. Also, we conducted ablation study on our model to evaluate each component's contribution on development set. Based on the results, the absence of row-level encoder hurts our model's performance across all metrics especially the content selection ability." ], "highlighted_evidence": [ "Also, we compared our models with other existing works on this dataset including OpATT BIBREF6 and Neural Content Planning with conditional copy (NCP+CC) BIBREF4." ] } ] }, { "question": "What is the strong baseline?", "answers": [ { "answer": "Conditional Copy (CC) model ", "type": "extractive" }, { "answer": "delayed copy model (DEL), template system (TEM), conditional copy (CC), NCP+CC (NCP)", "type": "extractive" } ], "q_uid": "03c967763e51ef2537793db7902e2c9c17e43e95", "evidence": [ { "raw_evidence": [ "Table TABREF23 displays the automatic evaluation results on both development and test set. We chose Conditional Copy (CC) model as our baseline, which is the best model in Wiseman. 
We included reported scores with updated IE model by Puduppully and our implementation's result on CC in this paper. Also, we compared our models with other existing works on this dataset including OpATT BIBREF6 and Neural Content Planning with conditional copy (NCP+CC) BIBREF4. In addition, we implemented three other hierarchical encoders that encoded tables' row dimension information in both record-level and row-level to compare with the hierarchical structure of encoder in our model. The decoder was equipped with dual attention BIBREF9. The one with LSTM cell is similar to the one in N18-2097 with 1 layer from {1,2,3}. The one with CNN cell BIBREF10 has kernel width 3 from {3, 5} and 10 layer from {5,10,15,20}. The one with transformer-style encoder (MHSA) BIBREF11 has 8 head from {8, 10} and 5 layer from {2,3,4,5,6}. The heads and layers mentioned above were for both record-level encoder and row-level encoder respectively. The self-attention (SA) cell we used, as described in Section SECREF3, achieved better overall performance in terms of F1% of CS, CO and BLEU among the hierarchical encoders. Also we implemented a template system same as the one used in Wiseman which outputted eight sentences: an introductory sentence (two teams' points and who win), six top players' statistics (ranked by their points) and a conclusion sentence. We refer the readers to Wiseman's paper for more detailed information on templates. The gold reference's result is also included in Table TABREF23. Overall, our model performs better than other neural models on both development and test set in terms of RG's P%, F1% score of CS, CO and BLEU, indicating our model's clear improvement on generating high-fidelity, informative and fluent texts. Also, our model with three dimension representations outperforms hierarchical encoders with only row dimension representation on development set. This indicates that cell and time dimension representation are important in representing the tables. Compared to reported baseline result in Wiseman, we achieved improvement of $22.27\\%$ in terms of RG, $26.84\\%$ in terms of CS F1%, $35.28\\%$ in terms of CO and $18.75\\%$ in terms of BLEU on test set. Unsurprisingly, template system achieves best on RG P% and CS R% due to the included domain knowledge. Also, the high RG # and low CS P% indicates that template will include vast information while many of them are deemed redundant. In addition, the low CO and low BLEU indicates that the rigid structure of the template will produce texts that aren't as adaptive to the given tables and natural as those produced by neural models. Also, we conducted ablation study on our model to evaluate each component's contribution on development set. Based on the results, the absence of row-level encoder hurts our model's performance across all metrics especially the content selection ability." ], "highlighted_evidence": [ "We chose Conditional Copy (CC) model as our baseline, which is the best model in Wiseman. " ] }, { "raw_evidence": [ "Row, column and time dimension information are important to the modeling of tables because subtracting any of them will result in performance drop. Also, position embedding is critical when modeling time dimension information according to the results. In addition, record fusion gate plays an important role because BLEU, CO, RG P% and CS P% drop significantly after subtracting it from full model. Results show that each component in the model contributes to the overall performance. 
In addition, we compare our model with delayed copy model (DEL) BIBREF12 along with gold text, template system (TEM), conditional copy (CC) BIBREF2 and NCP+CC (NCP) BIBREF4. Li's model generate a template at first and then fill in the slots with delayed copy mechanism. Since its result in Li's paper was evaluated by IE model trained by Wiseman and \u201crelexicalization\u201d by Li, we adopted the corresponding IE model and re-implement \u201crelexicalization\u201d as suggested by Li for fair comparison. Please note that CC's evaluation results via our re-implemented \u201crelexicalization\u201d is comparable to the reported result in Li. We applied them on models other than DEL as shown in Table TABREF28 and report DEL's result from BIBREF12's paper. It shows that our model outperform Li's model significantly across all automatic evaluation metrics in Table TABREF28." ], "highlighted_evidence": [ "In addition, we compare our model with delayed copy model (DEL) BIBREF12 along with gold text, template system (TEM), conditional copy (CC) BIBREF2 and NCP+CC (NCP) BIBREF4. Li's model generate a template at first and then fill in the slots with delayed copy mechanism." ] } ] } ], "1604.03114": [ { "question": "what aspects of conversation flow do they look at?", "answers": [ { "answer": "The time devoted to self-coverage, opponent-coverage, and the number of adopted discussion points.", "type": "abstractive" }, { "answer": "\u2014promoting one's own points and attacking the opponents' points", "type": "extractive" } ], "q_uid": "26327ccebc620a73ba37a95aabe968864e3392b2", "evidence": [ { "raw_evidence": [ "The flow of talking points . A side can either promote its own talking points , address its opponent's points, or steer away from these initially salient ideas altogether. We quantify the use of these strategies by comparing the airtime debaters devote to talking points . For a side INLINEFORM0 , let the self-coverage INLINEFORM1 be the fraction of content words uttered by INLINEFORM2 in round INLINEFORM3 that are among their own talking points INLINEFORM4 ; and the opponent-coverage INLINEFORM5 be the fraction of its content words covering opposing talking points INLINEFORM6 .", "Conversation flow features. We use all conversational features discussed above. For each side INLINEFORM0 we include INLINEFORM1 , INLINEFORM2 , and their sum. We also use the drop in self-coverage given by subtracting corresponding values for INLINEFORM3 , and the number of discussion points adopted by each side. We call these the Flow features." ], "highlighted_evidence": [ "We quantify the use of these strategies by comparing the airtime debaters devote to talking points . For a side INLINEFORM0 , let the self-coverage INLINEFORM1 be the fraction of content words uttered by INLINEFORM2 in round INLINEFORM3 that are among their own talking points INLINEFORM4 ; and the opponent-coverage INLINEFORM5 be the fraction of its content words covering opposing talking points INLINEFORM6 .", " We use all conversational features discussed above. For each side INLINEFORM0 we include INLINEFORM1 , INLINEFORM2 , and their sum. We also use the drop in self-coverage given by subtracting corresponding values for INLINEFORM3 , and the number of discussion points adopted by each side." ] }, { "raw_evidence": [ "In this work we introduce a computational framework for characterizing debates in terms of conversational flow. 
This framework captures two main debating strategies\u2014promoting one's own points and attacking the opponents' points\u2014and tracks their relative usage throughout the debate. By applying this methodology to a setting where debate winners are known, we show that conversational flow patterns are predictive of which debater is more likely to persuade an audience." ], "highlighted_evidence": [ "This framework captures two main debating strategies\u2014promoting one's own points and attacking the opponents' points\u2014and tracks their relative usage throughout the debate. " ] } ] }, { "question": "what debates dataset was used?", "answers": [ { "answer": "Intelligence Squared Debates", "type": "extractive" }, { "answer": "\u201cIntelligence Squared Debates\u201d (IQ2 for short)", "type": "extractive" } ], "q_uid": "ababb79dd3c301f4541beafa181f6a6726839a10", "evidence": [ { "raw_evidence": [ "In this study we use transcripts and results of Oxford-style debates from the public debate series \u201cIntelligence Squared Debates\u201d (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams\u2014one for the motion and one against\u2014of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks." ], "highlighted_evidence": [ "In this study we use transcripts and results of Oxford-style debates from the public debate series \u201cIntelligence Squared Debates\u201d (IQ2 for short)." ] }, { "raw_evidence": [ "In this study we use transcripts and results of Oxford-style debates from the public debate series \u201cIntelligence Squared Debates\u201d (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams\u2014one for the motion and one against\u2014of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks." ], "highlighted_evidence": [ "In this study we use transcripts and results of Oxford-style debates from the public debate series \u201cIntelligence Squared Debates\u201d (IQ2 for short). 
" ] } ] } ], "1608.06757": [ { "question": "what standard dataset were used?", "answers": [ { "answer": "The GENIA Corpus , CoNLL2003", "type": "extractive" }, { "answer": "GENIA Corpus BIBREF3, CoNLL2003 BIBREF14, KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC", "type": "extractive" }, { "answer": "CoNLL2003-testA, GENIA", "type": "extractive" } ], "q_uid": "8eefa116e3c3d3db751423cc4095d1c4153d3a5f", "evidence": [ { "raw_evidence": [ "Table TABREF33 gives an overview of the standard data sets we use for training. The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.", "For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ." ], "highlighted_evidence": [ "The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.\n\nFor testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. " ] }, { "raw_evidence": [ "Table TABREF33 gives an overview of the standard data sets we use for training. The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.", "For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ." ], "highlighted_evidence": [ "The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.\n\nFor testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ." ] }, { "raw_evidence": [ "For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ." 
], "highlighted_evidence": [ "For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. " ] } ] } ], "2001.03131": [ { "question": "Do they perform error analysis?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "133eb4aa4394758be5f41744c60c99901b2bc01c", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What is the Random Kitchen Sink approach?", "answers": [ { "answer": "Random Kitchen Sink method uses a kernel function to map data vectors to a space where linear separation is possible.", "type": "abstractive" }, { "answer": "explicitly maps data vectors to a space where linear separation is possible, RKS method provides an approximate kernel function via explicit mapping", "type": "extractive" } ], "q_uid": "a778b8204a415b295f73b93623d09599f242f202", "evidence": [ { "raw_evidence": [ "RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping." ], "highlighted_evidence": [ "RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible.", "The RKS method provides an approximate kernel function via explicit mapping." ] }, { "raw_evidence": [ "RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping.", "Here, $\\phi (.)$ denotes the implicit mapping function (used to compute kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS and ${\\Omega _k}$ denotes random variable ." ], "highlighted_evidence": [ "RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping.\n\nHere, $\\phi (.)$ denotes the implicit mapping function (used to compute kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS and ${\\Omega _k}$ denotes random variable ." ] } ] } ], "1606.02891": [ { "question": "what are the baseline systems?", "answers": [ { "answer": "attentional encoder-decoder networks BIBREF0", "type": "extractive" }, { "answer": " the dl4mt-tutorial", "type": "extractive" } ], "q_uid": "642e8cf1d39faa1cd985d16750cdc6696c52db2f", "evidence": [ { "raw_evidence": [ "Our systems are attentional encoder-decoder networks BIBREF0 . We base our implementation on the dl4mt-tutorial, which we enhanced with new features such as ensemble decoding and pervasive dropout." ], "highlighted_evidence": [ "Our systems are attentional encoder-decoder networks BIBREF0 . We base our implementation on the dl4mt-tutorial, which we enhanced with new features such as ensemble decoding and pervasive dropout." ] }, { "raw_evidence": [ "Our systems are attentional encoder-decoder networks BIBREF0 . 
We base our implementation on the dl4mt-tutorial, which we enhanced with new features such as ensemble decoding and pervasive dropout.", "We use minibatches of size 80, a maximum sentence length of 50, word embeddings of size 500, and hidden layers of size 1024. We clip the gradient norm to 1.0 BIBREF4 . We train the models with Adadelta BIBREF5 , reshuffling the training corpus between epochs. We validate the model every 10000 minibatches via Bleu on a validation set (newstest2013, newstest2014, or half of newsdev2016 for EN INLINEFORM0 RO). We perform early stopping for single models, and use the 4 last saved models (with models saved every 30000 minibatches) for the ensemble results. Note that ensemble scores are the result of a single training run. Due to resource limitations, we did not train ensemble components independently, which could result in more diverse models and better ensembles.", "Decoding is performed with beam search with a beam size of 12. For some language pairs, we used the AmuNMT C++ decoder as a more efficient alternative to the theano implementation of the dl4mt tutorial." ], "highlighted_evidence": [ "Our systems are attentional encoder-decoder networks BIBREF0 . We base our implementation on the dl4mt-tutorial, which we enhanced with new features such as ensemble decoding and pervasive dropout.", "We use minibatches of size 80, a maximum sentence length of 50, word embeddings of size 500, and hidden layers of size 1024. We clip the gradient norm to 1.0 BIBREF4 . We train the models with Adadelta BIBREF5 , reshuffling the training corpus between epochs. We validate the model every 10000 minibatches via Bleu on a validation set (newstest2013, newstest2014, or half of newsdev2016 for EN INLINEFORM0 RO). We perform early stopping for single models, and use the 4 last saved models (with models saved every 30000 minibatches) for the ensemble results. Note that ensemble scores are the result of a single training run. Due to resource limitations, we did not train ensemble components independently, which could result in more diverse models and better ensembles.\n\nDecoding is performed with beam search with a beam size of 12. For some language pairs, we used the AmuNMT C++ decoder as a more efficient alternative to the theano implementation of the dl4mt tutorial." ] } ] } ], "1803.09123": [ { "question": "What word embeddings do they test?", "answers": [ { "answer": "Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model", "type": "extractive" }, { "answer": "Bernoulli embeddings, continuous bag-of-words, Distributed Memory version of Paragraph Vector, Global Vectors, equation embeddings, equation unit embeddings", "type": "extractive" } ], "q_uid": "493e971ee3f57a821ef1f67ef3cd47ade154e7c4", "evidence": [ { "raw_evidence": [ "We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model." ], "highlighted_evidence": [ "We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model." 
] }, { "raw_evidence": [ "We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model.", "In this paper we propose equation embeddings (EqEmb), an unsupervised approach for learning distributed representations of equations. The idea is to treat the equation as a \"singleton word,\" one that appears once but that appears in the context of other words. The surrounding text of the equation\u2014and in particular, the distributed representations of that text\u2014provides the data we need to develop a useful representation of the equation.", "Building on our previous method, we define a new model which we call equation unit embeddings (EqEmb-U). EqEmb-U model equations by treating them as sentences where the words are the equation variables, symbols and operators which we refer to as units. The first step in representing equations using equation units is to tokenize them. We use the approach outlined in BIBREF8 which represents equations into a syntax layout tree (SLT), a sequence of SLT tuples each of which contains the spatial relationship information between two equation symbols found within a particular window of equation symbols. Figure FIGREF11 shows example SLT representations of three equations." ], "highlighted_evidence": [ "We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model.", "In this paper we propose equation embeddings (EqEmb), an unsupervised approach for learning distributed representations of equations. ", "Building on our previous method, we define a new model which we call equation unit embeddings (EqEmb-U)." ] } ] }, { "question": "How do they define similar equations?", "answers": [ { "answer": "By using Euclidean distance computed between the context vector representations of the equations", "type": "abstractive" }, { "answer": "Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B.", "type": "extractive" } ], "q_uid": "8dd8e5599fc56562f2acbc16dd8544689cddd938", "evidence": [ { "raw_evidence": [ "In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. 
tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B." ], "highlighted_evidence": [ "Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B." ] }, { "raw_evidence": [ "In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B." ], "highlighted_evidence": [ "In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B." 
] } ] } ], "1910.01863": [ { "question": "What evaluation criteria and metrics were used to evaluate the generated text?", "answers": [ { "answer": "BLEU , NIST , METEOR , ROUGE-L, CIDEr , evaluation script, automatic evaluation, human evaluation, minimum edit evaluation, word error rate (WER), factual errors and their types, fluency issues, acceptability of the output for production use in a news agency", "type": "extractive" }, { "answer": "BLEU, NIST, METEOR, ROUGE-L, CIDEr", "type": "extractive" } ], "q_uid": "abe2393415e533cb06311e74ed1c5674cff8571f", "evidence": [ { "raw_evidence": [ "In Table TABREF15 we measure BLEU BIBREF19, NIST BIBREF20, METEOR BIBREF21, ROUGE-L BIBREF22 and CIDEr BIBREF23 metrics on the 2018 E2E NLG Challenge test data using the evaluation script provided by the organizers. Our generation system is compared to the official shared task baseline system, TGen BIBREF24, as well as to the top performing participant system on each score (ST top). Our system outperforms the TGen baseline on 3 out of 5 metrics (BLEU, METEOR and ROUGE-L), which is on par with the official shared task results, where not a single one participant system was able to surpass the baseline on all five metrics. On two metrics, BLEU and METEOR, our system outperforms the best shared task participants.", "Our alignment serves as a gold standard reflecting which events the journalists have chosen to mention for each game. In our generation task, we are presented with the problem of selecting appropriate events from the full game statistics. We use the gold standard selection during training and validation of the text generation model, as well as the automatic evaluation. As we deploy our text generation model for manual evaluation, we use a Conditional Random Field (CRF) model to predict which events to mention.", "The second human evaluation aimed at judging the acceptability of the output for production use in a news agency. The output is evaluated in terms of its usability for a news channel labelled as being machine-generated, i.e. not aiming at the level of a human journalist equipped with substantial background information. The evaluation was carried out by two journalists from the STT agency, who split the 59 games among themselves approximately evenly. The first journalist edited the games to a form corresponding to a draft for subsequent minor post-editing by a human, simulating the use of the generated output as a product where the final customer is expected to do own post-editing before publication. The second journalist directly edited the news to a state ready for direct publication in a news stream labeled as machine-generated news. In addition to correcting factual errors, the journalists removed excessive repetition, improved text fluency, as well as occasionally included important facts which the system left ungenerated. The WER measured against the output considered ready for post-editing, is 9.9% (11.2% disregarding punctuation), only slightly worse than the evaluation with only the factual and grammatical errors corrected. The WER measured against the output considered ready for direct release, was 22.0% (24.4% disregarding punctuation). In other words, 75\u201390% of the generated text can be directly used, depending on the expected post-editing effort.", "In the minimum edit evaluation, carried out by the annotator who created the news corpus, only factual mistakes and grammatical errors are corrected, resulting in text which may remain awkward or unfluent. 
The word error rate (WER) of the generated text compared to its corrected variant as a reference is 5.6% (6.2% disregarding punctuation). The WER measure is defined as the number of insertions, substitutions, and deletions divided by the total length of the reference, in terms of tokens. The measure is the edit distance of the generated text and its corrected variant, directly reflecting the amount of effort needed to correct the generated output.", "The factual errors and their types are summarized in Table TABREF23. From the total of 510 game events generated by the system, 78 of these contained a factual error, i.e. 84.7% were generated without factual errors.", "Most fluency issues relate to the overall flow and structure of the report. Addressing these issues would require the model to take into account multiple events in a game, and combine the information more flexibly to avoid repetition. For instance, the output may repeatedly mention the period number for all goals in the same period. Likewise, this setup sometimes results in unnatural, yet grammatical, repetition of words across consecutive sentences. Even though the model has learned a selection of verbs meaning to score a goal, it is unable to ensure their varied use. While not successful in our initial experiments, generating text based on the multi-event alignments or at document level may eventually overcome these issues." ], "highlighted_evidence": [ "In Table TABREF15 we measure BLEU BIBREF19, NIST BIBREF20, METEOR BIBREF21, ROUGE-L BIBREF22 and CIDEr BIBREF23 metrics on the 2018 E2E NLG Challenge test data using the evaluation script provided by the organizers. ", "We use the gold standard selection during training and validation of the text generation model, as well as the automatic evaluation. As we deploy our text generation model for manual evaluation, we use a Conditional Random Field (CRF) model to predict which events to mention.", "The second human evaluation aimed at judging the acceptability of the output for production use in a news agency. ", "In the minimum edit evaluation, carried out by the annotator who created the news corpus, only factual mistakes and grammatical errors are corrected, resulting in text which may remain awkward or unfluent. The word error rate (WER) of the generated text compared to its corrected variant as a reference is 5.6% (6.2% disregarding punctuation). ", "The factual errors and their types are summarized in Table TABREF23. From the total of 510 game events generated by the system, 78 of these contained a factual error, i.e. 84.7% were generated without factual errors.", "Most fluency issues relate to the overall flow and structure of the report. Addressing these issues would require the model to take into account multiple events in a game, and combine the information more flexibly to avoid repetition. ", "The second human evaluation aimed at judging the acceptability of the output for production use in a news agency. The output is evaluated in terms of its usability for a news channel labelled as being machine-generated, i.e. not aiming at the level of a human journalist equipped with substantial background information. The evaluation was carried out by two journalists from the STT agency, who split the 59 games among themselves approximately evenly." 
] }, { "raw_evidence": [ "In Table TABREF15 we measure BLEU BIBREF19, NIST BIBREF20, METEOR BIBREF21, ROUGE-L BIBREF22 and CIDEr BIBREF23 metrics on the 2018 E2E NLG Challenge test data using the evaluation script provided by the organizers. Our generation system is compared to the official shared task baseline system, TGen BIBREF24, as well as to the top performing participant system on each score (ST top). Our system outperforms the TGen baseline on 3 out of 5 metrics (BLEU, METEOR and ROUGE-L), which is on par with the official shared task results, where not a single one participant system was able to surpass the baseline on all five metrics. On two metrics, BLEU and METEOR, our system outperforms the best shared task participants." ], "highlighted_evidence": [ "In Table TABREF15 we measure BLEU BIBREF19, NIST BIBREF20, METEOR BIBREF21, ROUGE-L BIBREF22 and CIDEr BIBREF23 metrics on the 2018 E2E NLG Challenge test data using the evaluation script provided by the organizers." ] } ] } ], "1701.08229": [ { "question": "Do they evaluate only on English datasets?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "00c57e45ac6afbdfa67350a57e81b4fad0ed2885", "evidence": [ { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). " ] }, { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). 
If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ] } ] }, { "question": "What are the three steps to feature elimination?", "answers": [ { "answer": "Reduction, Selection, Rank", "type": "extractive" }, { "answer": "reduced the dataset by eliminating features, apply feature selection to select highest ranked features to train and test the model and rank the performance of incrementally adding features.", "type": "abstractive" } ], "q_uid": "22714f6cad2d5c54c28823e7285dc85e8d6bc109", "evidence": [ { "raw_evidence": [ "Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:", "Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.", "Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.", "Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class." ], "highlighted_evidence": [ "We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:\n\nReduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.\n\nSelection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.\n\nRank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. 
We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class." ] }, { "raw_evidence": [ "Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:", "Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.", "Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.", "Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class." ], "highlighted_evidence": [ "Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:\n\nReduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.\n\nSelection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.\n\nRank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class." ] } ] }, { "question": "How is the dataset annotated?", "answers": [ { "answer": "no evidence of depression, depressed mood, disturbed sleep, fatigue or loss of energy", "type": "extractive" }, { "answer": "The annotations are based on evidence of depression and further annotated by the depressive symptom if there is evidence of depression", "type": "abstractive" } ], "q_uid": "82642d3111287abf736b781043d49536fe48c350", "evidence": [ { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). 
If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 ." ] }, { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . " ] } ] }, { "question": "What dataset is used for this study?", "answers": [ { "answer": "BIBREF12 , BIBREF13", "type": "extractive" }, { "answer": "an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13", "type": "extractive" } ], "q_uid": "5a81732d52f64e81f1f83e8fd3514251227efbc7", "evidence": [ { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. 
We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets." ] }, { "raw_evidence": [ "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0." ], "highlighted_evidence": [ "We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. " ] } ] } ], "1912.06262": [ { "question": "what were their performance results?", "answers": [ { "answer": " the hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes while the i2b2 NER model achieved a F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes", "type": "extractive" }, { "answer": "hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes", "type": "extractive" } ], "q_uid": "9a8b9ea3176d30da2453cac6e9347737c729a538", "evidence": [ { "raw_evidence": [ "With the above hyperparameter setting, the hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes while the i2b2 NER model achieved a F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes (See Table TABREF23)." 
], "highlighted_evidence": [ "With the above hyperparameter setting, the hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes while the i2b2 NER model achieved a F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes (See Table TABREF23)." ] }, { "raw_evidence": [ "With the above hyperparameter setting, the hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes while the i2b2 NER model achieved a F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes (See Table TABREF23)." ], "highlighted_evidence": [ "With the above hyperparameter setting, the hybrid NER model achieved a F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes while the i2b2 NER model achieved a F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes (See Table TABREF23)." ] } ] }, { "question": "where did they obtain the annotated clinical notes from?", "answers": [ { "answer": "clinical notes from the CE task in 2010 i2b2/VA", "type": "extractive" }, { "answer": "clinical notes from the CE task in 2010 i2b2/VA ", "type": "extractive" } ], "q_uid": "4477bb513d56e57732fba126944073d414d1f75f", "evidence": [ { "raw_evidence": [ "Despite the greater similarity between our task and the 2013 ShARe/CLEF Task 1, we use the clinical notes from the CE task in 2010 i2b2/VA on account of 1) the data from 2010 i2b2/VA being easier to access and parse, 2) 2013 ShARe/CLEF containing disjoint entities and hence requiring more complicated tagging schemes. The synthesized user queries are generated using the aforementioned dermatology glossary. Tagged sentences are extracted from the clinical notes. Sentences with no clinical entity present are ignored. 22,489 tagged sentences are extracted from the clinical notes. We will refer to these tagged sentences interchangeably as the i2b2 data. The sentences are shuffled and split into train/dev/test set with a ratio of 7:2:1. The synthesized user queries are composed by randomly selecting several clinical terms from the dermatology glossary and then combining them in no particular order. When combining the clinical terms, we attach the BIO tags to their constituent words. The synthesized user queries (13,697 in total) are then split into train/dev/test set with the same ratio. Next, each set in the i2b2 data and the corresponding set in the synthesized query data are combined to form a hybrid train/dev/test set, respectively. This way we ensure that in each hybrid train/dev/test set, the ratio between the i2b2 data and the synthesized query data is the same." ], "highlighted_evidence": [ "Despite the greater similarity between our task and the 2013 ShARe/CLEF Task 1, we use the clinical notes from the CE task in 2010 i2b2/VA on account of 1) the data from 2010 i2b2/VA being easier to access and parse, 2) 2013 ShARe/CLEF containing disjoint entities and hence requiring more complicated tagging schemes." ] }, { "raw_evidence": [ "Despite the greater similarity between our task and the 2013 ShARe/CLEF Task 1, we use the clinical notes from the CE task in 2010 i2b2/VA on account of 1) the data from 2010 i2b2/VA being easier to access and parse, 2) 2013 ShARe/CLEF containing disjoint entities and hence requiring more complicated tagging schemes. The synthesized user queries are generated using the aforementioned dermatology glossary. Tagged sentences are extracted from the clinical notes. 
Sentences with no clinical entity present are ignored. 22,489 tagged sentences are extracted from the clinical notes. We will refer to these tagged sentences interchangeably as the i2b2 data. The sentences are shuffled and split into train/dev/test set with a ratio of 7:2:1. The synthesized user queries are composed by randomly selecting several clinical terms from the dermatology glossary and then combining them in no particular order. When combining the clinical terms, we attach the BIO tags to their constituent words. The synthesized user queries (13,697 in total) are then split into train/dev/test set with the same ratio. Next, each set in the i2b2 data and the corresponding set in the synthesized query data are combined to form a hybrid train/dev/test set, respectively. This way we ensure that in each hybrid train/dev/test set, the ratio between the i2b2 data and the synthesized query data is the same." ], "highlighted_evidence": [ "Despite the greater similarity between our task and the 2013 ShARe/CLEF Task 1, we use the clinical notes from the CE task in 2010 i2b2/VA on account of 1) the data from 2010 i2b2/VA being easier to access and parse, 2) 2013 ShARe/CLEF containing disjoint entities and hence requiring more complicated tagging schemes." ] } ] } ], "1709.07814": [ { "question": "Which architecture do they use for the encoder and decoder?", "answers": [ { "answer": "we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part, On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP)", "type": "extractive" }, { "answer": "In encoder they use convolutional, NIN and bidirectional LSTM layers and in decoder they use unidirectional LSTM ", "type": "abstractive" } ], "q_uid": "1b23c4535a6c10eb70bbc95313c465e4a547db5e", "evidence": [ { "raw_evidence": [ "In this work, we use the raw waveform as the input representation instead of spectral-based features and a grapheme (character) sequence as the output representation. In contrast to most encoder-decoder architectures, which are purely based on recurrent neural network (RNNs) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part. We use convolutional layers because they are suitable for extracting local information from raw speech. We use a striding mechanism to reduce the dimension from the input frames BIBREF17 , while the NIN layer represents more complex structures on the top of the convolutional layers. On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2 . For more details, we illustrate our architecture in Figure FIGREF4 ." 
], "highlighted_evidence": [ " In contrast to most encoder-decoder architectures, which are purely based on recurrent neural network (RNNs) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part.", "On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2 ." ] }, { "raw_evidence": [ "On the top layers of the encoder after the transferred convolutional and NIN layers, we put three bidirectional LSTMs (Bi-LSTM) with 256 hidden units (total 512 units for both directions). To reduce the computational time, we used hierarchical subsampling BIBREF21 , BIBREF22 , BIBREF10 . We applied subsampling on all the Bi-LSTM layers and reduced the length by a factor of 8.", "On the decoder side, the previous input phonemes / characters were converted into real vectors by a 128-dimensional embedding matrix. We used one unidirectional LSTM with 512 hidden units and followed by a softmax layer to output the character probability. For the end-to-end training phase, we froze the parameter values from the transferred layers from epoch 0 to epoch 10, and after epoch 10 we jointly optimized all the parameters together until the end of training (a total 40 epochs). We used an Adam BIBREF23 optimizer with a learning rate of 0.0005." ], "highlighted_evidence": [ "On the top layers of the encoder after the transferred convolutional and NIN layers, we put three bidirectional LSTMs (Bi-LSTM) with 256 hidden units (total 512 units for both directions).", "On the decoder side, the previous input phonemes / characters were converted into real vectors by a 128-dimensional embedding matrix. We used one unidirectional LSTM with 512 hidden units and followed by a softmax layer to output the character probability. " ] } ] }, { "question": "How does their decoder generate text?", "answers": [ { "answer": "decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information", "type": "extractive" }, { "answer": "Decoder predicts the sequence of phoneme or grapheme at each time based on the previous output and context information with a beam search strategy", "type": "abstractive" } ], "q_uid": "0a75a52450ed866df3a304077769e1725a995bb7", "evidence": [ { "raw_evidence": [ "where INLINEFORM0 , INLINEFORM1 is the number of hidden units for the encoder and INLINEFORM2 is the number of hidden units for the decoder. Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0" ], "highlighted_evidence": [ "Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0" ] }, { "raw_evidence": [ "where INLINEFORM0 , INLINEFORM1 is the number of hidden units for the encoder and INLINEFORM2 is the number of hidden units for the decoder. 
Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0", "The most common input INLINEFORM0 for speech recognition tasks is a sequence of feature vectors such as log Mel-spectral spectrogram and/or MFCC. Therefore, INLINEFORM1 where D is the number of the features and S is the total length of the utterance in frames. The output INLINEFORM2 can be either phoneme or grapheme (character) sequence.", "In the decoding phase, we used a beam search strategy with beam size INLINEFORM0 and we adjusted the score by dividing with the transcription length to prevent the decoder from favoring shorter transcriptions. We did not use any language model or lexicon dictionary for decoding. All of our models were implemented on the PyTorch framework ." ], "highlighted_evidence": [ "Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0", "The output INLINEFORM2 can be either phoneme or grapheme (character) sequence.", "In the decoding phase, we used a beam search strategy with beam size INLINEFORM0 and we adjusted the score by dividing with the transcription length to prevent the decoder from favoring shorter transcriptions." ] } ] }, { "question": "Which dataset do they use?", "answers": [ { "answer": "WSJ", "type": "extractive" }, { "answer": "WSJ-SI84, WSJ-SI284", "type": "extractive" } ], "q_uid": "fd0a3e9c210163a55d3ed791e95ae3875184b8f8", "evidence": [ { "raw_evidence": [ "In this study, we investigate the performance of our proposed models on WSJ BIBREF5 . We used the same definitions of the training, development and test set as the Kaldi s5 recipe BIBREF18 . The raw speech waveforms were segmented into multiple frames with a 25ms window size and a 10ms step size. We normalized the raw speech waveform into the range -1 to 1. For spectral based features such as MFCC and log Mel-spectrogram, we normalized the features for each dimension into zero mean and unit variance. For WSJ, we separated into two experiments by using WSJ-SI84 only and WSJ-SI284 data. We used dev_93 for our validation set and eval_92 for our test set. We used the character sequence as our decoder target and followed the preprocessing step proposed by BIBREF19 . The text from all the utterances was mapped into a 32-character set: 26 (a-z) alphabet, apostrophe, period, dash, space, noise, and \u201ceos\"." ], "highlighted_evidence": [ "In this study, we investigate the performance of our proposed models on WSJ BIBREF5 . " ] }, { "raw_evidence": [ "An example of our transfer learning results is shown in Figure FIGREF8 , and Table TABREF14 shows the speech recognition performance in CER for both the WSJ-SI84 and WSJ-SI284 datasets. We compared our method with several published models like CTC, Attention Encoder-Decoder and Joint CTC-Attention model that utilize CTC for training the encoder part. Besides, we also train our own baseline Attention Encoder-Decoder with Mel-scale spectrogram. The difference between our Attention Encoder-Decoder (\u201cAtt Enc-Dec (ours)\", \u201cAtt Enc-Dec Wav2Text\") with Attention Encoder-Decoder from BIBREF24 (\u201cAtt Enc-Dec Content\", \u201cAtt Enc-Dec Location\") is we used the current hidden states to generate the attention vector instead of the previous hidden states. 
Another addition is we utilized \u201cinput feedback\" method BIBREF13 by concatenating the previous context vector into the current input along with the character embedding vector. By using those modifications, we are able to improve the baseline performance." ], "highlighted_evidence": [ "An example of our transfer learning results is shown in Figure FIGREF8 , and Table TABREF14 shows the speech recognition performance in CER for both the WSJ-SI84 and WSJ-SI284 datasets." ] } ] } ], "1806.00738": [ { "question": "What model is used to encode the images?", "answers": [ { "answer": "a Convolutional Neural Network (CNN)", "type": "extractive" }, { "answer": "LSTM", "type": "extractive" } ], "q_uid": "c37f65c9f0d543a35c784263b79236ccf1c44fac", "evidence": [ { "raw_evidence": [ "Our model extends the image description model by BIBREF0 , which consists of an encoder-decoder architecture. The encoder is a Convolutional Neural Network (CNN) and the decoder is a Long Short-Term Memory (LSTM) network, as presented in Figure 2 . The image is passed through the encoder generating the image representation that is used by the decoder to know the content of the image and generate the description word by word. In the following, we describe how we extended this model for the visual storytelling task." ], "highlighted_evidence": [ "The encoder is a Convolutional Neural Network (CNN) and the decoder is a Long Short-Term Memory (LSTM) network, as presented in Figure 2 . The image is passed through the encoder generating the image representation that is used by the decoder to know the content of the image and generate the description word by word." ] }, { "raw_evidence": [ "The model's first component is a Recurrent Neural Network (RNN), more precisely an LSTM that summarizes the sequence of images. At every timestep $t$ the network takes as input an image $I_i$ where $i\\in \\lbrace 1,2,3,4,5\\rbrace $ from the sequence. At time $t=5$ , the LSTM has encoded the 5 images and provides the sequence's context through its last hidden state denoted by $h_e^{(t)}$ . The representation of the images was obtained through Inception V3." ], "highlighted_evidence": [ "The model's first component is a Recurrent Neural Network (RNN), more precisely an LSTM that summarizes the sequence of images. At every timestep $t$ the network takes as input an image $I_i$ where $i\\in \\lbrace 1,2,3,4,5\\rbrace $ from the sequence. At time $t=5$ , the LSTM has encoded the 5 images and provides the sequence's context through its last hidden state denoted by $h_e^{(t)}$ . The representation of the images was obtained through Inception V3." ] } ] }, { "question": "How is the sequential nature of the story captured?", "answers": [ { "answer": "we provide the decoder with the context of the whole sequence and the content of the current image (i.e. global and local information) to generate the corresponding text that will contribute to the overall story", "type": "extractive" }, { "answer": "The encoder takes the images in order, one at every timestep $t$ . At time $t=5$ , we obtain the context vector through $h_e^{(t)}$ (represented by $\\mathbf {Z}$ ). This vector is used to initialize each decoder's hidden state while the first input to each decoder is its corresponding image embedding $e(I_i)$ . Each decoder generates a sequence of words $\\lbrace p_1,...,p_{n}\\rbrace $ for each image in the sequence. 
", "type": "extractive" } ], "q_uid": "584af673429c7f8621c6bf83362a37048daa0e5d", "evidence": [ { "raw_evidence": [ "The decoder is the second LSTM network that uses the information obtained from the encoder to generate the sequence's story. The first input $x_0$ to the decoder is the image for which the text is being generated. The last hidden state from the encoder $h_e^{(t)}$ is used to initialize the first hidden state of the decoder $h_d^{(0)}$ . With this strategy, we provide the decoder with the context of the whole sequence and the content of the current image (i.e. global and local information) to generate the corresponding text that will contribute to the overall story." ], "highlighted_evidence": [ "The decoder is the second LSTM network that uses the information obtained from the encoder to generate the sequence's story. The first input $x_0$ to the decoder is the image for which the text is being generated. The last hidden state from the encoder $h_e^{(t)}$ is used to initialize the first hidden state of the decoder $h_d^{(0)}$ . With this strategy, we provide the decoder with the context of the whole sequence and the content of the current image (i.e. global and local information) to generate the corresponding text that will contribute to the overall story." ] }, { "raw_evidence": [ "Our proposed architecture is presented in Figure 3 . For each image in the sequence, we obtain its representation $\\lbrace e(I_1),...,e(I_5)\\rbrace $ using Inception v3. The encoder takes the images in order, one at every timestep $t$ . At time $t=5$ , we obtain the context vector through $h_e^{(t)}$ (represented by $\\mathbf {Z}$ ). This vector is used to initialize each decoder's hidden state while the first input to each decoder is its corresponding image embedding $e(I_i)$ . Each decoder generates a sequence of words $\\lbrace p_1,...,p_{n}\\rbrace $ for each image in the sequence. The final story is the concatenation of the output of the 5 decoders." ], "highlighted_evidence": [ "For each image in the sequence, we obtain its representation $\\lbrace e(I_1),...,e(I_5)\\rbrace $ using Inception v3. The encoder takes the images in order, one at every timestep $t$ . At time $t=5$ , we obtain the context vector through $h_e^{(t)}$ (represented by $\\mathbf {Z}$ ). This vector is used to initialize each decoder's hidden state while the first input to each decoder is its corresponding image embedding $e(I_i)$ . Each decoder generates a sequence of words $\\lbrace p_1,...,p_{n}\\rbrace $ for each image in the sequence. The final story is the concatenation of the output of the 5 decoders." ] } ] }, { "question": "Is the position in the sequence part of the input?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "1be54c5b3ea67d837ffba2290a40c1e720d9587f", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [ "The model's first component is a Recurrent Neural Network (RNN), more precisely an LSTM that summarizes the sequence of images. At every timestep $t$ the network takes as input an image $I_i$ where $i\\in \\lbrace 1,2,3,4,5\\rbrace $ from the sequence. At time $t=5$ , the LSTM has encoded the 5 images and provides the sequence's context through its last hidden state denoted by $h_e^{(t)}$ . The representation of the images was obtained through Inception V3." 
], "highlighted_evidence": [ "At every timestep $t$ the network takes as input an image $I_i$ where $i\\in \\lbrace 1,2,3,4,5\\rbrace $ from the sequence." ] } ] }, { "question": "Do the decoder LSTMs all have the same weights?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "b08f88d1facefceb87e134ba2c1fa90035018e83", "evidence": [ { "raw_evidence": [ "Our model contains five independent decoders, one for each image in the sequence. All the 5 decoders use the last hidden state of the encoder (i.e. the context) as its first hidden state and take the corresponding image embedding as its first input. In this way, the first decoder generates the sequence of words for the first image in the sequence, the second decoder for the second image in the sequence, and so on. This allows each decoder to learn a specific language model for each position of the sequence. For instance, the first decoder will learn the opening sentences of the story while the last decoder the closing sentences. The word embeddings were computed using word2vec BIBREF8 ." ], "highlighted_evidence": [ "Our model contains five independent decoders, one for each image in the sequence.", "This allows each decoder to learn a specific language model for each position of the sequence." ] }, { "raw_evidence": [ "Our model contains five independent decoders, one for each image in the sequence. All the 5 decoders use the last hidden state of the encoder (i.e. the context) as its first hidden state and take the corresponding image embedding as its first input. In this way, the first decoder generates the sequence of words for the first image in the sequence, the second decoder for the second image in the sequence, and so on. This allows each decoder to learn a specific language model for each position of the sequence. For instance, the first decoder will learn the opening sentences of the story while the last decoder the closing sentences. The word embeddings were computed using word2vec BIBREF8 ." ], "highlighted_evidence": [ "All the 5 decoders use the last hidden state of the encoder (i.e. the context) as its first hidden state and take the corresponding image embedding as its first input. In this way, the first decoder generates the sequence of words for the first image in the sequence, the second decoder for the second image in the sequence, and so on. This allows each decoder to learn a specific language model for each position of the sequence. For instance, the first decoder will learn the opening sentences of the story while the last decoder the closing sentences." ] } ] } ], "1811.00147": [ { "question": "Is fine-tuning required to incorporate these embeddings into existing models?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "b06512c17d99f9339ffdab12cedbc63501ff527e", "evidence": [ { "raw_evidence": [ "While it is obvious that our embeddings can be used as features for new predictive models, it is also very easy to incorporate our learned Dolores embeddings into existing predictive models on knowledge graphs. The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings. 
In our evaluation below, we show how to improve several state-of-the-art models on various tasks simply by incorporating Dolores as a drop-in replacement to the original embedding layer." ], "highlighted_evidence": [ "The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings." ] }, { "raw_evidence": [ "While it is obvious that our embeddings can be used as features for new predictive models, it is also very easy to incorporate our learned Dolores embeddings into existing predictive models on knowledge graphs. The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings. In our evaluation below, we show how to improve several state-of-the-art models on various tasks simply by incorporating Dolores as a drop-in replacement to the original embedding layer." ], "highlighted_evidence": [ "While it is obvious that our embeddings can be used as features for new predictive models, it is also very easy to incorporate our learned Dolores embeddings into existing predictive models on knowledge graphs. The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings. In our evaluation below, we show how to improve several state-of-the-art models on various tasks simply by incorporating Dolores as a drop-in replacement to the original embedding layer." ] } ] }, { "question": "How are meaningful chains in the graph selected?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "utilize the machinery of language modeling using deep neural networks to learn Dolores embeddings.", "type": "extractive" } ], "q_uid": "fd8e23947095fe2230ffe1a478945829b09c8c95", "evidence": [ { "raw_evidence": [ "While it is obvious that our embeddings can be used as features for new predictive models, it is also very easy to incorporate our learned Dolores embeddings into existing predictive models on knowledge graphs. The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings. In our evaluation below, we show how to improve several state-of-the-art models on various tasks simply by incorporating Dolores as a drop-in replacement to the original embedding layer." ], "highlighted_evidence": [ "While it is obvious that our embeddings can be used as features for new predictive models, it is also very easy to incorporate our learned Dolores embeddings into existing predictive models on knowledge graphs. 
The only requirement is that the model accepts as input, an embedding layer (for entities and relations). If a model fulfills this requirement (which a large number of neural models on knowledge graphs do), we can just use Dolores embeddings as a drop-in replacement. We just initialize the corresponding embedding layer with Dolores embeddings. In our evaluation below, we show how to improve several state-of-the-art models on various tasks simply by incorporating Dolores as a drop-in replacement to the original embedding layer." ] }, { "raw_evidence": [ "Having generated a set of paths on knowledge graphs representing local contexts of entities and relations, we are now ready to utilize the machinery of language modeling using deep neural networks to learn Dolores embeddings.", "After having estimated the parameters of the Dolores learner, we now extract the context-independent and context-dependent representations for each entity and relation and combine them to obtain Dolores embeddings. More specifically, Dolores embeddings are task specific combination of the context-dependent and context-independent representations learned by our learner. Note that our learner (which is an $L$ -layer Bi-Directional LSTM) computes a set of $2L + 1$ representations for each entity-relation pair which we denote by: $ R_t = [ x_t, \\overrightarrow{h_{t,i}}, \\overleftarrow{h_{t,i}} \\mid i = 1, 2, \\cdots , \\textit {L} ], $" ], "highlighted_evidence": [ "Having generated a set of paths on knowledge graphs representing local contexts of entities and relations, we are now ready to utilize the machinery of language modeling using deep neural networks to learn Dolores embeddings.", "After having estimated the parameters of the Dolores learner, we now extract the context-independent and context-dependent representations for each entity and relation and combine them to obtain Dolores embeddings. More specifically, Dolores embeddings are task specific combination of the context-dependent and context-independent representations learned by our learner. " ] } ] } ], "1610.09225": [ { "question": "Do they remove seasonality from the time series?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "3611a72f754de1e256fbd25b012197e1c24e8470", "evidence": [ { "raw_evidence": [ "Data Pre-Processing", "Stock prices data collected is not complete understandably because of weekends and public holidays when the stock market does not function. The missing data is approximated using a simple technique by Goel BIBREF17 . Stock data usually follows a concave function. So, if the stock value on a day is x and the next value present is y with some missing in between. The first missing value is approximated to be (y+x)/2 and the same method is followed to fill all the gaps." ], "highlighted_evidence": [ "Data Pre-Processing\nStock prices data collected is not complete understandably because of weekends and public holidays when the stock market does not function. The missing data is approximated using a simple technique by Goel BIBREF17 . Stock data usually follows a concave function. So, if the stock value on a day is x and the next value present is y with some missing in between. The first missing value is approximated to be (y+x)/2 and the same method is followed to fill all the gaps." 
] } ] }, { "question": "What is the dimension of the embeddings?", "answers": [ { "answer": "300", "type": "extractive" }, { "answer": "300", "type": "extractive" } ], "q_uid": "4c07c33dfaf4f3e6db55e377da6fa69825d0ba15", "evidence": [ { "raw_evidence": [ "Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations. Once every word of the language has been mapped to a unique vector, vectors of words can be summed up yielding a resultant vector for any given collection of words BIBREF19 . Relationship between the words is exactly retained in this form of representation. Word vectors difference between Rome and Italy is very close to the difference between vectors of France and Paris This sustained relationship between word concepts makes word2vec model very attractive for textual analysis. In this representation, resultant vector which is sum of 300 dimensional vectors of all words in a tweet acts as features to the model." ], "highlighted_evidence": [ "Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations." ] }, { "raw_evidence": [ "Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations. Once every word of the language has been mapped to a unique vector, vectors of words can be summed up yielding a resultant vector for any given collection of words BIBREF19 . Relationship between the words is exactly retained in this form of representation. Word vectors difference between Rome and Italy is very close to the difference between vectors of France and Paris This sustained relationship between word concepts makes word2vec model very attractive for textual analysis. In this representation, resultant vector which is sum of 300 dimensional vectors of all words in a tweet acts as features to the model." ], "highlighted_evidence": [ "Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations." ] } ] }, { "question": "What dataset is used to train the model?", "answers": [ { "answer": "2,50,000 tweets, Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016", "type": "extractive" }, { "answer": "Collected tweets and opening and closing stock prices of Microsoft.", "type": "abstractive" } ], "q_uid": "b1ce129678e37070e69f01332f1a8587e18e06b0", "evidence": [ { "raw_evidence": [ "A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 . Twitter4J is a java application which helps us to extract tweets from twitter. The tweets were collected using Twitter API and filtered using keywords like $ MSFT, # Microsoft, #Windows etc. Not only the opinion of public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of public about Microsoft over a period of time. The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! 
Finance BIBREF16 ." ], "highlighted_evidence": [ "A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 .", "The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ." ] }, { "raw_evidence": [ "A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 . Twitter4J is a java application which helps us to extract tweets from twitter. The tweets were collected using Twitter API and filtered using keywords like $ MSFT, # Microsoft, #Windows etc. Not only the opinion of public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of public about Microsoft over a period of time. The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ." ], "highlighted_evidence": [ "A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 .", "Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ." ] } ] } ], "2003.08380": [ { "question": "What is the previous state of the art?", "answers": [ { "answer": "RoBERTa", "type": "extractive" }, { "answer": "RoBERTa", "type": "extractive" } ], "q_uid": "7fb27d8d5a8bb351f97236a1f6dcd8b2613b16f1", "evidence": [ { "raw_evidence": [ "Looking at the current WinoGrande leaderboard, it appears that the previous state of the art is based on RoBERTa BIBREF2, which can be characterized as an encoder-only transformer architecture. Since T5-3B is larger than RoBERTa, it cannot be ruled out that model size alone explains the performance gain. However, when coupled with the observations of Nogueira et al. BIBREF7, T5's \u201cgenerative capability\u201d, i.e., its ability to generate fluent text, honed through pretraining, seems to play an important role. The fact that the choice of target tokens affects prediction accuracy is consistent with this observation. How and why is the subject of ongoing work." ], "highlighted_evidence": [ "Looking at the current WinoGrande leaderboard, it appears that the previous state of the art is based on RoBERTa BIBREF2, which can be characterized as an encoder-only transformer architecture." ] }, { "raw_evidence": [ "Looking at the current WinoGrande leaderboard, it appears that the previous state of the art is based on RoBERTa BIBREF2, which can be characterized as an encoder-only transformer architecture. Since T5-3B is larger than RoBERTa, it cannot be ruled out that model size alone explains the performance gain. However, when coupled with the observations of Nogueira et al. BIBREF7, T5's \u201cgenerative capability\u201d, i.e., its ability to generate fluent text, honed through pretraining, seems to play an important role. The fact that the choice of target tokens affects prediction accuracy is consistent with this observation. 
How and why is the subject of ongoing work." ], "highlighted_evidence": [ "Looking at the current WinoGrande leaderboard, it appears that the previous state of the art is based on RoBERTa BIBREF2, which can be characterized as an encoder-only transformer architecture." ] } ] } ], "1811.05711": [ { "question": "Which text embedding methodologies are used?", "answers": [ { "answer": "Document to Vector (Doc2Vec)", "type": "extractive" }, { "answer": "Doc2Vec, PV-DBOW model", "type": "extractive" } ], "q_uid": "0689904db9b00a814e3109fb1698086370a28fa2", "evidence": [ { "raw_evidence": [ "Figure 1 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, where words are in their most normalised forms, and some words are removed if they have no distinctive meaning when used out of context BIBREF5 , BIBREF6 . We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF7 on the whole set (13 million) of preprocessed text records, although training on smaller sets (1 million) also produces good results. This training step is only done once. This Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each of the 3229 documents in our target analysis set. We then compute a matrix containing pairwise similarities between any pair of document vectors, as inferred with Doc2Vec. This matrix can be thought of as a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF8 , a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The derived MST-kNN graph is analysed with Markov Stability BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , a multi-resolution dynamics-based graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need for choosing a priori the number of clusters, scale or organisation. To analyse a posteriori the different partitions across levels of resolution, we use both visualisations and quantitative scores. The visualisations include word clouds to summarise the main content, graph layouts, as well as Sankey diagrams and contingency tables that capture the correspondences across levels of resolution and relationships to the hand-coded classifications. The partitions are also evaluated quantitatively to score: (i) their intrinsic topic coherence (using pairwise mutual information BIBREF13 , BIBREF14 ), and (ii) their similarity to the operator hand-coded categories (using normalised mutual information BIBREF15 ). We now expand on the steps of the computational framework." ], "highlighted_evidence": [ " We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF7 on the whole set (13 million) of preprocessed text records, although training on smaller sets (1 million) also produces good results." ] }, { "raw_evidence": [ "Here, we use the Gensim Python library BIBREF23 to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters to optimise the output based on our own numerical experiments and the general guidelines provided by BIBREF24 . 
We trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters, in order to characterise the usability and quality of models. Specifically, we checked the effect of corpus size on model quality by training Doc2Vec models on the full 13 million NRLS records and on subsets of 1 million and 2 million randomly sampled records. (We note that our target subset of 3229 records has been excluded from these samples.) Furthermore, we checked the importance of the specificity of the text corpus by obtaining a Doc2Vec model from a generic, non-specific set of 5 million articles from Wikipedia representing standard English usage across a variety of topics." ], "highlighted_evidence": [ "Here, we use the Gensim Python library BIBREF23 to train the PV-DBOW model.", "We trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters, in order to characterise the usability and quality of models." ] } ] } ], "1805.04508": [ { "question": "Which race and gender are given higher sentiment intensity predictions?", "answers": [ { "answer": "Females are given higher sentiment intensity when predicting anger, joy or valence, but males are given higher sentiment intensity when predicting fear.\nAfrican American names are given higher score on the tasks of anger, fear, and sadness intensity prediction, but European American names are given higher scores on joy and valence task.", "type": "abstractive" }, { "answer": " the number of systems consistently giving higher scores to sentences with female noun phrases, higher scores to sentences with African American names on the tasks of anger, fear, and sadness, joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names", "type": "extractive" } ], "q_uid": "cc354c952b5aaed2d4d1e932175e008ff2d801dd", "evidence": [ { "raw_evidence": [ "When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21\u201325) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8\u201313). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .", "The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ." 
], "highlighted_evidence": [ "When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21\u201325) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8\u201313). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 ", "The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ." ] }, { "raw_evidence": [ "When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21\u201325) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8\u201313). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .", "The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ." ], "highlighted_evidence": [ "When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21\u201325) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8\u201313).", "The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names." 
] } ] }, { "question": "What criteria are used to select the 8,640 English sentences?", "answers": [ { "answer": "Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion.", "type": "abstractive" }, { "answer": "generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates, differ only in one word corresponding to gender or race", "type": "extractive" } ], "q_uid": "0f12dc077fe8e5b95ca9163cea1dd17195c96929", "evidence": [ { "raw_evidence": [ "We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1\u20137) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8\u201311) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences." ], "highlighted_evidence": [ "We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems." ] }, { "raw_evidence": [ "We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated'). We refer to the full set of 8,640 sentences as Equity Evaluation Corpus." ], "highlighted_evidence": [ "We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated')." ] } ] } ], "1909.13714": [ { "question": "Is collected multimodal in cabin dataset public?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "5563a3538d311c979c2fb83c1cc9afc66ff6fffc", "evidence": [ { "raw_evidence": [ "We explored leveraging multimodality for the NLU module in the SDS pipeline. 
As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. For text (language) modality, our previous work BIBREF1 presents the details of our best-performing Hierarchical & Joint Bi-LSTM models BIBREF3, BIBREF4, BIBREF5, BIBREF6 (H-Joint-2, see SECREF5) and the results for utterance-level intent recognition and word-level slot filling via transcribed and recognized (ASR output) textual data, using word embeddings (GloVe BIBREF7) as features. This study explores the following multimodal features:" ], "highlighted_evidence": [ "As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. " ] } ] } ], "1910.05603": [ { "question": "Is the model tested against any baseline?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "91bc8c0bc1634045177065536dd311f89134630b", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [ "For each domain text data, we train an n-gram language modelBIBREF1 that is optimized for that domain. As the results, we have more than 10 language models. These language models are combined based on perplexity calculated on a small text of a domain that we want to optimize for.", "To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains. The results for 2 test sets is showed on Table TABREF17 and TABREF18." ], "highlighted_evidence": [ "As the results, we have more than 10 language models. These language models are combined based on perplexity calculated on a small text of a domain that we want to optimize for.", "By combining systems, we can take advantage of the strength of each model that is optimized for different domains. " ] } ] }, { "question": "What is the language model combination technique used in the paper?", "answers": [ { "answer": "system combination on the decoding lattice level, combination weights", "type": "extractive" }, { "answer": "system combination on the decoding lattice level", "type": "extractive" } ], "q_uid": "fe1dcd6ef1f8618bbceee418f07cafe63a8efe08", "evidence": [ { "raw_evidence": [ "To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains. The results for 2 test sets is showed on Table TABREF17 and TABREF18.", "As we can see, for both test sets, system combination significantly reduce the WER. The best result for vlsp2018 of 4.85% WER is obtained by the combination weights 0.6:0.4 where 0.6 is given to the general language model and 0.4 is given to the conversation one. On the vlsp2019 set, the ratio is change slightly by 0.7:0.3 to deliver the best result of 15.09%." ], "highlighted_evidence": [ "To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains. ", "The best result for vlsp2018 of 4.85% WER is obtained by the combination weights 0.6:0.4 where 0.6 is given to the general language model and 0.4 is given to the conversation one. On the vlsp2019 set, the ratio is change slightly by 0.7:0.3 to deliver the best result of 15.09%." 
] }, { "raw_evidence": [ "To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains. The results for 2 test sets is showed on Table TABREF17 and TABREF18." ], "highlighted_evidence": [ "To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains." ] } ] }, { "question": "What are the deep learning architectures used in the task?", "answers": [ { "answer": "DNN-based acoustic model BIBREF0", "type": "extractive" } ], "q_uid": "53f74250948015c394e7b8438a2041fdeb330911", "evidence": [ { "raw_evidence": [ "We adopt a DNN-based acoustic model BIBREF0 with 11 hidden layers and the alignment used to train the model is derived from a HMM-GMM model trained with SAT criterion. In a conventional Gaussian Mixture Model - Hidden Markov Model (GMM-HMM) acoustic model, the state emission log-likelihood of the observation feature vector $o_t$ for certain tied state $s_j$ of HMMs at time $t$ is computed as" ], "highlighted_evidence": [ "We adopt a DNN-based acoustic model BIBREF0 with 11 hidden layers and the alignment used to train the model is derived from a HMM-GMM model trained with SAT criterion." ] } ] } ], "1909.03405": [ { "question": "Do they train their model starting from a checkpoint?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "7b4fb6da74e6bd1baea556788a02969134cf0800", "evidence": [ { "raw_evidence": [ "This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.", "To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT." ], "highlighted_evidence": [ "The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.\n\nTo accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14." ] }, { "raw_evidence": [ "This section gives detailed experiment settings. 
The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.", "To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT." ], "highlighted_evidence": [ "This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.\n\nTo accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT." ] } ] }, { "question": "What BERT model do they test?", "answers": [ { "answer": "BERTbase", "type": "extractive" }, { "answer": "BERTbase", "type": "extractive" } ], "q_uid": "bc31a3d2f7c608df8c019a64d64cb0ccc5669210", "evidence": [ { "raw_evidence": [ "This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768." ], "highlighted_evidence": [ " The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768." ] }, { "raw_evidence": [ "This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768." ], "highlighted_evidence": [ "The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768." ] } ] } ], "1801.07887": [ { "question": "What downstream tasks are evaluated?", "answers": [ { "answer": "text classification", "type": "extractive" } ], "q_uid": "f67b9bda14ec70feba2e0d10c400b2b2025a0a6a", "evidence": [ { "raw_evidence": [ "We evaluate the effect batch size has on active learning stopping methods for text classification. We use the publicly available 20Newsgroups dataset in our experiments." ], "highlighted_evidence": [ "We evaluate the effect batch size has on active learning stopping methods for text classification." 
] } ] }, { "question": "What is active learning?", "answers": [ { "answer": "A process of training a model when selected unlabeled samples are annotated on each iteration.", "type": "abstractive" }, { "answer": "Active learning is a process that selectively determines which unlabeled samples for a machine learning model should be annotated.", "type": "abstractive" } ], "q_uid": "1cfed6b0c9b5a079a51166209649a987e7553e4e", "evidence": [ { "raw_evidence": [ "Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. The number of samples that are selected for annotation at each iteration of active learning is called the batch size." ], "highlighted_evidence": [ "Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. " ] }, { "raw_evidence": [ "Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. The number of samples that are selected for annotation at each iteration of active learning is called the batch size." ], "highlighted_evidence": [ "Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. " ] } ] } ], "2002.04095": [ { "question": "How is segmentation quality evaluated?", "answers": [ { "answer": "Segmentation quality is evaluated by calculating the precision, recall, and F-score of the automatic segmentations in comparison to the segmentations made by expert annotators from the ANNODIS subcorpus.", "type": "abstractive" }, { "answer": "we compare the Annodis segmentation with the automatically produced segmentation", "type": "extractive" } ], "q_uid": "f8da63df16c4c42093e5778c01a8e7e9b270142e", "evidence": [ { "raw_evidence": [ "Two batch of tests were performed. The first on the $D$ set of documents common to the two subcorpus \u201cspecialist\u201d $E$ and \u201cnaive\u201d $N$ from Annodis. $D$ contains 38 documents with 13 364 words. This first test allowed to measure the distance between the human markers. In fact, in order to get an idea of the quality of the human segmentations, the cuts in the texts made by the specialists were measured it versus the so-called \u201cnaifs\u201d note takers and vice versa. The second series of tests consisted of using all the documents of the subcorpus \u201cspecialist\u201d $E$, because the documents of the subcorpus of Annodis are not identical. Then we benchmarked the performance of the three systems automatically.", "We have found that segmentation by experts and naive produces two subcorpus $E$ and $N$ with very similar characteristics. This surprised us, as we expected a more important difference between them. In any case, we deduced that, at least in this corpus, it is not necessary to be an expert in linguistics to discursively segment the documents. As far as system evaluations are concerned, we use the 78 $E$ documents as reference. Table TABREF26 shows the results.", "We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:" ], "highlighted_evidence": [ "The second series of tests consisted of using all the documents of the subcorpus \u201cspecialist\u201d $E$, because the documents of the subcorpus of Annodis are not identical. 
", "As far as system evaluations are concerned, we use the 78 $E$ documents as reference. ", "We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:" ] }, { "raw_evidence": [ "In this first exploratory work, only documents in French were considered, but the system can be adapted to other languages. The evaluation is based on the correspondence of word pairs representing a border. In this way we compare the Annodis segmentation with the automatically produced segmentation. For each pair of reference segments, a $L_r$ list of word pairs is provided: the last word of the first segment and the first word of the second." ], "highlighted_evidence": [ "The evaluation is based on the correspondence of word pairs representing a border. In this way we compare the Annodis segmentation with the automatically produced segmentation." ] } ] } ], "1710.04203": [ { "question": "How do they compare lexicons?", "answers": [ { "answer": "Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors.", "type": "abstractive" }, { "answer": "1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations)", "type": "extractive" } ], "q_uid": "c09a92e25e6a81369fcc4ae6045491f2690ccc10", "evidence": [ { "raw_evidence": [ "We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.", "Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered." ], "highlighted_evidence": [ "We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). ", "Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered." ] }, { "raw_evidence": [ "We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower." 
], "highlighted_evidence": [ "We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations)." ] } ] } ], "1911.01371": [ { "question": "How did they obtain the OSG dataset?", "answers": [ { "answer": "crawling and pre-processing an OSG web forum", "type": "extractive" }, { "answer": "data has been developed by crawling and pre-processing an OSG web forum", "type": "extractive" } ], "q_uid": "051df74dc643498e95d16e58851701628fdfd43e", "evidence": [ { "raw_evidence": [ "Our data has been developed by crawling and pre-processing an OSG web forum. The forum has a great variety of different groups such as depression, anxiety, stress, relationship, cancer, sexually transmitted diseases, etc. Each conversation starts with one post and can contain multiple comments. Each post or comment is represented by a poster, a timestamp, a list of users it is referencing to, thread id, a comment id and a conversation id. The thread id is the same for comments replying to each other, otherwise it is different. The thread id is increasing with time. Thus, it provides ordering among threads; whereas the timestamp provides ordering in the thread." ], "highlighted_evidence": [ "Our data has been developed by crawling and pre-processing an OSG web forum. The forum has a great variety of different groups such as depression, anxiety, stress, relationship, cancer, sexually transmitted diseases, etc." ] }, { "raw_evidence": [ "Datasets ::: OSG", "Our data has been developed by crawling and pre-processing an OSG web forum. The forum has a great variety of different groups such as depression, anxiety, stress, relationship, cancer, sexually transmitted diseases, etc. Each conversation starts with one post and can contain multiple comments. Each post or comment is represented by a poster, a timestamp, a list of users it is referencing to, thread id, a comment id and a conversation id. The thread id is the same for comments replying to each other, otherwise it is different. The thread id is increasing with time. Thus, it provides ordering among threads; whereas the timestamp provides ordering in the thread." ], "highlighted_evidence": [ "PLEASE ", "Datasets ::: OSG\nOur data has been developed by crawling and pre-processing an OSG web forum. " ] } ] }, { "question": "How large is the Twitter dataset?", "answers": [ { "answer": "1,873 Twitter conversation threads, roughly 14k tweets", "type": "extractive" }, { "answer": "1,873 Twitter conversation threads, roughly 14k tweets", "type": "extractive" } ], "q_uid": "33554065284110859a8ea3ca7346474ab2cab100", "evidence": [ { "raw_evidence": [ "We have downloaded 1,873 Twitter conversation threads, roughly 14k tweets, from a publicly available resource that were previously pre-processed and have conversation threads extracted. A conversation in the dataset consists of at least 4 tweets. Even though, according to BIBREF23, Twitter is broadly applicable to public health research, our expectation is that it contains less therapeutic conversations in comparison to specialized on-line support forums." ], "highlighted_evidence": [ "We have downloaded 1,873 Twitter conversation threads, roughly 14k tweets, from a publicly available resource that were previously pre-processed and have conversation threads extracted." 
] }, { "raw_evidence": [ "We have downloaded 1,873 Twitter conversation threads, roughly 14k tweets, from a publicly available resource that were previously pre-processed and have conversation threads extracted. A conversation in the dataset consists of at least 4 tweets. Even though, according to BIBREF23, Twitter is broadly applicable to public health research, our expectation is that it contains less therapeutic conversations in comparison to specialized on-line support forums." ], "highlighted_evidence": [ "We have downloaded 1,873 Twitter conversation threads, roughly 14k tweets, from a publicly available resource that were previously pre-processed and have conversation threads extracted." ] } ] } ], "1909.09551": [ { "question": "How they utilize LDA and Gibbs sampling to evaluate ISWC and WWW publications?", "answers": [ { "answer": "the LDA approaches to recommendation systems and given the importance of research, we have studied recent impressive articles on this subject and presented a taxonomy of recommendation systems based on LDA of the recent research, we evaluated ISWC and WWW conferences articles from DBLP website and used the Gibbs sampling algorithm as an evaluation parameter", "type": "extractive" }, { "answer": "discover the trends of the topics and find relationship between LDA topics and paper features and generate trust tags, learn a LDA model with 100 topics; $\\alpha =0.01$, $\\beta = 0.01$ and using Gibbs sampling as a parameter estimation", "type": "extractive" } ], "q_uid": "54830abe73fef4e629a36866ceeeca10214bd2c8", "evidence": [ { "raw_evidence": [ "In this study, we focused on the LDA approaches to recommendation systems and given the importance of research, we have studied recent impressive articles on this subject and presented a taxonomy of recommendation systems based on LDA of the recent research. we evaluated ISWC and WWW conferences articles from DBLP website and used the Gibbs sampling algorithm as an evaluation parameter. We succeeded in discovering the relationship between LDA topics and paper features and also obtained the researchers' interest in research field. According to our studies, some issues require further research, which can be very effective and attractive for the future.", "As previously mentioned, Topic modeling can find a collection of distributions over words for each topic and the relationship of topics with each document. To perform approximate inference and learning LDA, there are many inference methods for LDA topic model such as Gibbs sampling, collapsed Variational Bayes, Expectation Maximization. Gibbs sampling is a popular technique because of its simplicity and low latency. However, for large numbers of topics, Gibbs sampling can become unwieldy. In this paper, we use Gibbs Sampling in our experiment in section 5." ], "highlighted_evidence": [ "In this study, we focused on the LDA approaches to recommendation systems and given the importance of research, we have studied recent impressive articles on this subject and presented a taxonomy of recommendation systems based on LDA of the recent research. we evaluated ISWC and WWW conferences articles from DBLP website and used the Gibbs sampling algorithm as an evaluation parameter. 
We succeeded in discovering the relationship between LDA topics and paper features and also obtained the researchers' interest in research field.", "To perform approximate inference and learning LDA, there are many inference methods for LDA topic model such as Gibbs sampling, collapsed Variational Bayes, Expectation Maximization. Gibbs sampling is a popular technique because of its simplicity and low latency. However, for large numbers of topics, Gibbs sampling can become unwieldy. In this paper, we use Gibbs Sampling in our experiment in section 5." ] }, { "raw_evidence": [ "We extracted ISWC and WWW conferences publications from DBLP website by only considering conferences for which data was available for years 2013-2017. In total, It should be noted that in these experiments, we considered abstracts and titles from each article. In this paper, we used MALLET (http://mallet.cs.umass.edu/) to implement the inference and obtain the topic models. In addition, our full dataset is available at https://github.com/JeloH/Dataset_DBLP. The most important goal of this experiment is discover the trends of the topics and find relationship between LDA topics and paper features and generate trust tags.", "In this paper, all experiments were carried out on a machine running Windows 7 with CoreI3 and 4 GB memory. We learn a LDA model with 100 topics; $\\alpha =0.01$, $\\beta = 0.01$ and using Gibbs sampling as a parameter estimation. Related words for a topic are quite intuitive and comprehensive in the sense of supplying a semantic short of a specific research field." ], "highlighted_evidence": [ "The most important goal of this experiment is discover the trends of the topics and find relationship between LDA topics and paper features and generate trust tags.", "We learn a LDA model with 100 topics; $\\alpha =0.01$, $\\beta = 0.01$ and using Gibbs sampling as a parameter estimation. Related words for a topic are quite intuitive and comprehensive in the sense of supplying a semantic short of a specific research field." ] } ] } ], "1701.02962": [ { "question": "What dataset do they use to evaluate their method?", "answers": [ { "answer": "antonym and synonym pairs, collected from WordNet BIBREF9 and Wordnik", "type": "extractive" }, { "answer": "English Wikipedia dump from June 2016", "type": "extractive" } ], "q_uid": "2fbb6322e485e7743ec3fb4bb02d44bf4b5ea8a6", "evidence": [ { "raw_evidence": [ "For training the models, neural networks require a large amount of training data. We use the existing large-scale antonym and synonym pairs previously used by Nguyen:16. Originally, the data pairs were collected from WordNet BIBREF9 and Wordnik." ], "highlighted_evidence": [ "We use the existing large-scale antonym and synonym pairs previously used by Nguyen:16. Originally, the data pairs were collected from WordNet BIBREF9 and Wordnik." ] }, { "raw_evidence": [ "We use the English Wikipedia dump from June 2016 as the corpus resource for our methods and baselines. For parsing the corpus, we rely on spaCy. For the lemma embeddings, we rely on the word embeddings of the dLCE model BIBREF10 which is the state-of-the-art vector representation for distinguishing antonyms from synonyms. We re-implemented this cutting-edge model on Wikipedia with 100 dimensions, and then make use of the dLCE word embeddings for initialization the lemma embeddings. The embeddings of POS tags, dependency labels, distance labels, and out-of-vocabulary lemmas are initialized randomly. 
The number of dimensions is set to 10 for the embeddings of POS tags, dependency labels and distance labels. We use the validation sets to tune the number of dimensions for these labels. For optimization, we rely on the cross-entropy loss function and Stochastic Gradient Descent with the Adadelta update rule BIBREF11 . For training, we use the Theano framework BIBREF12 . Regularization is applied by a dropout of 0.5 on each of component's embeddings (dropout rate is tuned on the validation set). We train the models with 40 epochs and update all embeddings during training." ], "highlighted_evidence": [ "We use the English Wikipedia dump from June 2016 as the corpus resource for our methods and baselines. " ] } ] } ], "1806.05504": [ { "question": "Why are current ELS's not sufficiently effective?", "answers": [ { "answer": "Linked entities may be ambiguous or too common", "type": "abstractive" }, { "answer": "linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness., the linked entities may also be too common to be considered an entity.", "type": "extractive" } ], "q_uid": "ef7212075e80bf35b7889dc8dd52fcbae0d1400a", "evidence": [ { "raw_evidence": [ "Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.", "First, the extracted entities may be ambiguous. In the example, the entity \u201cSouth Korean\u201d is ambiguous because it can refer to both the South Korean person and the South Korean language, among others. In our experimental data, we extract (1) the top 100 entities based on frequency, and (2) the entities extracted from 100 randomly selected texts, and check whether they have disambiguation pages in Wikipedia or not. We discover that $71.0\\%$ of the top 100 entities and $53.6\\%$ of the entities picked at random have disambiguation pages, which shows that most entities are prone to ambiguity problems.", "Second, the linked entities may also be too common to be considered an entity. This may introduce errors and irrelevance to the summary. In the example, \u201cWednesday\u201d is erroneous because it is wrongly linked to the entity \u201cWednesday Night Baseball\u201d. Also, \u201cswap\u201d is irrelevant because although it is linked correctly to the entity \u201cTrade (Sports)\u201d, it is too common and irrelevant when generating the summaries. In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect. Results show that $29.4\\%$ , $13.7\\%$ , $30.0\\%$ , and $26.9\\%$ are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities." ], "highlighted_evidence": [ "Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 .", "First, the extracted entities may be ambiguous.", "Second, the linked entities may also be too common to be considered an entity. 
" ] }, { "raw_evidence": [ "Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.", "Second, the linked entities may also be too common to be considered an entity. This may introduce errors and irrelevance to the summary. In the example, \u201cWednesday\u201d is erroneous because it is wrongly linked to the entity \u201cWednesday Night Baseball\u201d. Also, \u201cswap\u201d is irrelevant because although it is linked correctly to the entity \u201cTrade (Sports)\u201d, it is too common and irrelevant when generating the summaries. In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect. Results show that $29.4\\%$ , $13.7\\%$ , $30.0\\%$ , and $26.9\\%$ are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities." ], "highlighted_evidence": [ "linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.", "the linked entities may also be too common to be considered an entity. " ] } ] } ], "1908.05828": [ { "question": "What is the best model?", "answers": [ { "answer": "BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS ", "type": "extractive" } ], "q_uid": "567dc9bad8428ea9a2658c88203a0ed0f8da0dc3", "evidence": [ { "raw_evidence": [ "We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively." ], "highlighted_evidence": [ "Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively." ] } ] }, { "question": "Do the authors train a Naive Bayes classifier on their dataset?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "d8627ba08b7342e473b8a2b560baa8cdbae3c7fd", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "Which machine learning models do they explore?", "answers": [ { "answer": "BiLSTM, BiLSTM-CNN, BiLSTM-CRF, BiLSTM-CNN-CRF", "type": "extractive" }, { "answer": "BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2, CNN modelBIBREF0 and Stanford CRF modelBIBREF21", "type": "extractive" } ], "q_uid": "8a7615fc6ff1de287d36ab21bf2c6a3b2914f73d", "evidence": [ { "raw_evidence": [ "In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. 
The RNN variants are initialized randomly from $(-\\sqrt{k},\\sqrt{k})$ where $k=\\frac{1}{hidden\\_size}$." ], "highlighted_evidence": [ "We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. " ] }, { "raw_evidence": [ "Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone." ], "highlighted_evidence": [ "First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21." ] } ] }, { "question": "What is the source of their dataset?", "answers": [ { "answer": "daily newspaper of the year 2015-2016", "type": "extractive" }, { "answer": "daily newspaper of the year 2015-2016", "type": "extractive" } ], "q_uid": "bb2de20ee5937da7e3e6230e942bec7b6e8f61ee", "evidence": [ { "raw_evidence": [ "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25." ], "highlighted_evidence": [ "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. " ] }, { "raw_evidence": [ "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25." ], "highlighted_evidence": [ "This dataset contains the sentences collected from daily newspaper of the year 2015-2016." 
] } ] }, { "question": "Do they try to use byte-pair encoding representations?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "1170e4ee76fa202cabac9f621e8fbeb4a6c5f094", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "Which models are used to solve NER for Nepali?", "answers": [ { "answer": "BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2, CNN modelBIBREF0 and Stanford CRF modelBIBREF21", "type": "extractive" }, { "answer": "BiLSTM, BiLSTM+CNN, BiLSTM+CRF, BiLSTM+CNN+CRF, CNN, Stanford CRF", "type": "extractive" } ], "q_uid": "6d1217b3d9cfb04be7fcd2238666fa02855ce9c5", "evidence": [ { "raw_evidence": [ "Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone." ], "highlighted_evidence": [ "First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21." ] }, { "raw_evidence": [ "Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone." ], "highlighted_evidence": [ "Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. 
First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. " ] } ] } ], "1909.13104": [ { "question": "What language(s) is/are represented in the dataset?", "answers": [ { "answer": "english", "type": "extractive" }, { "answer": "english", "type": "extractive" } ], "q_uid": "1e775cf30784e6b1c2b573294a82e145a3f959bb", "evidence": [ { "raw_evidence": [ "As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models." ], "highlighted_evidence": [ "To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation." ] }, { "raw_evidence": [ "As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models." ], "highlighted_evidence": [ "As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. " ] } ] }, { "question": "What baseline model is used?", "answers": [ { "answer": " LastStateRNN, AvgRNN, AttentionRNN", "type": "extractive" }, { "answer": "LastStateRNN, AvgRNN, AttentionRNN ", "type": "extractive" } ], "q_uid": "392fb87564c4f45d0d8d491a9bb217c4fce87f03", "evidence": [ { "raw_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. 
So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ], "highlighted_evidence": [ "Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ] }, { "raw_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ], "highlighted_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category.\n\n" ] } ] }, { "question": "Which variation provides the best results on this dataset?", "answers": [ { "answer": "the model with multi-attention mechanism and a projected layer", "type": "abstractive" }, { "answer": "Projected Layer", "type": "extractive" } ], "q_uid": "203337c15bd1ee05763c748391d295a1f6415b9b", "evidence": [ { "raw_evidence": [ "We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. 
Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement." ], "highlighted_evidence": [ "We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.\n\n" ] }, { "raw_evidence": [ "We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement." ], "highlighted_evidence": [ "In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement." ] } ] }, { "question": "What are the different variations of the attention-based approach which are examined?", "answers": [ { "answer": "classic RNN model, avgRNN model, attentionRNN model and multiattention RNN model with and without a projected layer", "type": "abstractive" }, { "answer": " four attention mechanisms instead of one, a projection layer for the word embeddings", "type": "extractive" } ], "q_uid": "d004ca2e999940ac5c1576046e30efa3059832fa", "evidence": [ { "raw_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ], "highlighted_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). 
Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ] }, { "raw_evidence": [ "where $h_{*}$ is the state that comes out from the MLP. The weights $\\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach." ], "highlighted_evidence": [ " In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. ", "Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2)." ] } ] }, { "question": "What dataset is used for this work?", "answers": [ { "answer": "Twitter dataset provided by the organizers", "type": "abstractive" }, { "answer": "The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference.", "type": "abstractive" } ], "q_uid": "21548433abd21346659505296fb0576e78287a74", "evidence": [ { "raw_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion." ], "highlighted_evidence": [ "We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. 
" ] }, { "raw_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion." ], "highlighted_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference.", "We are using the dataset of the competition, which includes text from tweets having the aforementioned categories." ] } ] }, { "question": "What types of online harassment are studied?", "answers": [ { "answer": "indirect harassment, sexual and physical harassment", "type": "extractive" }, { "answer": "indirect, physical, sexual", "type": "extractive" } ], "q_uid": "f0b2289cb887740f9255909018f400f028b1ef26", "evidence": [ { "raw_evidence": [ "The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results." ], "highlighted_evidence": [ "The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment." ] }, { "raw_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. 
The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion." ], "highlighted_evidence": [ "The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well." ] } ] }, { "question": "What was the baseline?", "answers": [ { "answer": "LastStateRNN, AvgRNN, AttentionRNN", "type": "extractive" } ], "q_uid": "51b1142c1d23420dbf6d49446730b0e82b32137c", "evidence": [ { "raw_evidence": [ "We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category." ], "highlighted_evidence": [ "Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9." ] } ] }, { "question": "What were the datasets used in this paper?", "answers": [ { "answer": "The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. ", "type": "abstractive" }, { "answer": "Twitter dataset provided by organizers containing harassment and non-harassment tweets", "type": "abstractive" } ], "q_uid": "58355e2a782bf145b61ee2a3e0e426119985c179", "evidence": [ { "raw_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. 
We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion." ], "highlighted_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. ", " We are using the dataset of the competition, which includes text from tweets having the aforementioned categories." ] }, { "raw_evidence": [ "In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classi\ufb01cation specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion.", "The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results." ], "highlighted_evidence": [ "We are using the dataset of the competition, which includes text from tweets having the aforementioned categories.", "The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. 
Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results." ] } ] } ], "2002.02070": [ { "question": "Is car-speak language collection of abstract features that classifier is later trained on?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "25c1c4a91f5dedd4e06d14121af3b5921db125e9", "evidence": [ { "raw_evidence": [ "The term \u201cfast\u201d is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term \u201cfast\u201d refers to.", "We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13." ], "highlighted_evidence": [ "Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term \u201cfast\u201d refers to.", "We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13." ] }, { "raw_evidence": [ "The term \u201cfast\u201d is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term \u201cfast\u201d refers to.", "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them." ], "highlighted_evidence": [ "The term \u201cfast\u201d is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). 
However, we do not know exactly which attributes the term \u201cfast\u201d refers to.", "In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them." ] } ] }, { "question": "Is order of \"words\" important in car speak language?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "f88036174b4a0dbf4fe70ddad884d16082c5748d", "evidence": [ { "raw_evidence": [ "We would like to be able to represent each car with the most relevant car-speak terms. We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews. Figure FIGREF6 shows the frequency of the top 20 words that remain.", "So far we have compiled the most relevant terms in from the reviews. We now need to weight these terms for each review, so that we know the car-speak terms are most associated with a car. Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10." ], "highlighted_evidence": [ " We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews.", "Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10." ] } ] }, { "question": "What are labels in car speak language dataset?", "answers": [ { "answer": "car ", "type": "extractive" }, { "answer": "the car", "type": "extractive" } ], "q_uid": "a267d620af319b48e56c191aa4c433ea3870f6fb", "evidence": [ { "raw_evidence": [ "We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12." ], "highlighted_evidence": [ "We label each review vector with the car it reviews. " ] }, { "raw_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them.", "We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. 
We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12." ], "highlighted_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification.", "We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). " ] } ] }, { "question": "How big is dataset of car-speak language?", "answers": [ { "answer": "$3,209$ reviews ", "type": "extractive" }, { "answer": "$3,209$ reviews about 553 different cars from 49 different car manufacturers", "type": "extractive" } ], "q_uid": "899ed05c460bf2aa0aa65101cad1986d4f622652", "evidence": [ { "raw_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them." ], "highlighted_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers." ] }, { "raw_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them." ], "highlighted_evidence": [ "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms." ] } ] }, { "question": "How does car speak pertains to a car's physical attributes?", "answers": [ { "answer": "we do not know exactly", "type": "extractive" } ], "q_uid": "6bf93968110c6e3e3640360440607744007a5228", "evidence": [ { "raw_evidence": [ "The term \u201cfast\u201d is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term \u201cfast\u201d refers to." ], "highlighted_evidence": [ "Car-speak is abstract language that pertains to a car's physical attribute(s). 
In this instance the physical attributes that the term \u201cfast\u201d pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term \u201cfast\u201d refers to." ] } ] } ], "1611.03599": [ { "question": "What topic is covered in the Chinese Facebook data? ", "answers": [ { "answer": "anti-nuclear-power", "type": "extractive" }, { "answer": "anti-nuclear-power", "type": "extractive" } ], "q_uid": "37a79be0148e1751ffb2daabe4c8ec6680036106", "evidence": [ { "raw_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen\u2019s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero." ], "highlighted_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs." ] }, { "raw_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen\u2019s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . 
About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero." ], "highlighted_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. " ] } ] }, { "question": "How many layers does the UTCNN model have?", "answers": [ { "answer": "eight layers", "type": "abstractive" } ], "q_uid": "518dae6f936882152c162058895db4eca815e649", "evidence": [ { "raw_evidence": [ "Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.", "As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. 
The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.", "Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post." ], "highlighted_evidence": [ "Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.\n\nAs for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. 
Indeed, we believe this is one reason for UTCNN's performance gains.\n\nFinally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post." ] } ] }, { "question": "What topics are included in the debate data?", "answers": [ { "answer": "abortion, gay rights, Obama, marijuana", "type": "extractive" }, { "answer": "abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR)", "type": "extractive" } ], "q_uid": "e44a6bf67ce3fde0c6608b150030e44d87eb25e3", "evidence": [ { "raw_evidence": [ "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ." ], "highlighted_evidence": [ "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). " ] }, { "raw_evidence": [ "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ." ], "highlighted_evidence": [ "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. " ] } ] }, { "question": "What is the size of the Chinese data?", "answers": [ { "answer": "32,595 posts", "type": "extractive" }, { "answer": "32,595", "type": "extractive" } ], "q_uid": "6a31db1aca57a818f36bba9002561724655372a7", "evidence": [ { "raw_evidence": [ "To test whether the assumption of this paper \u2013 posts attract users who hold the same stance to like them \u2013 is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. 
Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach." ], "highlighted_evidence": [ "Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts." ] }, { "raw_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen\u2019s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.", "To test whether the assumption of this paper \u2013 posts attract users who hold the same stance to like them \u2013 is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach." ], "highlighted_evidence": [ "The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. ", "Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts." 
] } ] }, { "question": "Did they collect the two datasets?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "e330e162ec29722f5ec9f83853d129c9e0693d65", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [ "We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms.", "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ." ], "highlighted_evidence": [ "We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. ", "The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. " 
] } ] }, { "question": "What are the baselines?", "answers": [ { "answer": "SVM with unigram, bigram, and trigram features, SVM with average word embedding, SVM with average transformed word embeddings, CNN, Recurrent Convolutional Neural Networks, SVM and deep learning models with comment information", "type": "extractive" }, { "answer": "SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information", "type": "abstractive" } ], "q_uid": "d3093062aebff475b4deab90815004051e802aa6", "evidence": [ { "raw_evidence": [ "We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced."
], "highlighted_evidence": [ "We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; " ] }, { "raw_evidence": [ "We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced." ], "highlighted_evidence": [ "We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. 
All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced." ] } ] } ], "1908.10084": [ { "question": "What transfer learning tasks are evaluated?", "answers": [ { "answer": "MR, CR, SUBJ, MPQA, SST, TREC, MRPC", "type": "extractive" }, { "answer": "MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31.", "type": "extractive" }, { "answer": "Semantic Textual Similarity, sentiment prediction, subjectivity prediction, phrase level opinion polarity classification, Stanford Sentiment Treebank, fine grained question-type classification.", "type": "abstractive" } ], "q_uid": "4944cd597b836b62616a4e37c045ce48de8c82ca", "evidence": [ { "raw_evidence": [ "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.", "CR: Sentiment prediction of customer product reviews BIBREF26.", "SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.", "MPQA: Phrase level opinion polarity classification from newswire BIBREF28.", "SST: Stanford Sentiment Treebank with binary labels BIBREF29.", "TREC: Fine grained question-type classification from TREC BIBREF30.", "MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." ], "highlighted_evidence": [ "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." ] }, { "raw_evidence": [ "The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. 
However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks.", "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.", "CR: Sentiment prediction of customer product reviews BIBREF26.", "SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.", "MPQA: Phrase level opinion polarity classification from newswire BIBREF28.", "SST: Stanford Sentiment Treebank with binary labels BIBREF29.", "TREC: Fine grained question-type classification from TREC BIBREF30.", "MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." ], "highlighted_evidence": [ "The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks.\n\nWe compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." ] }, { "raw_evidence": [ "We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively.", "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.", "CR: Sentiment prediction of customer product reviews BIBREF26.", "SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.", "MPQA: Phrase level opinion polarity classification from newswire BIBREF28.", "SST: Stanford Sentiment Treebank with binary labels BIBREF29.", "TREC: Fine grained question-type classification from TREC BIBREF30.", "MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." 
], "highlighted_evidence": [ "On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder.", "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31." ] } ] }, { "question": "What metrics are used for the STS tasks?", "answers": [ { "answer": " Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels", "type": "extractive" }, { "answer": "Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels", "type": "extractive" } ], "q_uid": "a29c071065d26e5ee3c3bcd877e7f215c59d1d33", "evidence": [ { "raw_evidence": [ "We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6." ], "highlighted_evidence": [ "We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels." ] }, { "raw_evidence": [ "We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6." ], "highlighted_evidence": [ "Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. " ] } ] }, { "question": "How much time takes its training?", "answers": [ { "answer": "20 minutes", "type": "extractive" } ], "q_uid": "7f207549c75f5c4388efc15ed28822672b845663", "evidence": [ { "raw_evidence": [ "Previous neural sentence embedding methods started the training from a random initialization. 
In this publication, we use the pre-trained BERT and RoBERTa network and only fine-tune it to yield useful sentence embeddings. This reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods." ], "highlighted_evidence": [ "his reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods." ] } ] }, { "question": "How are the siamese networks trained?", "answers": [ { "answer": "update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity., Classification Objective Function, Regression Objective Function, Triplet Objective Function", "type": "extractive" } ], "q_uid": "2e89ebd2e4008c67bb2413699589ee55f59c4f36", "evidence": [ { "raw_evidence": [ "SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.", "In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.", "The network structure depends on the available training data. We experiment with the following structures and objective functions.", "Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \\in \\mathbb {R}^{3n \\times k}$:", "where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4.", "Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function.", "Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function:", "with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\\cdot ||$ a distance metric and margin $\\epsilon $. Margin $\\epsilon $ ensures that $s_p$ is at least $\\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\\epsilon =1$ in our experiments." ], "highlighted_evidence": [ "Model\nSBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.\n\nIn order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.\n\nThe network structure depends on the available training data. 
We experiment with the following structures and objective functions.\n\nClassification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \\in \\mathbb {R}^{3n \\times k}$:\n\nwhere $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4.\n\nRegression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function.\n\nTriplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function:\n\nwith $s_x$ the sentence embedding for $a$/$n$/$p$, $||\\cdot ||$ a distance metric and margin $\\epsilon $. Margin $\\epsilon $ ensures that $s_p$ is at least $\\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\\epsilon =1$ in our experiments." ] } ] } ], "1707.06806": [ { "question": "Which pretrained word vectors did they use?", "answers": [ { "answer": " pre-trained GloVe word vectors ", "type": "extractive" }, { "answer": "GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC)", "type": "extractive" } ], "q_uid": "ed67359889cf61fa11ee291d6c378cccf83d599d", "evidence": [ { "raw_evidence": [ "Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details." ], "highlighted_evidence": [ "Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details." ] }, { "raw_evidence": [ "As a text embedding in our experiments, we use publicly available GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC). Since their output dimensionality can be modified, we show the results for varying dimensionality sizes. On top of that, we evaluate two training approaches: using static word vectors and fine-tuning them during training phase." ], "highlighted_evidence": [ "As a text embedding in our experiments, we use publicly available GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC). " ] } ] }, { "question": "What evaluation metrics are used?", "answers": [ { "answer": "standard accuracy metric", "type": "extractive" }, { "answer": "accuracy", "type": "extractive" } ], "q_uid": "425bd2ccfd95ead91d8f2b1b1c8ab9fc3446cb82", "evidence": [ { "raw_evidence": [ "In this section, we evaluate our method and compare its performance against the competitive approaches. 
We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples." ], "highlighted_evidence": [ "In this section, we evaluate our method and compare its performance against the competitive approaches. We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples." ] }, { "raw_evidence": [ "In this paper we propose a method for online content popularity prediction based on a bidirectional recurrent neural network called BiLSTM. This work is inspired by recent successful applications of deep neural networks in many natural language processing problems BIBREF5 , BIBREF6 . Our method attempts to model complex relationships between the title of an article and its popularity using novel deep network architecture that, in contrast to the previous approaches, gives highly interpretable results. Last but not least, the proposed BiLSTM method provides a significant performance boost in terms of prediction accuracy over the standard shallow approach, while outperforming the current state-of-the-art on two distinct datasets with over 40,000 samples." ], "highlighted_evidence": [ "Last but not least, the proposed BiLSTM method provides a significant performance boost in terms of prediction accuracy over the standard shallow approach, while outperforming the current state-of-the-art on two distinct datasets with over 40,000 samples." ] } ] }, { "question": "Which shallow approaches did they experiment with?", "answers": [ { "answer": "SVM", "type": "extractive" }, { "answer": "SVM with linear kernel using bag-of-words features", "type": "abstractive" } ], "q_uid": "955de9f7412ba98a0c91998919fa048d339b1d48", "evidence": [ { "raw_evidence": [ "As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM." ], "highlighted_evidence": [ "As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM." ] }, { "raw_evidence": [ "As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM." ], "highlighted_evidence": [ "As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel." 
] } ] }, { "question": "Where do they obtain the news videos from?", "answers": [ { "answer": "NowThisNews Facebook page", "type": "extractive" }, { "answer": "NowThisNews Facebook page", "type": "extractive" } ], "q_uid": "3b371ea554fa6639c76a364060258454e4b931d4", "evidence": [ { "raw_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.", "contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular." ], "highlighted_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.\n\ncontains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular." ] }, { "raw_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.", "contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular." ], "highlighted_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.", "contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016." ] } ] }, { "question": "What is the source of the news articles?", "answers": [ { "answer": "main news channels, such as Yahoo News, The Guardian or The Washington Post", "type": "extractive" }, { "answer": "The BreakingNews dataset", "type": "extractive" } ], "q_uid": "ddb23a71113cbc092cbc158066d891cae261e2c6", "evidence": [ { "raw_evidence": [ "BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. 
Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher." ], "highlighted_evidence": [ "BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher." ] }, { "raw_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles." ], "highlighted_evidence": [ "In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles." ] } ] } ], "1806.04511": [ { "question": "which non-English language had the worst results?", "answers": [ { "answer": "Turkish", "type": "extractive" } ], "q_uid": "c7486d039304ca9d50d0571236429f4f6fbcfcf7", "evidence": [ { "raw_evidence": [ "Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements." ], "highlighted_evidence": [ "Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages." 
] } ] }, { "question": "what datasets were used in evaluation?", "answers": [ { "answer": "SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28", "type": "extractive" }, { "answer": " English reviews , restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian)", "type": "extractive" } ], "q_uid": "f1f1dcc67b3e4d554bfeb508226cdadb3c32d2e9", "evidence": [ { "raw_evidence": [ "For evaluation of the multilingual approach, we use four languages. These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus." ], "highlighted_evidence": [ "These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus." ] }, { "raw_evidence": [ "Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews, therefore all datasets in this study have two class values (positive, negative)." ], "highlighted_evidence": [ "Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian)." ] } ] }, { "question": "what are the baselines?", "answers": [ { "answer": "majority baseline, lexicon-based approach", "type": "extractive" }, { "answer": "majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset, lexicon-based approach", "type": "extractive" } ], "q_uid": "a103636c8d1dbfa53341133aeb751ffec269415c", "evidence": [ { "raw_evidence": [ "In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review.", "For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts \u201cpositive\u201d will be 60% accurate and will make mistakes 40% of the time." ], "highlighted_evidence": [ "In addition to the majority baseline, we also compare our results with a lexicon-based approach.", "For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset." ] }, { "raw_evidence": [ "For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. 
For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts \u201cpositive\u201d will be 60% accurate and will make mistakes 40% of the time.", "In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review." ], "highlighted_evidence": [ "For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts \u201cpositive\u201d will be 60% accurate and will make mistakes 40% of the time.\n\nIn addition to the majority baseline, we also compare our results with a lexicon-based approach. " ] } ] }, { "question": "how did the authors translate the reviews to other languages?", "answers": [ { "answer": "Using Google translation API.", "type": "abstractive" }, { "answer": "Google translation API", "type": "extractive" } ], "q_uid": "55139fcfe04ce90aad407e2e5a0067a45f31e07e", "evidence": [ { "raw_evidence": [ "In order to eliminate the need to find data and build separate models for each language, we propose a multilingual approach where a single model is built in the language where the largest resources are available. In this paper we focus on English as there are several sentiment analysis datasets in English. To make the English sentiment analysis model as generalizable as possible, we first start by training with a large dataset that has product reviews for different categories. Then, using the trained weights from the larger generic dataset, we make the model more specialized for a specific domain. We further train the model with domain-specific English reviews and use this trained model to score reviews that share the same domain from different languages. To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. Figure FIGREF1 shows our multilingual sentiment analysis approach. It is important to note that this approach does not utilize any resource in any of the languages of the test sets (e.g., word embeddings, lexicons, training set).", "Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used." ], "highlighted_evidence": [ " To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. ", " For machine translation, Google translation API is used." ] }, { "raw_evidence": [ "Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used." ], "highlighted_evidence": [ "For machine translation, Google translation API is used." 
] } ] }, { "question": "what dataset was used for training?", "answers": [ { "answer": "Amazon reviews, Yelp restaurant reviews, restaurant reviews", "type": "extractive" }, { "answer": "Amazon reviews BIBREF23 , BIBREF24, Yelp restaurant reviews dataset, restaurant reviews dataset as part of a Kaggle competition BIBREF26", "type": "extractive" } ], "q_uid": "fbaf060004f196a286fef67593d2d76826f0304e", "evidence": [ { "raw_evidence": [ "With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ." ], "highlighted_evidence": [ "With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ." ] }, { "raw_evidence": [ "With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ." ], "highlighted_evidence": [ "With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ." ] } ] } ], "1904.04358": [ { "question": "How do they demonstrate that this type of EEG has discriminative information about the intended articulatory movements responsible for speech?", "answers": [ { "answer": "we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. 
FIGREF8 .", "type": "extractive" } ], "q_uid": "7ae38f51243cb80b16a1df14872b72a1f8a2048f", "evidence": [ { "raw_evidence": [ "To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance.", "FLOAT SELECTED: Fig. 3. tSNE feature visualization for \u00b1nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features" ], "highlighted_evidence": [ "To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance.", "FLOAT SELECTED: Fig. 3. tSNE feature visualization for \u00b1nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features" ] } ] }, { "question": "What are the five different binary classification tasks?", "answers": [ { "answer": " presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels.", "type": "extractive" }, { "answer": "presence/absence of consonants, presence/absence of phonemic nasal, presence/absence of bilabial, presence/absence of high-front vowels, and presence/absence of high-back vowels", "type": "abstractive" } ], "q_uid": "deb89bca0925657e0f91ab5daca78b9e548de2bd", "evidence": [ { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ " In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." 
] }, { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ "In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ] } ] }, { "question": "How was the spatial aspect of the EEG signal computed?", "answers": [ { "answer": "we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers.", "type": "extractive" }, { "answer": "They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect.", "type": "abstractive" } ], "q_uid": "9c33b340aefbc1f15b6eb6fb3e23ee615ce5b570", "evidence": [ { "raw_evidence": [ "In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN." ], "highlighted_evidence": [ "In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN." 
] }, { "raw_evidence": [ "In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN." ], "highlighted_evidence": [ "In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers." ] } ] }, { "question": "What data was presented to the subjects to elicit event-related responses?", "answers": [ { "answer": "7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)", "type": "extractive" }, { "answer": "KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)", "type": "extractive" } ], "q_uid": "e6583c60b13b87fc37af75ffc975e7e316d4f4e0", "evidence": [ { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. " ] }, { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. 
Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ] } ] }, { "question": "How many electrodes were used on the subject in EEG sessions?", "answers": [ { "answer": "1913 signals", "type": "extractive" } ], "q_uid": "c7b6e6cb997de1660fd24d31759fe6bb21c7863f", "evidence": [ { "raw_evidence": [ "We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goals was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE." ], "highlighted_evidence": [ " For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE." ] } ] }, { "question": "How many subjects does the EEG data come from?", "answers": [ { "answer": "14", "type": "extractive" }, { "answer": "14 participants", "type": "extractive" } ], "q_uid": "f9f59c171531c452bd2767dc332dc74cadee5120", "evidence": [ { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. 
Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. " ] }, { "raw_evidence": [ "We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels." ], "highlighted_evidence": [ "The dataset consists of 14 participants, with each prompt presented 11 times to each individual. " ] } ] } ], "1912.00667": [ { "question": "Do they report results only on English data?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "4ac2c3c259024d7cd8e449600b499f93332dab60", "evidence": [ { "raw_evidence": [ "Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. 
Table TABREF25 shows key statistics from our two datasets." ], "highlighted_evidence": [ "Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.)" ] } ] }, { "question": "What type of classifiers are used?", "answers": [ { "answer": "probabilistic model", "type": "extractive" }, { "answer": "Logistic Regression, Multilayer Perceptron", "type": "extractive" } ], "q_uid": "bc730e4d964b6a66656078e2da130310142ab641", "evidence": [ { "raw_evidence": [ "This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method." ], "highlighted_evidence": [ "This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously." ] }, { "raw_evidence": [ "Comparison Methods. To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models. As the goal of our experiments is to demonstrate the effectiveness of our approach as a new model training technique, we use these widely used models. Also, we note that in our case other neural network models with more complex network architectures for event detection, such as the bi-directional LSTM BIBREF17, turn out to be less effective than a simple feedforward network. For both LR and MLP, we evaluate our proposed human-AI loop approach for keyword discovery and expectation estimation by comparing against the weakly supervised learning method proposed by BIBREF1 (BIBREF1) and BIBREF17 (BIBREF17) where only one initial keyword is used with an expectation estimated by an individual expert." ], "highlighted_evidence": [ "To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models." ] } ] }, { "question": "Which real-world datasets are used?", "answers": [ { "answer": "Tweets related to CyberAttack and tweets related to PoliticianDeath", "type": "abstractive" }, { "answer": "cyber security (CyberAttack), death of politicians (PoliticianDeath)", "type": "extractive" } ], "q_uid": "3941401a182a3d6234894a5c8a75d48c6116c45c", "evidence": [ { "raw_evidence": [ "Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. 
Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets." ], "highlighted_evidence": [ "Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively." ] }, { "raw_evidence": [ "Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets." ], "highlighted_evidence": [ "We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath)." ] } ] }, { "question": "How are the interpretability merits of the approach demonstrated?", "answers": [ { "answer": "By involving humans for post-hoc evaluation of model's interpretability", "type": "abstractive" }, { "answer": "directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model", "type": "extractive" } ], "q_uid": "67e9e147b2cab5ba43572ce8a17fc863690172f0", "evidence": [ { "raw_evidence": [ "Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. 
Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model." ], "highlighted_evidence": [ "Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. " ] }, { "raw_evidence": [ "Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model." ], "highlighted_evidence": [ "In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model." ] } ] }, { "question": "How are the accuracy merits of the approach demonstrated?", "answers": [ { "answer": "significant improvements clearly demonstrate that our approach is effective at improving model performance", "type": "extractive" }, { "answer": "By evaluating the performance of the approach using accuracy and AUC", "type": "abstractive" } ], "q_uid": "a74190189a6ced2a2d5b781e445e36f4e527e82a", "evidence": [ { "raw_evidence": [ "Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords entail lower novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. 
Such a comparison result confirms the difference between the metrics and shows the necessity for more keywords to discriminate event-related microposts from non event-related ones." ], "highlighted_evidence": [ "Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance." ] }, { "raw_evidence": [ "Evaluation. Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. We note that due to the imbalance in our datasets (20% positive microposts in CyberAttack and 27% in PoliticianDeath), accuracy is dominated by negative examples; AUC, in comparison, better characterizes the discriminative power of the model." ], "highlighted_evidence": [ "Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. " ] } ] }, { "question": "How is the keyword specific expectation elicited from the crowd?", "answers": [ { "answer": "workers are first asked to find those microposts where the model predictions are deemed correct, asked to find the keyword that best indicates the class of the microposts", "type": "extractive" } ], "q_uid": "43f074bacabd0a355b4e0f91a1afd538c0a6244f", "evidence": [ { "raw_evidence": [ "To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model. The keyword most frequently identified by the workers is then used as the initial keyword for the following iteration. In case multiple keywords are selected, e.g., the top-$N$ frequent ones, workers will be asked to perform $N$ micropost classification tasks for each keyword in the next iteration, and the model training will be performed on multiple keyword-specific expectations." ], "highlighted_evidence": [ "To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model." 
] } ] } ], "1912.08904": [ { "question": "Does the paper provide any case studies to illustrate how one can use Macaw for CIS research?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "58ef2442450c392bfc55c4dc35f216542f5f2dbb", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What functionality does Macaw provide?", "answers": [ { "answer": "Co-Reference Resolution, Query Generation, Retrieval Model, Result Generation", "type": "extractive" }, { "answer": "conversational search, conversational question answering, conversational recommendation, conversational natural language interface to structured and semi-structured data", "type": "extractive" } ], "q_uid": "78a5546e87d4d88e3d9638a0a8cd0b7debf1f09d", "evidence": [ { "raw_evidence": [ "Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.", "Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.", "Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.", "Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering." ], "highlighted_evidence": [ "Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.\n\nQuery Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.\n\nRetrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.\n\nResult Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. 
In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering." ] }, { "raw_evidence": [ "Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data. Each interaction in Macaw (from both user and system) is a Message object, thus a conversation is a list of Messages. Macaw consists of multiple actions, each action is a module that can satisfy the information needs of users for some requests. For example, search and question answering can be two actions in Macaw. Even multiple search algorithms can be also seen as multiple actions. Each action can produce multiple outputs (e.g., multiple retrieved documents). For every user interaction, Macaw runs all actions in parallel. The actions' outputs produced within a predefined time interval (i.e., an interaction timeout constant) are then post-processed. Macaw can choose one or combine multiple of these outputs and prepare an output Message object as the user's response." ], "highlighted_evidence": [ "Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data." ] } ] }, { "question": "What is a wizard of oz setup?", "answers": [ { "answer": "seeker interacts with a real conversational interface, intermediary (or the wizard) receives the seeker's message and performs different information seeking actions", "type": "extractive" }, { "answer": "a setup where the seeker interacts with a real conversational interface and the wizard, an intermediary, performs actions related to the seeker's message", "type": "abstractive" } ], "q_uid": "375b281e7441547ba284068326dd834216e55c07", "evidence": [ { "raw_evidence": [ "Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research." ], "highlighted_evidence": [ "Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis." ] }, { "raw_evidence": [ "Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. 
The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research." ], "highlighted_evidence": [ "Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research." ] } ] }, { "question": "What interface does Macaw currently have?", "answers": [ { "answer": "File IO, Standard IO, Telegram", "type": "extractive" }, { "answer": "The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps.", "type": "extractive" } ], "q_uid": "05c49b9f84772e6df41f530d86c1f7a1da6aa489", "evidence": [ { "raw_evidence": [ "We have implemented the following interfaces for Macaw:", "[leftmargin=*]", "File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.", "Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.", "Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12." ], "highlighted_evidence": [ "We have implemented the following interfaces for Macaw:\n\n[leftmargin=*]\n\nFile IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. 
This is not an interactive interface.\n\nStandard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.\n\nTelegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12." ] }, { "raw_evidence": [ "The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions." ], "highlighted_evidence": [ "The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps." ] } ] }, { "question": "What modalities are supported by Macaw?", "answers": [ { "answer": "text, speech, image, click, etc", "type": "extractive" } ], "q_uid": "6ecb69360449bb9915ac73c0a816c8ac479cbbfc", "evidence": [ { "raw_evidence": [ "The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions." ], "highlighted_evidence": [ "This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc." ] } ] }, { "question": "What are the different modules in Macaw?", "answers": [ { "answer": "Co-Reference Resolution, Query Generation, Retrieval Model, Result Generation", "type": "extractive" }, { "answer": "Co-Reference Resolution, Query Generation, Retrieval Model, Result Generation", "type": "extractive" } ], "q_uid": "68df324e5fa697baed25c761d0be4c528f7f5cf7", "evidence": [ { "raw_evidence": [ "The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. 
These actions consist of the following components:", "[leftmargin=*]", "Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.", "Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.", "Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.", "Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering." ], "highlighted_evidence": [ "These actions consist of the following components:\n\n[leftmargin=*]\n\nCo-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval.", "Query Generation: This component generates a query based on the past user-system interactions.", "Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection.", "Result Generation: The retrieved documents can be too long to be presented using some interfaces." ] }, { "raw_evidence": [ "Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth the Macaw's adoptions and extensions. Macaw is implemented in Python, thus machine learning models implemented using PyTorch, Scikit-learn, or TensorFlow can be easily integrated into Macaw. The high-level overview of Macaw is depicted in FIGREF8. The user interacts with the interface and the interface produces a Message object from the current interaction of user. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an \u201cInteraction Database\u201d. For every interaction, Macaw looks for most recent user-system interactions (including the system's responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. The output selection component selects from (or potentially combines) the outputs generated by different actions and creates a Message object as the system's response. This message is logged into the interaction database and is sent to the interface to be presented to the user. 
Again, the response message can be multi-modal and include text, speech, link, list of options, etc.", "The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components:", "[leftmargin=*]", "Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.", "Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.", "Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.", "Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering." ], "highlighted_evidence": [ "Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture.", "These actions consist of the following components:\n\n[leftmargin=*]\n\nCo-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.\n\nQuery Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.\n\nRetrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.\n\nResult Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering." 
] } ] } ], "1703.10344": [ { "question": "What baseline model is used?", "answers": [ { "answer": "For Article-Entity placement, they consider two baselines: the first one using only salience-based features, and the second baseline checks if the entity appears in the title of the article. \n\nFor Article-Section Placement, they consider two baselines: the first picks the section with the highest lexical similarity to the article, and the second one picks the most frequent section.", "type": "abstractive" }, { "answer": "B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 ., B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .\n\n, S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2, S2: Place the news into the most frequent section in INLINEFORM0", "type": "extractive" } ], "q_uid": "2ee715c7c6289669f11a79743a6b2b696073805d", "evidence": [ { "raw_evidence": [ "Here we introduce the evaluation setup and analyze the results for the article\u2013entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 .", "Baselines. We consider the following baselines for this task.", "B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .", "B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .", "Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.", "Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:", "S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2", "S2: Place the news into the most frequent section in INLINEFORM0" ], "highlighted_evidence": [ "Here we introduce the evaluation setup and analyze the results for the article\u2013entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. ", "Baselines. We consider the following baselines for this task.\n\nB1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .\n\nB2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .", "Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.", "Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. 
Therefore, the baselines we consider are the following:\n\nS1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2\n\nS2: Place the news into the most frequent section in INLINEFORM0" ] }, { "raw_evidence": [ "Baselines. We consider the following baselines for this task.", "B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .", "B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .", "Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:", "S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2", "S2: Place the news into the most frequent section in INLINEFORM0" ], "highlighted_evidence": [ "Baselines. We consider the following baselines for this task.\n\nB1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .\n\nB2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .", " Therefore, the baselines we consider are the following:\n\nS1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2\n\nS2: Place the news into the most frequent section in INLINEFORM0" ] } ] }, { "question": "What news article sources are used?", "answers": [ { "answer": " the news external references in Wikipedia", "type": "extractive" } ], "q_uid": "61a9ea36ddc37c60d1a51dabcfff9445a2225725", "evidence": [ { "raw_evidence": [ "We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions." ], "highlighted_evidence": [ "We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages." ] } ] }, { "question": "How do they determine the exact section to use the input article?", "answers": [ { "answer": "They use a multi-class classifier to determine the section it should be cited", "type": "abstractive" } ], "q_uid": "cc850bc8245a7ae790e1f59014371d4f35cd46d7", "evidence": [ { "raw_evidence": [ "We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc. However, many entity pages have an incomplete section structure. Incomplete or missing sections are due to two Wikipedia properties. First, long-tail entities miss information and sections due to their lack of popularity. Second, for all entities whether popular or not, certain sections might occur for the first time due to real world developments. 
As an example, the entity Germanwings did not have an `Accidents' section before this year's disaster, which was the first in the history of the airline.", "Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. Similar to the article-entity ground truth, here too the features compute the similarity between INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ." ], "highlighted_evidence": [ "We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc.", "Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. " ] } ] }, { "question": "What features are used to represent the novelty of news articles to entity pages?", "answers": [ { "answer": "KL-divergences of language models for the news article and the already added news references", "type": "abstractive" }, { "answer": "KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6", "type": "extractive" } ], "q_uid": "984fc3e726848f8f13dfe72b89e3770d00c3a1af", "evidence": [ { "raw_evidence": [ "An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on comparable collections to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report.", "Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 ." ], "highlighted_evidence": [ "An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3", "Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. 
Low scores indicate low novelty for the entity profile INLINEFORM10 .\n\n" ] }, { "raw_evidence": [ "Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 ." ], "highlighted_evidence": [ "Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . " ] } ] }, { "question": "What features are used to represent the salience and relative authority of entities?", "answers": [ { "answer": "Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it.", "type": "abstractive" }, { "answer": "positional features, occurrence frequency, internal POS structure of the entity and the sentence it occurs in, relative entity frequency, centrality measures like PageRank ", "type": "extractive" } ], "q_uid": "fb1227b3681c69f60eb0539e16c5a8cd784177a7", "evidence": [ { "raw_evidence": [ "Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details." ], "highlighted_evidence": [ "As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. " ] }, { "raw_evidence": [ "Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.", "Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. 
The decay corresponds to the positional index of the news paragraph. This is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs. (iii) it compares entity frequency to the frequency of its co-occurring mentions as the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0", "The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction)." ], "highlighted_evidence": [ "Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.", "Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. ", "The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction)." 
] } ] } ], "2003.13032": [ { "question": "Do they experiment with other tasks?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "8df35c24af9efc3348d3b8d746df116480dfe661", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What baselines do they introduce?", "answers": [ { "answer": "Conditional Random Fields, BiLSTM-CRF, Multi-Task Learning, BioBERT\n", "type": "extractive" }, { "answer": "Conditional Random Fields, BiLSTM-CRF, Multi-Task Learning, BioBERT", "type": "extractive" } ], "q_uid": "277a7e916e65dfefd44d2d05774f95257ac946ae", "evidence": [ { "raw_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields", "Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF", "Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning", "Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT", "Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. 
The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning." ], "highlighted_evidence": [ " Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling.", "BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling.", "Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.", "BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17." ] }, { "raw_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields", "Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF", "Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning", "Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. 
Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT", "Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning." ], "highlighted_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. ", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input.", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17." ] } ] }, { "question": "How large is the corpus?", "answers": [ { "answer": "8,275 sentences and 167,739 words in total", "type": "extractive" }, { "answer": "The corpus comprises 8,275 sentences and 167,739 words in total.", "type": "extractive" } ], "q_uid": "2916bbdb95ef31ab26527ba67961cf5ec94d6afe", "evidence": [ { "raw_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24." ], "highlighted_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total." ] }, { "raw_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. 
However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24." ], "highlighted_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24." ] } ] }, { "question": "How was annotation performed?", "answers": [ { "answer": "Experienced medical doctors used a linguistic annotation tool to annotate entities.", "type": "abstractive" }, { "answer": "WebAnno", "type": "extractive" } ], "q_uid": "f2e8497aa16327aa297a7f9f7d156e485fe33945", "evidence": [ { "raw_evidence": [ "We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports.", "The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers." ], "highlighted_evidence": [ "We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports.", "The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists." ] }, { "raw_evidence": [ "The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. 
We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers." ], "highlighted_evidence": [ "The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation." ] } ] }, { "question": "How many documents are in the new corpus?", "answers": [ { "answer": "53 documents", "type": "extractive" }, { "answer": "53 documents", "type": "extractive" } ], "q_uid": "9b76f428b7c8c9fc930aa88ee585a03478bff9b3", "evidence": [ { "raw_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24." ], "highlighted_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average." ] }, { "raw_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24." ], "highlighted_evidence": [ "The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. " ] } ] }, { "question": "What baseline systems are proposed?", "answers": [ { "answer": "Conditional Random Fields, BiLSTM-CRF, Multi-Task Learning, BioBERT", "type": "extractive" }, { "answer": "Conditional Random Fields, BiLSTM-CRF, Multi-Task Learning, BioBERT", "type": "extractive" } ], "q_uid": "dd6b378d89c05058e8f49e48fd48f5c458ea2ebc", "evidence": [ { "raw_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields", "Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF", "Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. 
He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning", "Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT", "Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning." ], "highlighted_evidence": [ "Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling.", "BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling.", "Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.", "BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17." 
] }, { "raw_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT" ], "highlighted_evidence": [ "Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields", "Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF", "Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning", "Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT" ] } ] } ], "1910.06592": [ { "question": "How did they obtain the dataset?", "answers": [ { "answer": "For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy", "type": "extractive" }, { "answer": "public resources where suspicious Twitter accounts were annotated, list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy", "type": "extractive" } ], "q_uid": "e35c2fa99d5c84d8cb5d83fca2b434dcd83f3851", "evidence": [ { "raw_evidence": [ "Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset." ], "highlighted_evidence": [ "We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1.", "On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties." ] }, { "raw_evidence": [ "Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. 
For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset." ], "highlighted_evidence": [ "We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API." ] } ] }, { "question": "What activation function do they use in their model?", "answers": [ { "answer": "relu, selu, tanh", "type": "extractive" }, { "answer": "Activation function is hyperparameter. Possible values: relu, selu, tanh.", "type": "abstractive" } ], "q_uid": "c00ce1e3be14610fb4e1f0614005911bb5ff0302", "evidence": [ { "raw_evidence": [ "Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance." ], "highlighted_evidence": [ " We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16)" ] }, { "raw_evidence": [ "Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance." 
], "highlighted_evidence": [ "We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16)." ] } ] }, { "question": "What baselines do they compare to?", "answers": [ { "answer": "LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), FacTweet (tweet-level), Top-$k$ replies, likes, or re-tweets", "type": "extractive" }, { "answer": "Top-$k$ replies, likes, or re-tweets, FacTweet (tweet-level), LR + All Features (chunk-level), LR + All Features (tweet-level), Tweet2vec, LR + Bag-of-words", "type": "extractive" } ], "q_uid": "71fe5822d9fccb1cb391c11283b223dc8aa1640c", "evidence": [ { "raw_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:", "[leftmargin=4mm]", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.", "Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.", "LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.", "LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.", "FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.", "Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ], "highlighted_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:\n\n[leftmargin=4mm]\n\nLR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. 
Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ] }, { "raw_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.", "Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.", "LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.", "LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.", "FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.", "Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ], "highlighted_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. 
Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ] } ] }, { "question": "How are chunks defined?", "answers": [ { "answer": "Chunks is group of tweets from single account that is consecutive in time - idea is that this group can show secret intention of malicious accounts.", "type": "abstractive" }, { "answer": "sequence of $s$ tweets", "type": "abstractive" } ], "q_uid": "97d0f9a1540a48e0b4d30d7084a8c524dd09a4c3", "evidence": [ { "raw_evidence": [ "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks.", "The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts." ], "highlighted_evidence": [ "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account.", "Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. 
Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts." ] }, { "raw_evidence": [ "Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors." ], "highlighted_evidence": [ "These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets." ] } ] }, { "question": "What features are extracted?", "answers": [ { "answer": "Sentiment, Morality, Style, Words embeddings", "type": "extractive" }, { "answer": "15 emotion types, sentiment classes, positive and negative, care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation, count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions, uppercase ratio, tweet length, words embeddings", "type": "extractive" } ], "q_uid": "1062a0506c3691a93bb914171c2701d2ae9621cb", "evidence": [ { "raw_evidence": [ "Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.", "[leftmargin=4mm]", "Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.", "Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.", "Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).", "Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.", "Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings." ], "highlighted_evidence": [ "Features. 
We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.\n\n[leftmargin=4mm]\n\nEmotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.\n\nSentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.\n\nMorality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).\n\nStyle: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.\n\nWords embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings." ] }, { "raw_evidence": [ "Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.", "Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.", "Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).", "Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.", "Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings." ], "highlighted_evidence": [ "Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. 
", "Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.", "Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).", "Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.", "Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model." ] } ] }, { "question": "Was the approach used in this work to detect fake news fully supervised?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "483a699563efcb8804e1861b18809279f21c7610", "evidence": [ { "raw_evidence": [ "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks." ], "highlighted_evidence": [ "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account." ] } ] }, { "question": "Based on this paper, what is the more predictive set of features to detect fake news?", "answers": [ { "answer": "words embeddings, style, and morality features", "type": "extractive" }, { "answer": "words embeddings, style, and morality features", "type": "extractive" } ], "q_uid": "d3ff2986ca8cb85a9a5cec039c266df756947b43", "evidence": [ { "raw_evidence": [ "Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. 
In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features \u2013 how semantic and stylistic signature evolve across time and change across the accounts' timelines \u2013 for future work." ], "highlighted_evidence": [ "The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement)" ] }, { "raw_evidence": [ "Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features \u2013 how semantic and stylistic signature evolve across time and change across the accounts' timelines \u2013 for future work." ], "highlighted_evidence": [ "The results indicate that most performance gains come from words embeddings, style, and morality features." ] } ] }, { "question": "How is a \"chunk of posts\" defined in this work?", "answers": [ { "answer": "chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account", "type": "extractive" }, { "answer": "sequence of $s$ tweets", "type": "abstractive" } ], "q_uid": "2317ca8d475b01f6632537b95895608dc40c4415", "evidence": [ { "raw_evidence": [ "The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. 
Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.", "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks." ], "highlighted_evidence": [ "Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions.", "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account." ] }, { "raw_evidence": [ "Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors." ], "highlighted_evidence": [ "These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets." ] } ] }, { "question": "What baselines were used in this work?", "answers": [ { "answer": "LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), FacTweet (tweet-level), Top-$k$ replies, likes, or re-tweets", "type": "extractive" }, { "answer": "LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), FacTweet (tweet-level), Top-$k$ replies, likes, or re-tweets", "type": "extractive" } ], "q_uid": "3e88fb3d28593309a307eb97e875575644a01463", "evidence": [ { "raw_evidence": [ "Baselines. 
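The chunk-based input representation quoted above (sort an account's tweets by date, split them into chunks of $s$ tweets, classify each chunk, then label the account by the majority class over its chunks) can be sketched as follows. The recurrent chunk classifier is abstracted away, and the tweet objects are assumed to carry a `date` field; both are illustrative assumptions rather than details taken from the paper.

```python
from collections import Counter

def make_chunks(tweets, s):
    """Sort an account's tweets by posting date and split them into chunks of s tweets."""
    ordered = sorted(tweets, key=lambda t: t["date"])  # assumes a 'date' field
    return [ordered[i:i + s] for i in range(0, len(ordered), s)]

def predict_account(tweets, s, chunk_classifier):
    """Label an account by majority vote over its chunk-level predictions.

    `chunk_classifier` stands in for the recurrent model with attention described
    above: it maps one chunk (a list of tweets) to a predicted account type.
    """
    chunk_labels = [chunk_classifier(chunk) for chunk in make_chunks(tweets, s)]
    return Counter(chunk_labels).most_common(1)[0][0]
```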
We compare our approach (FacTweet) to the following set of baselines:", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.", "Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.", "LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.", "LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.", "FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.", "Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ], "highlighted_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ] }, { "raw_evidence": [ "Baselines. 
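The LR + Bag-of-words baseline listed above is easy to approximate with scikit-learn. The sketch below aggregates each account's tweets into a single document before vectorizing, which is an assumption about how the aggregation is done; it is a rough stand-in, not the authors' implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def bow_lr_baseline(account_tweets, labels):
    """account_tweets: list of lists of tweet strings (one inner list per account)."""
    docs = [" ".join(tweets) for tweets in account_tweets]  # aggregate the feed
    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(docs, labels)
    return model
```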
We compare our approach (FacTweet) to the following set of baselines:", "[leftmargin=4mm]", "LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.", "Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.", "LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.", "LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.", "FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.", "Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline." ], "highlighted_evidence": [ "Baselines. We compare our approach (FacTweet) to the following set of baselines:\n\n[leftmargin=4mm]\n\nLR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20.", "LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. ", "LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. ", "Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21." ] } ] } ], "1706.01678": [ { "question": "Which evaluation methods are used?", "answers": [ { "answer": "Quantitative evaluation methods using ROUGE, Recall, Precision and F1.", "type": "abstractive" }, { "answer": "standard ROGUE metric, Recall, Precision and INLINEFORM0 scores for ROGUE-1, INLINEFORM2 scores for ROGUE-2 and ROGUE-L", "type": "extractive" } ], "q_uid": "e8f969ffd637b82d04d3be28c51f0f3ca6b3883e", "evidence": [ { "raw_evidence": [ "For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method. ROGUE-1 Recall and Precision are measured for uni-gram overlap between the reference and the predicted summary. 
On the other hand, ROGUE-2 uses bi-gram overlap while ROGUE-L uses the longest common sequence between the target and the predicted summaries for evaluation. In rest of this section, we provide methods to analyze and evaluate our pipeline at each step." ], "highlighted_evidence": [ "For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1." ] }, { "raw_evidence": [ "For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method. ROGUE-1 Recall and Precision are measured for uni-gram overlap between the reference and the predicted summary. On the other hand, ROGUE-2 uses bi-gram overlap while ROGUE-L uses the longest common sequence between the target and the predicted summaries for evaluation. In rest of this section, we provide methods to analyze and evaluate our pipeline at each step." ], "highlighted_evidence": [ "For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method." ] } ] }, { "question": "What dataset is used in this paper?", "answers": [ { "answer": "AMR Bank, CNN-Dailymail", "type": "extractive" }, { "answer": "AMR Bank BIBREF10, CNN-Dailymail ( BIBREF11 BIBREF12 )", "type": "extractive" } ], "q_uid": "46227b4265f1d300a5ed71bf40822829de662bc2", "evidence": [ { "raw_evidence": [ "We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively." ], "highlighted_evidence": [ "We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). " ] }, { "raw_evidence": [ "We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively." ], "highlighted_evidence": [ "We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 )." 
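ROUGE-1, as described in the excerpts above, measures unigram overlap between the predicted and reference summaries, and ROUGE-2 does the same with bigrams. The following is a simplified illustration of that precision/recall/F1 computation, not the official ROUGE toolkit (which additionally handles stemming, multiple references, and ROUGE-L).

```python
from collections import Counter

def rouge_n(predicted, reference, n=1):
    """Simplified ROUGE-N: clipped n-gram overlap between two token lists."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    pred, ref = ngrams(predicted), ngrams(reference)
    overlap = sum((pred & ref).values())            # clipped n-gram matches
    precision = overlap / max(sum(pred.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return {"p": precision, "r": recall, "f": f1}
```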
] } ] }, { "question": "Which other methods do they compare with?", "answers": [ { "answer": "Lead-3, Lead-1-AMR", "type": "extractive" }, { "answer": "Lead-3 model, Lead-1-AMR, BIBREF0 ", "type": "extractive" } ], "q_uid": "a6a48de63c1928238b37c2a01c924b852fe752f8", "evidence": [ { "raw_evidence": [ "For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.", "For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator." ], "highlighted_evidence": [ "For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally.", "For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline." ] }, { "raw_evidence": [ "For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.", "For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator.", "In order to compare our summary graph extraction step with the previous work ( BIBREF0 ), we generate the final summary using the same generation method as used by them. Their method uses a simple module based on alignments for generating summary after step-2. The alignments simply map the words in the original sentence with the node or edge in the AMR graph. To generate the summary we find the words aligned with the sentence in the selected graph and output them in no particular order as the predicted summary. Though this does not generate grammatically correct sentences, we can still use the ROGUE-1 metric similar to BIBREF0 , as it is based on comparing uni-grams between the target and predicted summaries." ], "highlighted_evidence": [ "For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally.", "For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. ", "In order to compare our summary graph extraction step with the previous work ( BIBREF0 ), we generate the final summary using the same generation method as used by them." 
] } ] }, { "question": "How are sentences selected from the summary graph?", "answers": [ { "answer": " finding the important sentences from the story, extracting the key information from those sentences using their AMR graphs", "type": "extractive" }, { "answer": " Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities).", "type": "abstractive" } ], "q_uid": "b65a83a24fc66728451bb063cf6ec50134c8bfb0", "evidence": [ { "raw_evidence": [ "After parsing (Step 1) we have the AMR graphs for the story sentences. In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs." ], "highlighted_evidence": [ "In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs." ] }, { "raw_evidence": [ "Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection." ], "highlighted_evidence": [ "Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection." 
] } ] } ], "1902.09666": [ { "question": "What models are used in the experiment?", "answers": [ { "answer": "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "type": "extractive" }, { "answer": "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "type": "extractive" }, { "answer": "linear SVM trained on word unigrams, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN) ", "type": "extractive" } ], "q_uid": "8c852fc29bda014d28c3ee5b5a7e449ab9152d35", "evidence": [ { "raw_evidence": [ "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ], "highlighted_evidence": [ "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ] }, { "raw_evidence": [ "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. 
We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ], "highlighted_evidence": [ "Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead.", "Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ] }, { "raw_evidence": [ "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ], "highlighted_evidence": [ "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM." ] } ] }, { "question": "What are the differences between this dataset and pre-existing ones?", "answers": [ { "answer": "no prior work has explored the target of the offensive language", "type": "extractive" } ], "q_uid": "682e26262abba473412f68cbeb5f69aa3b9968d7", "evidence": [ { "raw_evidence": [ "Recently, Waseem et. al. 
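The BiLSTM classifier described above (input embeddings, a bidirectional LSTM, average pooling over the input features, concatenation, a dense layer, and a softmax output) can be sketched in Keras roughly as below. For brevity this uses a single trainable embedding channel rather than the paper's two input channels (pre-trained plus updatable), and the layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bilstm(vocab_size, emb_dim=300, lstm_units=128, num_classes=2):
    tokens = layers.Input(shape=(None,), dtype="int32")
    emb = layers.Embedding(vocab_size, emb_dim, mask_zero=True)(tokens)

    lstm_out = layers.Bidirectional(layers.LSTM(lstm_units))(emb)  # BiLSTM summary
    avg_pool = layers.GlobalAveragePooling1D()(emb)                # average of inputs

    merged = layers.Concatenate()([lstm_out, avg_pool])
    hidden = layers.Dense(128, activation="relu")(merged)
    probs = layers.Dense(num_classes, activation="softmax")(hidden)

    model = Model(tokens, probs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```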
( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target." ], "highlighted_evidence": [ "However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target." ] } ] }, { "question": "In what language are the tweets?", "answers": [ { "answer": "English", "type": "extractive" }, { "answer": "English ", "type": "extractive" }, { "answer": "English", "type": "extractive" } ], "q_uid": "5daeb8d4d6f3b8543ec6309a7a35523e160437eb", "evidence": [ { "raw_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:" ], "highlighted_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets." ] }, { "raw_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:" ], "highlighted_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets. " ] }, { "raw_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:" ], "highlighted_evidence": [ "Using this annotation model, we create a new large publicly available dataset of English tweets. " ] } ] }, { "question": "What kinds of offensive content are explored?", "answers": [ { "answer": "non-targeted profanity and swearing, targeted insults such as cyberbullying, offensive content related to ethnicity, gender or sexual orientation, political affiliation, religious belief, and anything belonging to hate speech", "type": "abstractive" }, { "answer": "Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others , Untargeted (UNT): Posts containing non-targeted profanity and swearing.", "type": "extractive" }, { "answer": "offensive (OFF) and non-offensive (NOT), targeted (TIN) and untargeted (INT) insults, targets of insults and threats as individual (IND), group (GRP), and other (OTH)", "type": "extractive" } ], "q_uid": "d015faf0f8dcf2e15c1690bbbe2bf1e7e0ce3751", "evidence": [ { "raw_evidence": [ "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);", "Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language.", "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).", "Individual (IND): Posts targeting an individual. 
It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling.", "Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.", "Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue)." ], "highlighted_evidence": [ "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.\n\nTargeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);\n\nUntargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language.", "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).\n\nIndividual (IND): Posts targeting an individual. It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling.\n\nGroup (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.\n\nOther (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue)." ] }, { "raw_evidence": [ "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);", "Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language." ], "highlighted_evidence": [ "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.\n\nTargeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);\n\nUntargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language." ] }, { "raw_evidence": [ "In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .", "Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.", "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)." 
], "highlighted_evidence": [ "In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language.", "Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.", "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)." ] } ] }, { "question": "What is the best performing model?", "answers": [ { "answer": "CNN ", "type": "extractive" } ], "q_uid": "55bd59076a49b19d3283af41c5e3ccb875f3eb0c", "evidence": [ { "raw_evidence": [ "The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.", "The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT)." ], "highlighted_evidence": [ "The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.", "The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT)." ] } ] }, { "question": "How many annotators participated?", "answers": [ { "answer": "five annotators", "type": "extractive" } ], "q_uid": "521280a87c43fcdf9f577da235e7072a23f0673e", "evidence": [ { "raw_evidence": [ "The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. 
In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ." ], "highlighted_evidence": [ "We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. " ] } ] }, { "question": "What is the definition of offensive language?", "answers": [ { "answer": " Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 .", "type": "extractive" } ], "q_uid": "5a8cc8f80509ea77d8213ed28c5ead501c68c725", "evidence": [ { "raw_evidence": [ "Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . 
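Inter-annotator agreement above is reported as Fleiss' kappa (.83 on the 21 trial tweets rated by five annotators). For reference, a compact implementation of the statistic over a per-item category-count matrix looks like this; it is a generic sketch, not the authors' evaluation script.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    counts[i, j] = number of annotators assigning item i to category j; every
    item is assumed to be rated by the same number of annotators (e.g. a 21x2
    matrix for 21 tweets, 5 annotators, and the OFF/NOT labels of Layer A).
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]

    p_j = counts.sum(axis=0) / (n_items * n_raters)                  # category shares
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))

    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)
```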
Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 ." ], "highlighted_evidence": [ "Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 ." ] } ] }, { "question": "What are the three layers of the annotation scheme?", "answers": [ { "answer": "Level A: Offensive language Detection\n, Level B: Categorization of Offensive Language\n, Level C: Offensive Language Target Identification\n", "type": "extractive" } ], "q_uid": "290ee79b5e3872e0496a6a0fc9b103ab7d8f6c30", "evidence": [ { "raw_evidence": [ "In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .", "Level A: Offensive language Detection", "Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.", "Level B: Categorization of Offensive Language", "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Level C: Offensive Language Target Identification", "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)." ], "highlighted_evidence": [ "n the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .", "Level A: Offensive language Detection\nLevel A discriminates between offensive (OFF) and non-offensive (NOT) tweets.", "Level B: Categorization of Offensive Language\nLevel B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "Level C: Offensive Language Target Identification\nLevel C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)." ] } ] } ], "1604.00400": [ { "question": "Do the authors report results only on English data?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "c49ee6ac4dc812ff84d255886fd5aff794f53c39", "evidence": [ { "raw_evidence": [ "To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries.", "To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. Consider the following example:", "Endogeneous small RNAs (miRNA) were genetically screened and studied to find the miRNAs which are related to tumorigenesis." 
], "highlighted_evidence": [ "To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries.", "Consider the following example:\n\nEndogeneous small RNAs (miRNA) were genetically screened and studied to find the miRNAs which are related to tumorigenesis." ] } ] }, { "question": "In the proposed metric, how is content relevance measured?", "answers": [ { "answer": "The content relevance between the candidate summary and the human summary is evaluated using information retrieval - using the summaries as search queries and compare the overlaps of the retrieved results. ", "type": "abstractive" }, { "answer": "On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval.", "type": "extractive" } ], "q_uid": "3f856097be2246bde8244add838e83a2c793bd17", "evidence": [ { "raw_evidence": [ "Our proposed metric is based on analysis of the content relevance between a system generated summary and the corresponding human written gold-standard summaries. On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard. This method, enables us to also reward for terms that are not lexically equivalent but semantically related. Our method is based on the well established linguistic premise that semantically related words occur in similar contexts BIBREF5 . The context of the words can be considered as surrounding words, sentences in which they appear or the documents. For scientific summarization, we consider the context of the words as the scientific articles in which they appear. Thus, if two concepts appear in identical set of articles, they are semantically related. We consider the two summaries as similar if they refer to same set of articles even if the two summaries do not have high lexical overlaps. To capture if a summary relates to a article, we use information retrieval by considering the summaries as queries and the articles as documents and we rank the articles based on their relatedness to a given summary. For a given pair of system summary and the gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus the system summary is of higher quality." ], "highlighted_evidence": [ " On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. " ] }, { "raw_evidence": [ "Our proposed metric is based on analysis of the content relevance between a system generated summary and the corresponding human written gold-standard summaries. On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. 
Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard. This method, enables us to also reward for terms that are not lexically equivalent but semantically related. Our method is based on the well established linguistic premise that semantically related words occur in similar contexts BIBREF5 . The context of the words can be considered as surrounding words, sentences in which they appear or the documents. For scientific summarization, we consider the context of the words as the scientific articles in which they appear. Thus, if two concepts appear in identical set of articles, they are semantically related. We consider the two summaries as similar if they refer to same set of articles even if the two summaries do not have high lexical overlaps. To capture if a summary relates to a article, we use information retrieval by considering the summaries as queries and the articles as documents and we rank the articles based on their relatedness to a given summary. For a given pair of system summary and the gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus the system summary is of higher quality." ], "highlighted_evidence": [ "On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard." ] } ] }, { "question": "What manual Pyramid scores are used?", "answers": [ { "answer": " higher tiers of the pyramid", "type": "extractive" }, { "answer": "following the pyramid framework, we design an annotation scheme", "type": "extractive" } ], "q_uid": "74e866137b3452ec50fb6feaf5753c8637459e62", "evidence": [ { "raw_evidence": [ "In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid." ], "highlighted_evidence": [ " In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. " ] }, { "raw_evidence": [ "In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. 
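Concretely, the relevance analysis described above issues the candidate summary and each gold summary as queries over a collection of scientific articles and compares what they retrieve. The sketch below abstracts the retrieval engine behind a hypothetical `search(query, k)` function and scores the overlap of the top-k results; it is a schematic reading of the metric, not its exact published formulation.

```python
def sera_score(candidate, gold_summaries, search, k=10):
    """Simplified SERA-style score: overlap of top-k retrieved articles.

    `search(query, k)` is assumed to return a ranked list of article ids for a
    text query. The score is the candidate/gold result overlap, averaged over
    the available gold summaries.
    """
    cand_results = set(search(candidate, k))
    scores = []
    for gold in gold_summaries:
        gold_results = set(search(gold, k))
        scores.append(len(cand_results & gold_results) / k)
    return sum(scores) / len(scores)
```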
The content quality of a given candidate summary is evaluated with respect to this pyramid.", "To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. Consider the following example:" ], "highlighted_evidence": [ "Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid.", "To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. " ] } ] }, { "question": "What is the common belief that this paper refutes? (c.f. 'contrary to the common belief, ROUGE is not much [sic] reliable'", "answers": [ { "answer": "correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization", "type": "extractive" } ], "q_uid": "184b0082e10ce191940c1d24785b631828a9f714", "evidence": [ { "raw_evidence": [ "Scientific summarization has attracted more attention recently (examples include works by abu2011coherent, qazvinian2013generating, and cohan2015scientific). Thus, it is important to study the validity of existing methodologies applied to the evaluation of news article summarization for this task. In particular, we raise the important question of how effective is Rouge, as an evaluation metric for scientific summarization? We answer this question by comparing Rouge scores with semi-manual evaluation score (Pyramid) in TAC 2014 scientific summarization dataset[1]. Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear. We then propose an evaluation metric based on relevance analysis of summaries which aims to overcome the limitation of high lexical dependence in Rouge. We call our metric Sera (Summarization Evaluation by Relevance Analysis). Results show that the proposed metric achieves higher and more consistent correlations with semi-manual assessment scores." ], "highlighted_evidence": [ "Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear." ] } ] } ], "1905.12801": [ { "question": "which existing strategies are compared?", "answers": [ { "answer": "CDA, REG", "type": "extractive" } ], "q_uid": "c59078efa7249acfb9043717237c96ae762c0a8c", "evidence": [ { "raw_evidence": [ "Initially, we measure the co-occurrence bias in the training data. After training the baseline model, we implement our loss function and tune for the INLINEFORM0 hyperparameter. 
We test the existing debiasing approaches, CDA and REG, as well but since BIBREF5 reported that results fluctuate substantially with different REG regularization coefficients, we perform hyperparameter tuning and report the best results in Table TABREF12 . Additionally, we implement a combination of our loss function and CDA and tune for INLINEFORM1 . Finally, bias evaluation is performed for all the trained models. Causal occupation bias is measured directly from the models using template datasets discussed above and co-occurrence bias is measured from the model-generated texts, which consist of 10,000 documents of 500 words each." ], "highlighted_evidence": [ "We test the existing debiasing approaches, CDA and REG, as well but since BIBREF5 reported that results fluctuate substantially with different REG regularization coefficients, we perform hyperparameter tuning and report the best results in Table TABREF12 ." ] } ] }, { "question": "what dataset was used?", "answers": [ { "answer": "Daily Mail news articles released by BIBREF9 ", "type": "extractive" }, { "answer": "Daily Mail news articles", "type": "extractive" } ], "q_uid": "73bddaaf601a4f944a3182ca0f4de85a19cdc1d2", "evidence": [ { "raw_evidence": [ "For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total." ], "highlighted_evidence": [ "For the training data, we use Daily Mail news articles released by BIBREF9 . " ] }, { "raw_evidence": [ "For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total." ], "highlighted_evidence": [ "For the training data, we use Daily Mail news articles released by BIBREF9 ." ] } ] }, { "question": "what kinds of male and female words are looked at?", "answers": [ { "answer": "gendered word pairs like he and she", "type": "extractive" } ], "q_uid": "d4e5e3f37679ff68914b55334e822ea18e60a6cf", "evidence": [ { "raw_evidence": [ "Language modelling is a pivotal task in NLP with important downstream applications such as text generation BIBREF4 . Recent studies by BIBREF0 and BIBREF5 have shown that this task is vulnerable to gender bias in the training corpus. Two prior works focused on reducing bias in language modelling by data preprocessing BIBREF0 and word embedding debiasing BIBREF5 . In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. Although we recognize that gender is non-binary, for the purpose of this study, we focus on female and male words." ], "highlighted_evidence": [ "In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. 
" ] } ] }, { "question": "how is mitigation of gender bias evaluated?", "answers": [ { "answer": "Using INLINEFORM0 and INLINEFORM1", "type": "abstractive" } ], "q_uid": "5f60defb546f35d25a094ff34781cddd4119e400", "evidence": [ { "raw_evidence": [ "Results for the experiments are listed in Table TABREF12 . It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by INLINEFORM0 and INLINEFORM1 . From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity. At INLINEFORM2 value of 1, it reduces INLINEFORM3 by 58.95%, INLINEFORM4 by 45.74%, INLINEFORM5 by 100%, INLINEFORM6 by 98.52% and INLINEFORM7 by 98.98%. Compared to the results of CDA and REG, it achieves the best results in both occupation biases, INLINEFORM8 and INLINEFORM9 , and INLINEFORM10 . We notice that all methods result in INLINEFORM11 around 1, indicating that there are near equal amounts of female and male words in the generated texts. In our experiments we note that with increasing INLINEFORM12 , the bias steadily decreases and perplexity tends to slightly increase. This indicates that there is a trade-off between bias and perplexity." ], "highlighted_evidence": [ "It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by INLINEFORM0 and INLINEFORM1 . From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity." ] } ] }, { "question": "what bias evaluation metrics are used?", "answers": [ { "answer": "gender bias, normalized version of INLINEFORM0, ratio of occurrence of male and female words in the model generated text, Causal occupation bias conditioned on occupation, causal occupation bias conditioned on gender, INLINEFORM1", "type": "extractive" } ], "q_uid": "90d946ccc3abf494890e147dd85bd489b8f3f0e8", "evidence": [ { "raw_evidence": [ "Co-occurrence bias is computed from the model-generated texts by comparing the occurrences of all gender-neutral words with female and male words. A word is considered to be biased towards a certain gender if it occurs more frequently with words of that gender. This definition was first used by BIBREF7 and later adapted by BIBREF5 . Using the definition of gender bias similar to the one used by BIBREF5 , we define gender bias as INLINEFORM0", "where INLINEFORM0 is a set of gender-neutral words, and INLINEFORM1 is the occurrences of a word INLINEFORM2 with words of gender INLINEFORM3 in the same window. This score is designed to capture unequal co-occurrences of neutral words with male and female words. Co-occurrences are computed using a sliding window of size 10 extending equally in both directions. Furthermore, we only consider words that occur more than 20 times with gendered words to exclude random effects.", "We also evaluate a normalized version of INLINEFORM0 which we denote by conditional co-occurrence bias, INLINEFORM1 . This is defined as INLINEFORM2", "INLINEFORM0 is less affected by the disparity in the general distribution of male and female words in the text. The disparity between the occurrences of the two genders means that text is more inclined to mention one over the other, so it can also be considered a form of bias. 
We report the ratio of occurrence of male and female words in the model generated text, INLINEFORM1 , as INLINEFORM2", "Causal occupation bias conditioned on occupation is represented as INLINEFORM0", "Here, the vertical bar separates the seed sequence that is fed into the language models from the target occupation, for which we observe the output softmax probability. We measure causal occupation bias conditioned on gender as INLINEFORM0", "where INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set. For example, INLINEFORM2 is the softmax probability of the word INLINEFORM3 where the seed sequence is He is a. The second set of templates like below, aims to capture how the probabilities of gendered words depend on the occupation words in the seed. INLINEFORM4", "Our debiasing approach does not explicitly address the bias in the embedding layer. Therefore, we use gender-neutral occupations to measure the embedding bias to observe if debiasing the output layer also decreases the bias in the embedding. We define the embedding bias, INLINEFORM0 , as the difference between the Euclidean distance of an occupation word to male words and the distance of the occupation word to the female counterparts. This definition is equivalent to bias by projection described by BIBREF6 . We define INLINEFORM1 as INLINEFORM2", "where INLINEFORM0 is a set of gender-neutral occupations, INLINEFORM1 is the size of the gender pairs set and INLINEFORM2 is the word-to-vector dictionary." ], "highlighted_evidence": [ "Using the definition of gender bias similar to the one used by BIBREF5 , we define gender bias as INLINEFORM0\n\nwhere INLINEFORM0 is a set of gender-neutral words, and INLINEFORM1 is the occurrences of a word INLINEFORM2 with words of gender INLINEFORM3 in the same window. ", "We also evaluate a normalized version of INLINEFORM0 which we denote by conditional co-occurrence bias, INLINEFORM1 . ", "We report the ratio of occurrence of male and female words in the model generated text, INLINEFORM1 , as INLINEFORM2", "Causal occupation bias conditioned on occupation is represented as INLINEFORM0", "We measure causal occupation bias conditioned on gender as INLINEFORM0\n\nwhere INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set.", "We define INLINEFORM1 as INLINEFORM2\n\nwhere INLINEFORM0 is a set of gender-neutral occupations, INLINEFORM1 is the size of the gender pairs set and INLINEFORM2 is the word-to-vector dictionary." ] } ] } ], "1810.12196": [ { "question": "What kind of questions are present in the dataset?", "answers": [ { "answer": "These 8 tasks require different competencies and a different level of understanding of the document to be well answered", "type": "extractive" } ], "q_uid": "b962cc817a4baf6c56150f0d97097f18ad6cd9ed", "evidence": [ { "raw_evidence": [ "We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). 
It can be an additional tool to analyze the errors of the readers." ], "highlighted_evidence": [ "We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task." ] } ] }, { "question": "What baselines are presented?", "answers": [ { "answer": "Logistic regression, LSTM, End-to-end memory networks, Deep projective reader", "type": "extractive" }, { "answer": "Logistic regression, LSTM, End-to-end memory networks, Deep projective reader", "type": "extractive" } ], "q_uid": "fb5fb11e7d01b9f9efe3db3417b8faf4f8d6931f", "evidence": [ { "raw_evidence": [ "Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. It produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities.", "LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Then we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.", "End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is then used to re-weight the representation of the document in the output memory. This response from the output memory is then utilized to update the controller. After that, either a matrix is used to project this representation into the answer space or the controller is used to go through another hop of memory. This architecture allows the model to sequentially look into the initial document seeking important information regarding the current state of its controller. This model achieves very good performance on the 20 bAbI tasks dataset.", "Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention, a self-attention layer and a projection layer. The following paragraphs briefly describe the overall utility of each of these layers." ], "highlighted_evidence": [ "Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question.", "LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Then we feed an LSTM network with this vector and use the final state as the representation of the input. 

Finally, we apply a logistic regression over this representation to produce the final decision.", "End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document.", "Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 ." ] }, { "raw_evidence": [ "In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. The first one is a basic LSTM BIBREF20 , the second a MemN2N BIBREF18 and the third one is a model of our own design. This fourth model reuses the encoding layers of the R-net BIBREF12 and we modify the final layers with a projection layer that will be able to select the answer among the set of candidates instead of pointing to the answer directly in the source document.", "Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. It produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities.", "LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Then we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.", "End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is then used to re-weight the representation of the document in the output memory. This response from the output memory is then utilized to update the controller. After that, either a matrix is used to project this representation into the answer space or the controller is used to go through another hop of memory. This architecture allows the model to sequentially look into the initial document seeking important information regarding the current state of its controller. This model achieves very good performance on the 20 bAbI tasks dataset.", "Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention, a self-attention layer and a projection layer. The following paragraphs briefly describe the overall utility of each of these layers." ], "highlighted_evidence": [ "In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. ", "Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. ", "LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Then we feed an LSTM network with this vector and use the final state as the representation of the input. 

Finally, we apply a logistic regression over this representation to produce the final decision.\n\n", "End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. ", "Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 ." ] } ] }, { "question": "What language are the reviews in?", "answers": [ { "answer": "English", "type": "extractive" } ], "q_uid": "2236386729105f5cf42f73cc055ce3acdea2d452", "evidence": [ { "raw_evidence": [ "In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model that will translate our human-generated questions into another language, and then translate them back to English. This double translation will introduce rewordings of the questions that we will be able to integrate into this dataset. This approach has been used in BIBREF7 to perform data augmentation on the training set. For this purpose, we have trained a fairseq BIBREF19 model to translate sentences from English to French and for French to English. In order to preserve the quality of the sentences we have so far, we only keep the most probable translation of each original sentence. Indeed a beam search is used during the translation to predict the most probable translations which mean that we each translation comes with an associated probability. By selecting only the first translations, we almost double the number of questions without degrading the quality of the questions proposed in the dataset." ], "highlighted_evidence": [ "In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model that will translate our human-generated questions into another language, and then translate them back to English. " ] } ] }, { "question": "Where are the hotel reviews from?", "answers": [ { "answer": "TripAdvisor", "type": "extractive" }, { "answer": "TripAdvisor", "type": "extractive" } ], "q_uid": "18942ab8c365955da3fd8fc901dfb1a3b65c1be1", "evidence": [ { "raw_evidence": [ "Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. The second step is the ability of reasoning with the relevant information extracted from a document. To set up this challenge, we propose to leverage on a hotel reviews corpus that requires reasoning skills to answer natural language questions. The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seventh available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value and for all the reviews an Overall rating. In this articles we propose to exploit these data to create a dataset of question-answering that will challenge 8 competencies of the reader." ], "highlighted_evidence": [ "The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . 
" ] }, { "raw_evidence": [ "Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. The second step is the ability of reasoning with the relevant information extracted from a document. To set up this challenge, we propose to leverage on a hotel reviews corpus that requires reasoning skills to answer natural language questions. The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seventh available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value and for all the reviews an Overall rating. In this articles we propose to exploit these data to create a dataset of question-answering that will challenge 8 competencies of the reader." ], "highlighted_evidence": [ " The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 ." ] } ] } ], "1707.05236": [ { "question": "What was the baseline used?", "answers": [ { "answer": "error detection system by Rei2016", "type": "extractive" }, { "answer": "error detection system by Rei2016", "type": "extractive" } ], "q_uid": "7b4992e2d26577246a16ac0d1efc995ab4695d24", "evidence": [ { "raw_evidence": [ "The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision \u2013 this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset." ], "highlighted_evidence": [ "For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset." ] }, { "raw_evidence": [ "The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision \u2013 this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset." ], "highlighted_evidence": [ "For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset." 
] } ] }, { "question": "What textual patterns are extracted?", "answers": [ { "answer": "(VVD shop_VV0 II, VVD shopping_VVG II)", "type": "extractive" }, { "answer": "patterns for generating all types of errors", "type": "extractive" } ], "q_uid": "9a9d225f9ac35ed35ea02f554f6056af3b42471d", "evidence": [ { "raw_evidence": [ "For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern:", "(VVD shop_VV0 II, VVD shopping_VVG II)", "After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset." ], "highlighted_evidence": [ "For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern:\n\n(VVD shop_VV0 II, VVD shopping_VVG II)\n\nAfter collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. " ] }, { "raw_evidence": [ "We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.", "The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching." ], "highlighted_evidence": [ "We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.", "The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. 
If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching." ] } ] }, { "question": "Which annotated corpus did they use?", "answers": [ { "answer": " FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) ", "type": "extractive" }, { "answer": "FCE , two alternative annotations of the CoNLL 2014 Shared Task dataset", "type": "extractive" } ], "q_uid": "ea56148a8356a1918bedcf0a99ae667c27792cfe", "evidence": [ { "raw_evidence": [ "We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors." ], "highlighted_evidence": [ "We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . " ] }, { "raw_evidence": [ "We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data.", "We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors." ], "highlighted_evidence": [ "We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. ", "We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 ." ] } ] }, { "question": "Which languages are explored in this paper?", "answers": [ { "answer": "English ", "type": "extractive" }, { "answer": "English ", "type": "extractive" } ], "q_uid": "cd32a38e0f33b137ab590e1677e8fb073724df7f", "evidence": [ { "raw_evidence": [ "We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. 
Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data." ], "highlighted_evidence": [ ". Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens)." ] }, { "raw_evidence": [ "We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data." ], "highlighted_evidence": [ "We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data." ] } ] } ], "1810.04428": [ { "question": "what language does this paper focus on?", "answers": [ { "answer": "English", "type": "extractive" }, { "answer": "Simple English", "type": "extractive" } ], "q_uid": "2c6b50877133a499502feb79a682f4023ddab63e", "evidence": [ { "raw_evidence": [ "We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with the Stanford CorNLP BIBREF21 , and deleted these sentences whose number of words are smaller than 10 or large than 40. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K." 
], "highlighted_evidence": [ "We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 ." ] }, { "raw_evidence": [ "We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with the Stanford CorNLP BIBREF21 , and deleted these sentences whose number of words are smaller than 10 or large than 40. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K." ], "highlighted_evidence": [ "We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . " ] } ] }, { "question": "what evaluation metrics did they use?", "answers": [ { "answer": "BLEU , FKGL , SARI ", "type": "extractive" }, { "answer": "BLEU, FKGL, SARI, Simplicity", "type": "extractive" } ], "q_uid": "f651cd144b7749e82aa1374779700812f64c8799", "evidence": [ { "raw_evidence": [ "Metrics. Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 ." ], "highlighted_evidence": [ "Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 ." ] }, { "raw_evidence": [ "Metrics. Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 .", "We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 . The three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally (0), somewhat more difficult (-1), and much more difficult (-2) than the reference sentence." ], "highlighted_evidence": [ "Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. 
FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 .", "We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 ." ] } ] }, { "question": "by how much did their model improve?", "answers": [ { "answer": "For the WikiLarge dataset, the improvement over baseline NMT is 2.11 BLEU, 1.7 FKGL and 1.07 SARI.\nFor the WikiSmall dataset, the improvement over baseline NMT is 8.37 BLEU.", "type": "abstractive" }, { "answer": "6.37 BLEU", "type": "extractive" } ], "q_uid": "4625cfba3083346a96e573af5464bc26c34ec943", "evidence": [ { "raw_evidence": [ "Results. Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, who previously reported SOTA result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT on synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. It indicates that our method with simplified data is effective at creating simpler output.", "Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data." ], "highlighted_evidence": [ " Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. ", "Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. " ] }, { "raw_evidence": [ "Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data." ], "highlighted_evidence": [ "We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences." 
] } ] }, { "question": "what state of the art methods did they compare with?", "answers": [ { "answer": "OpenNMT, PBMT-R, Hybrid, SBMT-SARI, Dress", "type": "extractive" } ], "q_uid": "326588b1de9ba0fd049ab37c907e6e5413e14acd", "evidence": [ { "raw_evidence": [ "Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al.(2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the highest probability score from the attention layer. OpenNMT system used on parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT used on parallel data and synthetic data is our model. The benchmarks are run on a Intel(R) Core(TM) i7-5930K CPU@3.50GHz, 32GB Mem, trained on 1 GPU GeForce GTX 1080 (Pascal) with CUDA v. 8.0.", "We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses OpenNMT framework to train with two LSTM layers, hidden states of size 500 and 500 hidden units, SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60 000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on synthetic data and the available parallel data, denoted as NMT+synthetic." ], "highlighted_evidence": [ "We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 .", "PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI).", "Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 ." ] } ] }, { "question": "what are the sizes of both datasets?", "answers": [ { "answer": "training set has 89,042 sentence pairs, and the test set has 100 pairs, training set contains 296,402, 2,000 for development and 359 for testing", "type": "extractive" }, { "answer": "WikiSmall 89 142 sentence pair and WikiLarge 298 761 sentence pairs. ", "type": "abstractive" } ], "q_uid": "ebf0d9f9260ed61cbfd79b962df3899d05f9ebfb", "evidence": [ { "raw_evidence": [ "Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . 
The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing." ], "highlighted_evidence": [ "WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing." ] }, { "raw_evidence": [ "Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing." ], "highlighted_evidence": [ "We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing." ] } ] } ], "2004.02192": [ { "question": "What are the distinctive characteristics of how Arabic speakers use offensive language?", "answers": [ { "answer": "Frequent use of direct animal name calling, using simile and metaphors, through indirect speech like sarcasm, wishing evil to others, name alteration, societal stratification, immoral behavior and sexually related uses.", "type": "abstractive" }, { "answer": "Direct name calling, Simile and metaphor, Indirect speech, Wishing Evil, Name alteration, Societal stratification, Immoral behavior, Sexually related", "type": "extractive" } ], "q_uid": "55507f066073b29c1736b684c09c045064053ba9", "evidence": [ { "raw_evidence": [ "Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:", "Direct name calling: The most frequent attack is to call a person an animal name, and the most used animals were \u0643\u0644\u0628> (\u201cklb\u201d \u2013 \u201cdog\u201d), \u062d\u0645\u0627\u0631> (\u201cHmAr\u201d \u2013 \u201cdonkey\u201d), and \u0628\u0647\u064a\u0645> (\u201cbhym\u201d \u2013 \u201cbeast\u201d). The second most common was insulting mental abilities using words such as \u063a\u0628\u064a> (\u201cgby\u201d \u2013 \u201cstupid\u201d) and \u0639\u0628\u064a\u0637> (\u201cEbyT\u201d \u2013\u201cidiot\u201d). Some culture-specific differences should be considered. Not all animal names are used as insults. 
For example, animals such as \u0623\u0633\u062f> (\u201cAsd\u201d \u2013 \u201clion\u201d), \u0635\u0642\u0631> (\u201cSqr\u201d \u2013 \u201cfalcon\u201d), and \u063a\u0632\u0627\u0644> (\u201cgzAl\u201d \u2013 \u201cgazelle\u201d) are typically used for praise. For other insults, people use: some bird names such as \u062f\u062c\u0627\u062c\u0629> (\u201cdjAjp\u201d \u2013 \u201cchicken\u201d), \u0628\u0648\u0645\u0629> (\u201cbwmp\u201d \u2013 \u201cowl\u201d), and \u063a\u0631\u0627\u0628> (\u201cgrAb\u201d \u2013 \u201ccrow\u201d); insects such as \u0630\u0628\u0627\u0628\u0629> (\u201c*bAbp\u201d \u2013 \u201cfly\u201d), \u0635\u0631\u0635\u0648\u0631> (\u201cSrSwr\u201d \u2013 \u201ccockroach\u201d), and \u062d\u0634\u0631\u0629> (\u201cH$rp\u201d \u2013 \u201cinsect\u201d); microorganisms such as \u062c\u0631\u062b\u0648\u0645\u0629> (\u201cjrvwmp\u201d \u2013 \u201cmicrobe\u201d) and \u0637\u062d\u0627\u0644\u0628> (\u201cTHAlb\u201d \u2013 \u201calgae\u201d); inanimate objects such as \u062c\u0632\u0645\u0629> (\u201cjzmp\u201d \u2013 \u201cshoes\u201d) and \u0633\u0637\u0644> (\u201csTl\u201d \u2013 \u201cbucket\u201d) among other usages.", "Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in \u0632\u064a \u0627\u0644\u062b\u0648\u0631> (\u201czy Alvwr\u201d \u2013 \u201clike a bull\u201d), \u0633\u0645\u0639\u0646\u064a \u0646\u0647\u064a\u0642\u0643> (\u201csmEny nhyqk\u201d \u2013 \u201clet me hear your braying\u201d), and \u0647\u0632 \u062f\u064a\u0644\u0643> (\u201chz dylk\u201d \u2013 \u201cwag your tail\u201d); a person with mental or physical disability such as \u0645\u0646\u063a\u0648\u0644\u064a> (\u201cmngwly\u201d \u2013 \u201cMongolian (down-syndrome)\u201d), \u0645\u0639\u0648\u0642> (\u201cmEwq\u201d \u2013 \u201cdisabled\u201d), and \u0642\u0632\u0645> (\u201cqzm\u201d \u2013 \u201cdwarf\u201d); and to the opposite gender such as \u062c\u064a\u0634 \u0646\u0648\u0627\u0644> (\u201cjy$ nwAl\u201d \u2013 \u201cNawal's army (Nawal is female name)\u201d) and \u0646\u0627\u062f\u064a \u0632\u064a\u0632\u064a> (\u201cnAdy zyzy\u201d \u2013 \u201cZizi's club (Zizi is a female pet name)\u201d).", "Indirect speech: This type of offensive language includes: sarcasm such as \u0623\u0630\u0643\u0649 \u0625\u062e\u0648\u0627\u062a\u0643> (\u201cA*kY AxwAtk\u201d \u2013 \u201csmartest one of your siblings\u201d) and \u0641\u064a\u0644\u0633\u0648\u0641 \u0627\u0644\u062d\u0645\u064a\u0631> (\u201cfylswf AlHmyr\u201d \u2013 \u201cthe donkeys' philosopher\u201d); questions such as \u0627\u064a\u0647 \u0643\u0644 \u0627\u0644\u063a\u0628\u0627\u0621 \u062f\u0647> (\u201cAyh kl AlgbA dh\u201d \u2013 \u201cwhat is all this stupidity\u201d); and indirect speech such as \u0627\u0644\u0646\u0642\u0627\u0634 \u0645\u0639 \u0627\u0644\u0628\u0647\u0627\u064a\u0645 \u063a\u064a\u0631 \u0645\u062b\u0645\u0631> (\u201cAlnqA$ mE AlbhAym gyr mvmr\u201d \u2013 \u201cno use talking to cattle\u201d).", "Wishing Evil: This entails wishing death or major harm to befall someone such as \u0631\u0628\u0646\u0627 \u064a\u0627\u062e\u062f\u0643> (\u201crbnA yAxdk\u201d \u2013 \u201cMay God take (kill) you\u201d), \u0627\u0644\u0644\u0647 \u064a\u0644\u0639\u0646\u0643> (\u201cAllh ylEnk\u201d \u2013 \u201cmay Allah/God curse you\u201d), and \u0631\u0648\u062d \u0641\u064a \u062f\u0627\u0647\u064a\u0629> (\u201crwH fy dAhyp\u201d \u2013 equivalent to \u201cgo to hell\u201d).", "Name alteration: One common way to insult others is to change a letter or 
two in their names to produce new offensive words that rhyme with the original names. Some examples of such include changing \u0627\u0644\u062c\u0632\u064a\u0631\u0629> (\u201cAljzyrp\u201d \u2013 \u201cAljazeera (channel)\u201d) to \u0627\u0644\u062e\u0646\u0632\u064a\u0631\u0629> (\u201cAlxnzyrp\u201d \u2013 \u201cthe pig\u201d) and \u062e\u0644\u0641\u0627\u0646> (\u201cxlfAn\u201d \u2013 \u201cKhalfan (person name)\u201d) to \u062e\u0631\u0641\u0627\u0646> (\u201cxrfAn\u201d \u2013 \u201ccrazed\u201d).", "Societal stratification: Some insults are associated with: certain jobs such as \u0628\u0648\u0627\u0628> (\u201cbwAb\u201d \u2013 \u201cdoorman\u201d) or \u062e\u0627\u062f\u0645> (\u201cxAdm\u201d \u2013 \u201cservant\u201d); and specific societal components such \u0628\u062f\u0648\u064a> (\u201cbdwy\u201d \u2013 \u201cbedouin\u201d) and \u0641\u0644\u0627\u062d> (\u201cflAH\u201d \u2013 \u201cfarmer\u201d).", "Immoral behavior: These insults are associated with negative moral traits or behaviors such as \u062d\u0642\u064a\u0631> (\u201cHqyr\u201d \u2013 \u201cvile\u201d), \u062e\u0627\u064a\u0646> (\u201cxAyn\u201d \u2013 \u201ctraitor\u201d), and \u0645\u0646\u0627\u0641\u0642> (\u201cmnAfq\u201d \u2013 \u201chypocrite\u201d).", "Sexually related: They include expressions such as \u062e\u0648\u0644> (\u201cxwl\u201d \u2013 \u201cgay\u201d), \u0648\u0633\u062e\u0629> (\u201cwsxp\u201d \u2013 \u201cprostitute\u201d), and \u0639\u0631\u0635> (\u201cErS\u201d \u2013 \u201cpimp\u201d)." ], "highlighted_evidence": [ "Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:\n\nDirect name calling: The most frequent attack is to call a person an animal name, and the most used animals were \u0643\u0644\u0628> (\u201cklb\u201d \u2013 \u201cdog\u201d), \u062d\u0645\u0627\u0631> (\u201cHmAr\u201d \u2013 \u201cdonkey\u201d), and \u0628\u0647\u064a\u0645> (\u201cbhym\u201d \u2013 \u201cbeast\u201d). The second most common was insulting mental abilities using words such as \u063a\u0628\u064a> (\u201cgby\u201d \u2013 \u201cstupid\u201d) and \u0639\u0628\u064a\u0637> (\u201cEbyT\u201d \u2013\u201cidiot\u201d). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as \u0623\u0633\u062f> (\u201cAsd\u201d \u2013 \u201clion\u201d), \u0635\u0642\u0631> (\u201cSqr\u201d \u2013 \u201cfalcon\u201d), and \u063a\u0632\u0627\u0644> (\u201cgzAl\u201d \u2013 \u201cgazelle\u201d) are typically used for praise. 
For other insults, people use: some bird names such as \u062f\u062c\u0627\u062c\u0629> (\u201cdjAjp\u201d \u2013 \u201cchicken\u201d), \u0628\u0648\u0645\u0629> (\u201cbwmp\u201d \u2013 \u201cowl\u201d), and \u063a\u0631\u0627\u0628> (\u201cgrAb\u201d \u2013 \u201ccrow\u201d); insects such as \u0630\u0628\u0627\u0628\u0629> (\u201c*bAbp\u201d \u2013 \u201cfly\u201d), \u0635\u0631\u0635\u0648\u0631> (\u201cSrSwr\u201d \u2013 \u201ccockroach\u201d), and \u062d\u0634\u0631\u0629> (\u201cH$rp\u201d \u2013 \u201cinsect\u201d); microorganisms such as \u062c\u0631\u062b\u0648\u0645\u0629> (\u201cjrvwmp\u201d \u2013 \u201cmicrobe\u201d) and \u0637\u062d\u0627\u0644\u0628> (\u201cTHAlb\u201d \u2013 \u201calgae\u201d); inanimate objects such as \u062c\u0632\u0645\u0629> (\u201cjzmp\u201d \u2013 \u201cshoes\u201d) and \u0633\u0637\u0644> (\u201csTl\u201d \u2013 \u201cbucket\u201d) among other usages.\n\nSimile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in \u0632\u064a \u0627\u0644\u062b\u0648\u0631> (\u201czy Alvwr\u201d \u2013 \u201clike a bull\u201d), \u0633\u0645\u0639\u0646\u064a \u0646\u0647\u064a\u0642\u0643> (\u201csmEny nhyqk\u201d \u2013 \u201clet me hear your braying\u201d), and \u0647\u0632 \u062f\u064a\u0644\u0643> (\u201chz dylk\u201d \u2013 \u201cwag your tail\u201d); a person with mental or physical disability such as \u0645\u0646\u063a\u0648\u0644\u064a> (\u201cmngwly\u201d \u2013 \u201cMongolian (down-syndrome)\u201d), \u0645\u0639\u0648\u0642> (\u201cmEwq\u201d \u2013 \u201cdisabled\u201d), and \u0642\u0632\u0645> (\u201cqzm\u201d \u2013 \u201cdwarf\u201d); and to the opposite gender such as \u062c\u064a\u0634 \u0646\u0648\u0627\u0644> (\u201cjy$ nwAl\u201d \u2013 \u201cNawal's army (Nawal is female name)\u201d) and \u0646\u0627\u062f\u064a \u0632\u064a\u0632\u064a> (\u201cnAdy zyzy\u201d \u2013 \u201cZizi's club (Zizi is a female pet name)\u201d).\n\nIndirect speech: This type of offensive language includes: sarcasm such as \u0623\u0630\u0643\u0649 \u0625\u062e\u0648\u0627\u062a\u0643> (\u201cA*kY AxwAtk\u201d \u2013 \u201csmartest one of your siblings\u201d) and \u0641\u064a\u0644\u0633\u0648\u0641 \u0627\u0644\u062d\u0645\u064a\u0631> (\u201cfylswf AlHmyr\u201d \u2013 \u201cthe donkeys' philosopher\u201d); questions such as \u0627\u064a\u0647 \u0643\u0644 \u0627\u0644\u063a\u0628\u0627\u0621 \u062f\u0647> (\u201cAyh kl AlgbA dh\u201d \u2013 \u201cwhat is all this stupidity\u201d); and indirect speech such as \u0627\u0644\u0646\u0642\u0627\u0634 \u0645\u0639 \u0627\u0644\u0628\u0647\u0627\u064a\u0645 \u063a\u064a\u0631 \u0645\u062b\u0645\u0631> (\u201cAlnqA$ mE AlbhAym gyr mvmr\u201d \u2013 \u201cno use talking to cattle\u201d).\n\nWishing Evil: This entails wishing death or major harm to befall someone such as \u0631\u0628\u0646\u0627 \u064a\u0627\u062e\u062f\u0643> (\u201crbnA yAxdk\u201d \u2013 \u201cMay God take (kill) you\u201d), \u0627\u0644\u0644\u0647 \u064a\u0644\u0639\u0646\u0643> (\u201cAllh ylEnk\u201d \u2013 \u201cmay Allah/God curse you\u201d), and \u0631\u0648\u062d \u0641\u064a \u062f\u0627\u0647\u064a\u0629> (\u201crwH fy dAhyp\u201d \u2013 equivalent to \u201cgo to hell\u201d).\n\nName alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. 
Some examples of such include changing \u0627\u0644\u062c\u0632\u064a\u0631\u0629> (\u201cAljzyrp\u201d \u2013 \u201cAljazeera (channel)\u201d) to \u0627\u0644\u062e\u0646\u0632\u064a\u0631\u0629> (\u201cAlxnzyrp\u201d \u2013 \u201cthe pig\u201d) and \u062e\u0644\u0641\u0627\u0646> (\u201cxlfAn\u201d \u2013 \u201cKhalfan (person name)\u201d) to \u062e\u0631\u0641\u0627\u0646> (\u201cxrfAn\u201d \u2013 \u201ccrazed\u201d).\n\nSocietal stratification: Some insults are associated with: certain jobs such as \u0628\u0648\u0627\u0628> (\u201cbwAb\u201d \u2013 \u201cdoorman\u201d) or \u062e\u0627\u062f\u0645> (\u201cxAdm\u201d \u2013 \u201cservant\u201d); and specific societal components such \u0628\u062f\u0648\u064a> (\u201cbdwy\u201d \u2013 \u201cbedouin\u201d) and \u0641\u0644\u0627\u062d> (\u201cflAH\u201d \u2013 \u201cfarmer\u201d).\n\nImmoral behavior: These insults are associated with negative moral traits or behaviors such as \u062d\u0642\u064a\u0631> (\u201cHqyr\u201d \u2013 \u201cvile\u201d), \u062e\u0627\u064a\u0646> (\u201cxAyn\u201d \u2013 \u201ctraitor\u201d), and \u0645\u0646\u0627\u0641\u0642> (\u201cmnAfq\u201d \u2013 \u201chypocrite\u201d).\n\nSexually related: They include expressions such as \u062e\u0648\u0644> (\u201cxwl\u201d \u2013 \u201cgay\u201d), \u0648\u0633\u062e\u0629> (\u201cwsxp\u201d \u2013 \u201cprostitute\u201d), and \u0639\u0631\u0635> (\u201cErS\u201d \u2013 \u201cpimp\u201d)." ] }, { "raw_evidence": [ "Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:", "Direct name calling: The most frequent attack is to call a person an animal name, and the most used animals were \u0643\u0644\u0628> (\u201cklb\u201d \u2013 \u201cdog\u201d), \u062d\u0645\u0627\u0631> (\u201cHmAr\u201d \u2013 \u201cdonkey\u201d), and \u0628\u0647\u064a\u0645> (\u201cbhym\u201d \u2013 \u201cbeast\u201d). The second most common was insulting mental abilities using words such as \u063a\u0628\u064a> (\u201cgby\u201d \u2013 \u201cstupid\u201d) and \u0639\u0628\u064a\u0637> (\u201cEbyT\u201d \u2013\u201cidiot\u201d). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as \u0623\u0633\u062f> (\u201cAsd\u201d \u2013 \u201clion\u201d), \u0635\u0642\u0631> (\u201cSqr\u201d \u2013 \u201cfalcon\u201d), and \u063a\u0632\u0627\u0644> (\u201cgzAl\u201d \u2013 \u201cgazelle\u201d) are typically used for praise. 
For other insults, people use: some bird names such as \u062f\u062c\u0627\u062c\u0629> (\u201cdjAjp\u201d \u2013 \u201cchicken\u201d), \u0628\u0648\u0645\u0629> (\u201cbwmp\u201d \u2013 \u201cowl\u201d), and \u063a\u0631\u0627\u0628> (\u201cgrAb\u201d \u2013 \u201ccrow\u201d); insects such as \u0630\u0628\u0627\u0628\u0629> (\u201c*bAbp\u201d \u2013 \u201cfly\u201d), \u0635\u0631\u0635\u0648\u0631> (\u201cSrSwr\u201d \u2013 \u201ccockroach\u201d), and \u062d\u0634\u0631\u0629> (\u201cH$rp\u201d \u2013 \u201cinsect\u201d); microorganisms such as \u062c\u0631\u062b\u0648\u0645\u0629> (\u201cjrvwmp\u201d \u2013 \u201cmicrobe\u201d) and \u0637\u062d\u0627\u0644\u0628> (\u201cTHAlb\u201d \u2013 \u201calgae\u201d); inanimate objects such as \u062c\u0632\u0645\u0629> (\u201cjzmp\u201d \u2013 \u201cshoes\u201d) and \u0633\u0637\u0644> (\u201csTl\u201d \u2013 \u201cbucket\u201d) among other usages.", "Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in \u0632\u064a \u0627\u0644\u062b\u0648\u0631> (\u201czy Alvwr\u201d \u2013 \u201clike a bull\u201d), \u0633\u0645\u0639\u0646\u064a \u0646\u0647\u064a\u0642\u0643> (\u201csmEny nhyqk\u201d \u2013 \u201clet me hear your braying\u201d), and \u0647\u0632 \u062f\u064a\u0644\u0643> (\u201chz dylk\u201d \u2013 \u201cwag your tail\u201d); a person with mental or physical disability such as \u0645\u0646\u063a\u0648\u0644\u064a> (\u201cmngwly\u201d \u2013 \u201cMongolian (down-syndrome)\u201d), \u0645\u0639\u0648\u0642> (\u201cmEwq\u201d \u2013 \u201cdisabled\u201d), and \u0642\u0632\u0645> (\u201cqzm\u201d \u2013 \u201cdwarf\u201d); and to the opposite gender such as \u062c\u064a\u0634 \u0646\u0648\u0627\u0644> (\u201cjy$ nwAl\u201d \u2013 \u201cNawal's army (Nawal is female name)\u201d) and \u0646\u0627\u062f\u064a \u0632\u064a\u0632\u064a> (\u201cnAdy zyzy\u201d \u2013 \u201cZizi's club (Zizi is a female pet name)\u201d).", "Indirect speech: This type of offensive language includes: sarcasm such as \u0623\u0630\u0643\u0649 \u0625\u062e\u0648\u0627\u062a\u0643> (\u201cA*kY AxwAtk\u201d \u2013 \u201csmartest one of your siblings\u201d) and \u0641\u064a\u0644\u0633\u0648\u0641 \u0627\u0644\u062d\u0645\u064a\u0631> (\u201cfylswf AlHmyr\u201d \u2013 \u201cthe donkeys' philosopher\u201d); questions such as \u0627\u064a\u0647 \u0643\u0644 \u0627\u0644\u063a\u0628\u0627\u0621 \u062f\u0647> (\u201cAyh kl AlgbA dh\u201d \u2013 \u201cwhat is all this stupidity\u201d); and indirect speech such as \u0627\u0644\u0646\u0642\u0627\u0634 \u0645\u0639 \u0627\u0644\u0628\u0647\u0627\u064a\u0645 \u063a\u064a\u0631 \u0645\u062b\u0645\u0631> (\u201cAlnqA$ mE AlbhAym gyr mvmr\u201d \u2013 \u201cno use talking to cattle\u201d).", "Wishing Evil: This entails wishing death or major harm to befall someone such as \u0631\u0628\u0646\u0627 \u064a\u0627\u062e\u062f\u0643> (\u201crbnA yAxdk\u201d \u2013 \u201cMay God take (kill) you\u201d), \u0627\u0644\u0644\u0647 \u064a\u0644\u0639\u0646\u0643> (\u201cAllh ylEnk\u201d \u2013 \u201cmay Allah/God curse you\u201d), and \u0631\u0648\u062d \u0641\u064a \u062f\u0627\u0647\u064a\u0629> (\u201crwH fy dAhyp\u201d \u2013 equivalent to \u201cgo to hell\u201d).", "Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. 
Some examples of such include changing \u0627\u0644\u062c\u0632\u064a\u0631\u0629> (\u201cAljzyrp\u201d \u2013 \u201cAljazeera (channel)\u201d) to \u0627\u0644\u062e\u0646\u0632\u064a\u0631\u0629> (\u201cAlxnzyrp\u201d \u2013 \u201cthe pig\u201d) and \u062e\u0644\u0641\u0627\u0646> (\u201cxlfAn\u201d \u2013 \u201cKhalfan (person name)\u201d) to \u062e\u0631\u0641\u0627\u0646> (\u201cxrfAn\u201d \u2013 \u201ccrazed\u201d).", "Societal stratification: Some insults are associated with: certain jobs such as \u0628\u0648\u0627\u0628> (\u201cbwAb\u201d \u2013 \u201cdoorman\u201d) or \u062e\u0627\u062f\u0645> (\u201cxAdm\u201d \u2013 \u201cservant\u201d); and specific societal components such \u0628\u062f\u0648\u064a> (\u201cbdwy\u201d \u2013 \u201cbedouin\u201d) and \u0641\u0644\u0627\u062d> (\u201cflAH\u201d \u2013 \u201cfarmer\u201d).", "Immoral behavior: These insults are associated with negative moral traits or behaviors such as \u062d\u0642\u064a\u0631> (\u201cHqyr\u201d \u2013 \u201cvile\u201d), \u062e\u0627\u064a\u0646> (\u201cxAyn\u201d \u2013 \u201ctraitor\u201d), and \u0645\u0646\u0627\u0641\u0642> (\u201cmnAfq\u201d \u2013 \u201chypocrite\u201d).", "Sexually related: They include expressions such as \u062e\u0648\u0644> (\u201cxwl\u201d \u2013 \u201cgay\u201d), \u0648\u0633\u062e\u0629> (\u201cwsxp\u201d \u2013 \u201cprostitute\u201d), and \u0639\u0631\u0635> (\u201cErS\u201d \u2013 \u201cpimp\u201d)." ], "highlighted_evidence": [ "Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:\n\nDirect name calling: The most frequent attack is to call a person an animal name, and the most used animals were \u0643\u0644\u0628> (\u201cklb\u201d \u2013 \u201cdog\u201d), \u062d\u0645\u0627\u0631> (\u201cHmAr\u201d \u2013 \u201cdonkey\u201d), and \u0628\u0647\u064a\u0645> (\u201cbhym\u201d \u2013 \u201cbeast\u201d).", "Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in \u0632\u064a \u0627\u0644\u062b\u0648\u0631> (\u201czy Alvwr\u201d \u2013 \u201clike a bull\u201d), \u0633\u0645\u0639\u0646\u064a \u0646\u0647\u064a\u0642\u0643> (\u201csmEny nhyqk\u201d \u2013 \u201clet me hear your braying\u201d), and \u0647\u0632 \u062f\u064a\u0644\u0643> (\u201chz dylk\u201d \u2013 \u201cwag your tail\u201d); a person with mental or physical disability such as \u0645\u0646\u063a\u0648\u0644\u064a> (\u201cmngwly\u201d \u2013 \u201cMongolian (down-syndrome)\u201d), \u0645\u0639\u0648\u0642> (\u201cmEwq\u201d \u2013 \u201cdisabled\u201d), and \u0642\u0632\u0645> (\u201cqzm\u201d \u2013 \u201cdwarf\u201d); and to the opposite gender such as \u062c\u064a\u0634 \u0646\u0648\u0627\u0644> (\u201cjy$ nwAl\u201d \u2013 \u201cNawal's army (Nawal is female name)\u201d) and \u0646\u0627\u062f\u064a \u0632\u064a\u0632\u064a> (\u201cnAdy zyzy\u201d \u2013 \u201cZizi's club (Zizi is a female pet name)\u201d).", "Indirect speech: This type of offensive language includes: sarcasm such as \u0623\u0630\u0643\u0649 \u0625\u062e\u0648\u0627\u062a\u0643> (\u201cA*kY AxwAtk\u201d \u2013 \u201csmartest one of your siblings\u201d) and \u0641\u064a\u0644\u0633\u0648\u0641 \u0627\u0644\u062d\u0645\u064a\u0631> (\u201cfylswf AlHmyr\u201d \u2013 \u201cthe donkeys' philosopher\u201d); questions such as \u0627\u064a\u0647 \u0643\u0644 \u0627\u0644\u063a\u0628\u0627\u0621 \u062f\u0647> (\u201cAyh kl AlgbA dh\u201d \u2013 \u201cwhat is all this stupidity\u201d); and indirect speech such as 
\u0627\u0644\u0646\u0642\u0627\u0634 \u0645\u0639 \u0627\u0644\u0628\u0647\u0627\u064a\u0645 \u063a\u064a\u0631 \u0645\u062b\u0645\u0631> (\u201cAlnqA$ mE AlbhAym gyr mvmr\u201d \u2013 \u201cno use talking to cattle\u201d).", "Wishing Evil: This entails wishing death or major harm to befall someone such as \u0631\u0628\u0646\u0627 \u064a\u0627\u062e\u062f\u0643> (\u201crbnA yAxdk\u201d \u2013 \u201cMay God take (kill) you\u201d), \u0627\u0644\u0644\u0647 \u064a\u0644\u0639\u0646\u0643> (\u201cAllh ylEnk\u201d \u2013 \u201cmay Allah/God curse you\u201d), and \u0631\u0648\u062d \u0641\u064a \u062f\u0627\u0647\u064a\u0629> (\u201crwH fy dAhyp\u201d \u2013 equivalent to \u201cgo to hell\u201d).", "Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names.", "Societal stratification: Some insults are associated with: certain jobs such as \u0628\u0648\u0627\u0628> (\u201cbwAb\u201d \u2013 \u201cdoorman\u201d) or \u062e\u0627\u062f\u0645> (\u201cxAdm\u201d \u2013 \u201cservant\u201d); and specific societal components such \u0628\u062f\u0648\u064a> (\u201cbdwy\u201d \u2013 \u201cbedouin\u201d) and \u0641\u0644\u0627\u062d> (\u201cflAH\u201d \u2013 \u201cfarmer\u201d).", "Immoral behavior: These insults are associated with negative moral traits or behaviors such as \u062d\u0642\u064a\u0631> (\u201cHqyr\u201d \u2013 \u201cvile\u201d), \u062e\u0627\u064a\u0646> (\u201cxAyn\u201d \u2013 \u201ctraitor\u201d), and \u0645\u0646\u0627\u0641\u0642> (\u201cmnAfq\u201d \u2013 \u201chypocrite\u201d).\n\nSexually related: They include expressions such as \u062e\u0648\u0644> (\u201cxwl\u201d \u2013 \u201cgay\u201d), \u0648\u0633\u062e\u0629> (\u201cwsxp\u201d \u2013 \u201cprostitute\u201d), and \u0639\u0631\u0635> (\u201cErS\u201d \u2013 \u201cpimp\u201d)." ] } ] }, { "question": "How did they analyze which topics, dialects and gender are most associated with tweets?", "answers": [ { "answer": "ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language", "type": "extractive" } ], "q_uid": "e838275bb0673fba0d67ac00e4307944a2c17be3", "evidence": [ { "raw_evidence": [ "Given the annotated tweets, we wanted to ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language.", "Figure FIGREF13 shows the distribution of topics associated with offensive tweets. As the figure shows, sports and politics are most dominant for offensive language including vulgar and hate speech. As for dialect, we looked at MSA and four major dialects, namely Egyptian (EGY), Leventine (LEV), Maghrebi (MGR), and Gulf (GLF). Figure FIGREF14 shows that 71% of vulgar tweets were written in EGY followed by GLF, which accounted for 13% of vulgar tweets. MSA was not used in any of the vulgar tweets. As for offensive tweets in general, EGY and GLF were used in 36% and 35% of the offensive tweets respectively. Unlike the case of vulgar language where MSA was non-existent, 15% of the offensive tweets were in fact written in MSA. For hate speech, GLF and EGY were again dominant and MSA consistuted 21% of the tweets. This is consistent with findings for other languages such as English and Italian where vulgar language was more frequently associated with colloquial language BIBREF24, BIBREF25. 
Regarding the gender, Figure FIGREF15 shows that the vast majority of offensive tweets, including vulgar and hate speech, were authored by males. Female Twitter users accounted for 14% of offensive tweets in general and 6% and 9% of vulgar and hate speech respectively. Figure FIGREF16 shows a detailed categorization of hate speech types, where the top three include insulting groups based on their political ideology, origin, and sport affiliation. Religious hate speech appeared in only 15% of all hate speech tweets." ], "highlighted_evidence": [ "Given the annotated tweets, we wanted to ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language.", "As the figure shows, sports and politics are most dominant for offensive language including vulgar and hate speech. As for dialect, we looked at MSA and four major dialects, namely Egyptian (EGY), Leventine (LEV), Maghrebi (MGR), and Gulf (GLF). Figure FIGREF14 shows that 71% of vulgar tweets were written in EGY followed by GLF, which accounted for 13% of vulgar tweets. MSA was not used in any of the vulgar tweets. As for offensive tweets in general, EGY and GLF were used in 36% and 35% of the offensive tweets respectively. Unlike the case of vulgar language where MSA was non-existent, 15% of the offensive tweets were in fact written in MSA. For hate speech, GLF and EGY were again dominant and MSA consistuted 21% of the tweets. This is consistent with findings for other languages such as English and Italian where vulgar language was more frequently associated with colloquial language BIBREF24, BIBREF25. Regarding the gender, Figure FIGREF15 shows that the vast majority of offensive tweets, including vulgar and hate speech, were authored by males. Female Twitter users accounted for 14% of offensive tweets in general and 6% and 9% of vulgar and hate speech respectively. Figure FIGREF16 shows a detailed categorization of hate speech types, where the top three include insulting groups based on their political ideology, origin, and sport affiliation. Religious hate speech appeared in only 15% of all hate speech tweets." ] } ] }, { "question": "How many annotators tagged each tweet?", "answers": [ { "answer": "One", "type": "abstractive" }, { "answer": "One experienced annotator tagged all tweets", "type": "abstractive" } ], "q_uid": "8dda1ef371933811e2a25a286529c31623cca0c6", "evidence": [ { "raw_evidence": [ "We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:" ], "highlighted_evidence": [ "The annotator carried out all annotation." ] }, { "raw_evidence": [ "We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. 
Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:" ], "highlighted_evidence": [ "We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. " ] } ] }, { "question": "How many tweets are in the dataset?", "answers": [ { "answer": "10,000 Arabic tweet dataset ", "type": "extractive" }, { "answer": "10,000", "type": "extractive" } ], "q_uid": "b3de9357c569fb1454be8f2ac5fcecaea295b967", "evidence": [ { "raw_evidence": [ "Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)." 
], "highlighted_evidence": [ "Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. " ] }, { "raw_evidence": [ "Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)." ], "highlighted_evidence": [ " Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets." ] } ] }, { "question": "In what way is the offensive dataset not biased by topic, dialect or target?", "answers": [ { "answer": "It does not use a seed list to gather tweets so the dataset does not skew to specific topics, dialect, targets.", "type": "abstractive" }, { "answer": "our methodology does not use a seed list of offensive words", "type": "extractive" } ], "q_uid": "59e58c6fc63cf5b54b632462465bfbd85b1bf3dd", "evidence": [ { "raw_evidence": [ "Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. 
Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)." ], "highlighted_evidence": [ "Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. " ] }, { "raw_evidence": [ "Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. 
Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)." ], "highlighted_evidence": [ "Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect." ] } ] } ], "1909.06200": [ { "question": "What experiments are conducted?", "answers": [ { "answer": "Irony Classifier, Sentiment Classifier for Irony, Sentiment Classifier for Non-irony, transformation from ironic sentences to non-ironic sentences", "type": "extractive" } ], "q_uid": "5c3e98e3cebaecd5d3e75ec2c9fc3dd267ac3c83", "evidence": [ { "raw_evidence": [ "Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 .", "Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of Semeval 2015 Task 11 BIBREF0 which is used for the sentiment analysis of figurative language in twitter. Then, we use the positive ironies and negative ironies to train the CNN sentiment classifier for irony.", "Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony.", "In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Sometimes ironies are hard to understand and may cause misunderstanding, for which our task also explores the transformation from ironic sentences to non-ironic sentences." 
], "highlighted_evidence": [ "Irony Classifier: We implement a CNN classifier trained with our irony dataset.", "Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies.", "Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies.", "In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences." ] } ] }, { "question": "What is the combination of rewards for reinforcement learning?", "answers": [ { "answer": "irony accuracy, sentiment preservation", "type": "extractive" }, { "answer": " irony accuracy and sentiment preservation", "type": "extractive" } ], "q_uid": "3f0ae9b772eeddfbfd239b7e3196dc6dfa21365f", "evidence": [ { "raw_evidence": [ "Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively." ], "highlighted_evidence": [ "Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively." ] }, { "raw_evidence": [ "Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively." ], "highlighted_evidence": [ "Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively." ] } ] }, { "question": "What are the difficulties in modelling the ironic pattern?", "answers": [ { "answer": "obscure and hard to understand, lack of previous work and baselines on irony generation", "type": "extractive" }, { "answer": "ironies are often obscure and hard to understand", "type": "abstractive" } ], "q_uid": "14b8ae5656e7d4ee02237288372d9e682b24fdb8", "evidence": [ { "raw_evidence": [ "Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is \u201cI hate it when my plans get ruined\" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as \u201cI like it when my plans get ruined\". The speaker uses \u201clike\" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. 
According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence \u201cI hate to be ignored\", we train our model to generate an ironic sentence such as \u201cI love to be ignored\". Although there is \u201clove\" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation." ], "highlighted_evidence": [ "According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. ", " Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. " ] }, { "raw_evidence": [ "Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is \u201cI hate it when my plans get ruined\" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as \u201cI like it when my plans get ruined\". The speaker uses \u201clike\" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. 
As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence \u201cI hate to be ignored\", we train our model to generate an ironic sentence such as \u201cI love to be ignored\". Although there is \u201clove\" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation." ], "highlighted_evidence": [ "According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work." ] } ] }, { "question": "How did the authors find ironic data on twitter?", "answers": [ { "answer": "They developed a classifier to find ironic sentences in twitter data", "type": "abstractive" }, { "answer": "by crawling", "type": "abstractive" } ], "q_uid": "e3a2d8886f03e78ed5e138df870f48635875727e", "evidence": [ { "raw_evidence": [ "As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, \u201cjust hanging out . watching . is it monday yet\" is hard to transfer because it does not have an explicit sentiment polarity. So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences." ], "highlighted_evidence": [ "Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection." 
] }, { "raw_evidence": [ "In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation." ], "highlighted_evidence": [ "In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. " ] } ] }, { "question": "Who judged the irony accuracy, sentiment preservation and content preservation?", "answers": [ { "answer": "Irony accuracy is judged only by human ; senriment preservation and content preservation are judged both by human and using automatic metrics (ACC and BLEU).", "type": "abstractive" }, { "answer": "four annotators who are proficient in English", "type": "extractive" } ], "q_uid": "62f27fe08ddb67f16857fab2a8a721926ecbb6fb", "evidence": [ { "raw_evidence": [ "In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. We call the value as sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated." ], "highlighted_evidence": [ "Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated." ] }, { "raw_evidence": [ "We first sample 50 non-ironic input sentences and their corresponding output sentences of different models. Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. 
They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content). The best output is ranked with 1 and the worst output is ranked with 6. That means that the smaller our human evaluation value is, the better the corresponding model is." ], "highlighted_evidence": [ "Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content)." ] } ] } ], "1706.06894": [ { "question": "How were the tweets annotated?", "answers": [ { "answer": "tweets are annotated with only Favor or Against for two targets - Galatasaray and Fenerbah\u00e7e", "type": "abstractive" } ], "q_uid": "9ca447c8959a693a3f7bdd0a2c516f4b86f95718", "evidence": [ { "raw_evidence": [ "We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs.", "In a previous study on the identification of public health-related tweets, two tweet data sets in Turkish (each set containing 1 million random tweets) have been compiled where these sets belong to two different periods of 20 consecutive days BIBREF11 . We have decided to use one of these sets (corresponding to the period between August 18 and September 6, 2015) and firstly filtered the tweets using the possible names used to refer to the target clubs. Then, we have annotated the stance information in the tweets for these targets as Favor or Against. Within the course of this study, we have not considered those tweets in which the target is not explicitly mentioned, as our initial filtering process reveals.", "For the purposes of the current study, we have not annotated any tweets with the Neither class. This stance class and even finer-grained classes can be considered in further annotation studies. We should also note that in a few tweets, the target of the stance was the management of the club while in some others a particular footballer of the club is praised or criticised. Still, we have considered the club as the target of the stance in all of the cases and carried out our annotations accordingly." ], "highlighted_evidence": [ "Fenerbah\u00e7e", "We have decided to consider tweets about popular sports clubs as our domain for stance detection. ", "Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. ", "Then, we have annotated the stance information in the tweets for these targets as Favor or Against.", "For the purposes of the current study, we have not annotated any tweets with the Neither class." 
] } ] }, { "question": "Which SVM approach resulted in the best performance?", "answers": [ { "answer": "Target-1", "type": "extractive" } ], "q_uid": "05887a8466e0a2f0df4d6a5ffc5815acd7d9066a", "evidence": [ { "raw_evidence": [ "The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set. The performance of the classifiers is better for the Favor class for both targets when compared with the performance results for the Against class. This outcome may be due to the common use of some terms when expressing positive stance towards sports clubs in Turkish tweets. The same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pattern is observed in stance detection results of baseline systems given in BIBREF0 , i.e., better F-Measure rates have been obtained for the Against class when compared with the Favor class BIBREF0 . Some of the baseline systems reported in BIBREF0 are SVM-based systems using unigrams and ngrams as features similar to our study, but their data sets include all three stance classes of Favor, Against, and Neither, while our data set comprises only tweets classified as belonging to Favor or Against classes. Another difference is that the data sets in BIBREF0 have been divided into training and test sets, while in our study we provide 10-fold cross-validation results on the whole data set. On the other hand, we should also note that SVM-based sentiment analysis systems (such as those given in BIBREF16 ) have been reported to achieve better F-Measure rates for the Positive sentiment class when compared with the results obtained for the Negative class. Therefore, our evaluation results for each stance class seem to be in line with such sentiment analysis systems. Yet, further experiments on the extended versions of our data set should be conducted and the results should again be compared with the stance detection results given in the literature." ], "highlighted_evidence": [ "The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set." ] } ] }, { "question": "What are hashtag features?", "answers": [ { "answer": "hashtag features contain whether there is any hashtag in the tweet", "type": "abstractive" } ], "q_uid": "c87fcc98625e82fdb494ff0f5309319620d69040", "evidence": [ { "raw_evidence": [ "With an intention to exploit the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams. The corresponding evaluation results of the SVM classifiers using unigrams together the existence of hashtags as features are provided in Table TABREF2 ." ], "highlighted_evidence": [ "With an intention to exploit the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams." 
] } ] }, { "question": "How many tweets did they collect?", "answers": [ { "answer": "700 ", "type": "extractive" }, { "answer": "700", "type": "extractive" } ], "q_uid": "500a8ec1c56502529d6e59ba6424331f797f31f0", "evidence": [ { "raw_evidence": [ "At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against." ], "highlighted_evidence": [ "At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. " ] }, { "raw_evidence": [ "At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against." ], "highlighted_evidence": [ "At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. " ] } ] }, { "question": "Which sports clubs are the targets?", "answers": [ { "answer": "Galatasaray, Fenerbah\u00e7e", "type": "extractive" }, { "answer": "Galatasaray , Fenerbah\u00e7e ", "type": "extractive" } ], "q_uid": "ff6c9af28f0e2bb4fb6a69f124665f8ceb966fbc", "evidence": [ { "raw_evidence": [ "We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs." ], "highlighted_evidence": [ "Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey." ] }, { "raw_evidence": [ "We have decided to consider tweets about popular sports clubs as our domain for stance detection. 
Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs." ], "highlighted_evidence": [ "Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. " ] } ] } ], "1908.11047": [ { "question": "Which syntactic features are obtained automatically on downstream task data?", "answers": [ { "answer": "token-level chunk label embeddings, chunk boundary information is passed into the task model via BIOUL encoding of the labels", "type": "extractive" } ], "q_uid": "9132d56e26844dc13b3355448d0f14b95bd2178a", "evidence": [ { "raw_evidence": [ "Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives. This approach does not require a shallow syntactic encoder or chunk annotations for pretraining cwrs, only a chunker. Hence, this can more directly measure the impact of shallow syntax for a given task." ], "highlighted_evidence": [ "Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives." ] } ] } ], "1908.09246": [ { "question": "What baseline approaches does this approach out-perform?", "answers": [ { "answer": "K-means, LEM BIBREF13, DPEMM BIBREF14", "type": "extractive" }, { "answer": "K-means, LEM, DPEMM", "type": "extractive" } ], "q_uid": "0602a974a879e6eae223cdf048410b5a0111665e", "evidence": [ { "raw_evidence": [ "We choose the following three models as the baselines:", "K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.", "LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.", "DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration." ], "highlighted_evidence": [ "We choose the following three models as the baselines:\n\nK-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.\n\nLEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. 
It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.\n\nDPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration." ] }, { "raw_evidence": [ "We choose the following three models as the baselines:", "K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.", "LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.", "DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration.", "It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the for Google datasets (long text)." ], "highlighted_evidence": [ "We choose the following three models as the baselines:\n\nK-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.", "LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. ", "DPEMM BIBREF14 is a non-parametric mixture model for event extraction. ", "It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. ", "Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM." ] } ] }, { "question": "What datasets are used?", "answers": [ { "answer": "FSD BIBREF12 , Twitter, and Google datasets", "type": "extractive" }, { "answer": "FSD dataset, Twitter dataset, Google dataset", "type": "extractive" } ], "q_uid": "56b034c303983b2e276ed6518d6b080f7b8abe6a", "evidence": [ { "raw_evidence": [ "To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. 
Details are summarized below:" ], "highlighted_evidence": [ "To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed." ] }, { "raw_evidence": [ "To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below:" ], "highlighted_evidence": [ "To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. " ] } ] }, { "question": "What alternative to Gibbs sampling is used?", "answers": [ { "answer": "generator network to capture the event-related patterns", "type": "extractive" } ], "q_uid": "15e481e668114e4afe0c78eefb716ffe1646b494", "evidence": [ { "raw_evidence": [ "Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM. Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events." ], "highlighted_evidence": [ "(2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration;" ] } ] }, { "question": "How does this model overcome the assumption that all words in a document are generated from a single event?", "answers": [ { "answer": "flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions, supervision signal provided by the discriminator will help generator to capture the event-related patterns", "type": "extractive" }, { "answer": "by learning a projection function between the document-event distribution and four event related word distributions ", "type": "abstractive" } ], "q_uid": "3d7a982c718ea6bc7e770d8c5da564fbb9d11951", "evidence": [ { "raw_evidence": [ "To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. 
The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events." ], "highlighted_evidence": [ "Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns." ] }, { "raw_evidence": [ "To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM) in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 . However, long texts such news articles often describe multiple events which clearly violates this assumption; (2) During the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assigns an event for each document. This is time consuming and takes long time to converge.", "To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. 
Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events." ], "highlighted_evidence": [ "However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 .", "To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). " ] } ] } ], "1612.08205": [ { "question": "How many users do they look at?", "answers": [ { "answer": "22,880 users", "type": "extractive" }, { "answer": "20,000", "type": "extractive" } ], "q_uid": "692c9c5d9ff9cd3e0ce8b5e4fa68dda9bd23dec1", "evidence": [ { "raw_evidence": [ "The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset." ], "highlighted_evidence": [ "The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset." ] }, { "raw_evidence": [ "Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests." ], "highlighted_evidence": [ " First, we build a large, industry-annotated dataset that contains over 20,000 blog users. " ] } ] }, { "question": "What do they mean by a person's industry?", "answers": [ { "answer": "the aggregate of enterprises in a particular field", "type": "extractive" }, { "answer": "the aggregate of enterprises in a particular field", "type": "extractive" } ], "q_uid": "935d6a6187e6a0c9c0da8e53a42697f853f5c248", "evidence": [ { "raw_evidence": [ "This paper explores the potential of predicting a user's industry \u2013the aggregate of enterprises in a particular field\u2013 by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach." ], "highlighted_evidence": [ "This paper explores the potential of predicting a user's industry \u2013the aggregate of enterprises in a particular field\u2013 by identifying industry indicative text in social media. 
" ] }, { "raw_evidence": [ "This paper explores the potential of predicting a user's industry \u2013the aggregate of enterprises in a particular field\u2013 by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach." ], "highlighted_evidence": [ "This paper explores the potential of predicting a user's industry \u2013the aggregate of enterprises in a particular field\u2013 by identifying industry indicative text in social media. " ] } ] }, { "question": "What model did they use for their system?", "answers": [ { "answer": "AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier", "type": "extractive" } ], "q_uid": "3b77b4defc8a139992bd0b07b5cf718382cb1a5f", "evidence": [ { "raw_evidence": [ "After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3 , we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (173% INLINEFORM0 improvement)." ], "highlighted_evidence": [ "After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. " ] } ] }, { "question": "What social media platform did they look at?", "answers": [ { "answer": " http://www.blogger.com", "type": "extractive" }, { "answer": "http://www.blogger.com", "type": "extractive" } ], "q_uid": "01a41c0a4a7365cd37d28690735114f2ff5229f2", "evidence": [ { "raw_evidence": [ "We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed." ], "highlighted_evidence": [ "We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed." ] }, { "raw_evidence": [ "We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed." ], "highlighted_evidence": [ "We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed." ] } ] } ], "1907.09369": [ { "question": "What baseline is used?", "answers": [ { "answer": " Wang et al. BIBREF21, paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets", "type": "extractive" }, { "answer": "Wang et al. 
, maximum entropy classifier with bag of words model", "type": "extractive" } ], "q_uid": "de3b1145cb4111ea2d4e113f816b537d052d9814", "evidence": [ { "raw_evidence": [ "We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.", "In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll \u2192 will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags.", "In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels." ], "highlighted_evidence": [ "We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.\n\nIn the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. ", "In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels." ] }, { "raw_evidence": [ "In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. 
To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll \u2192 will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags.", "In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels." ], "highlighted_evidence": [ "In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. ", "In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. " ] } ] }, { "question": "What data is used in experiments?", "answers": [ { "answer": "Wang et al., CrowdFlower dataset ", "type": "extractive" }, { "answer": " tweet dataset created by Wang et al. , CrowdFlower dataset", "type": "extractive" } ], "q_uid": "132f752169adf6dc5ade3e4ca773c11044985da4", "evidence": [ { "raw_evidence": [ "We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.", "In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. 
They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll \u2192 will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags." ], "highlighted_evidence": [ "We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.", "In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. " ] }, { "raw_evidence": [ "There are not many free datasets available for emotion classification. Most datasets are subject-specific (i.e. news headlines, fairy tails, etc.) and not big enough to train deep neural networks. Here we use the tweet dataset created by Wang et al. As mentioned in the previous section, they have collected over 2 million tweets by using hashtags for labeling their data. They created a list of words associated with 7 emotions (six emotions from BIBREF34 love, joy, surprise, anger, sadness fear plus thankfulness (See Table TABREF3 ), and used the list as their guide to label the sampled tweets with acceptable quality.", "In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels." ], "highlighted_evidence": [ "Here we use the tweet dataset created by Wang et al. ", "Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels." ] } ] }, { "question": "What meaningful information does the GRU model capture, which traditional ML models do not?", "answers": [ { "answer": " the context and sequential nature of the text", "type": "extractive" }, { "answer": "information about the context and sequential nature of the text", "type": "extractive" } ], "q_uid": "1d9aeeaa6efa1367c22be0718f5a5635a73844bd", "evidence": [ { "raw_evidence": [ "Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for the verity of tasks in NLP, especially classification tasks. And as our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on bidirectional RNN, specifically a bidirectional GRU network to analyze the tweets." 
], "highlighted_evidence": [ "Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature." ] }, { "raw_evidence": [ "Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for the verity of tasks in NLP, especially classification tasks. And as our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on bidirectional RNN, specifically a bidirectional GRU network to analyze the tweets." ], "highlighted_evidence": [ "Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature." ] } ] } ], "1911.07555": [ { "question": "What is the approach of previous work?", "answers": [ { "answer": "'shallow' naive Bayes, SVM, hierarchical stacked classifiers, bidirectional recurrent neural networks", "type": "extractive" }, { "answer": "BIBREF11 that uses a character level n-gram language model, 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15, BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features, The winning approach for DSL 2015 used an ensemble naive Bayes classifier, The fasttext classifier BIBREF17, hierarchical stacked classifiers (including lexicons), bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24", "type": "extractive" } ], "q_uid": "012b8a89aea27485797373adbcda32f16f9d7b54", "evidence": [ { "raw_evidence": [ "Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .", "Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data.", "Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. 
In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available." ], "highlighted_evidence": [ "Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .", "Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0.", "Researchers have investigated deeper LID models li" ] }, { "raw_evidence": [ "Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .", "Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data.", "Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available." ], "highlighted_evidence": [ "Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. 
The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .", "Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0.", "Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task." ] } ] }, { "question": "Is the lexicon the same for all languages?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "c598028815066089cc1e131b96d6966d2610467a", "evidence": [ { "raw_evidence": [ "The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." ], "highlighted_evidence": [ "The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." ] }, { "raw_evidence": [ "The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." ], "highlighted_evidence": [ "The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." ] } ] }, { "question": "How do they obtain the lexicon?", "answers": [ { "answer": "built over all the data and therefore includes the vocabulary from both the training and testing sets", "type": "extractive" } ], "q_uid": "ca4daafdc23f4e23d933ebabe682e1fe0d4b95ed", "evidence": [ { "raw_evidence": [ "The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." ], "highlighted_evidence": [ "The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets." 
] } ] }, { "question": "What evaluation metric is used?", "answers": [ { "answer": "average classification accuracy", "type": "extractive" }, { "answer": "average classification accuracy, execution performance", "type": "extractive" } ], "q_uid": "0ab3df10f0b7203e859e9b62ffa7d6d79ffbbe50", "evidence": [ { "raw_evidence": [ "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8." ], "highlighted_evidence": [ "The average classification accuracy results are summarised in Table TABREF9." ] }, { "raw_evidence": [ "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.", "The execution performance of some of the LID implementations are shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest. The implementation in BIBREF8 makes use of un-hashed feature representations which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU." ], "highlighted_evidence": [ "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label.", "The execution performance of some of the LID implementations are shown in Table TABREF10." ] } ] }, { "question": "Which languages are similar to each other?", "answers": [ { "answer": "Nguni languages (zul, xho, nbl, ssw), Sotho languages (nso, sot, tsn)", "type": "extractive" }, { "answer": "The Nguni languages are similar to each other, The same is true of the Sotho languages", "type": "extractive" } ], "q_uid": "92dfacbbfa732ecea006e251be415a6f89fb4ec6", "evidence": [ { "raw_evidence": [ "Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages." ], "highlighted_evidence": [ "These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages.", "Similar languages are to each other are:\n- Nguni languages: zul, xho, nbl, ssw\n- Sotho languages: nso, sot, tsn" ] }, { "raw_evidence": [ "Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). 
The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages." ], "highlighted_evidence": [ "Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages." ] } ] }, { "question": "Which datasets are employed for South African languages LID?", "answers": [ { "answer": "DSL 2015, DSL 2017, JW300 parallel corpus , NCHLT text corpora", "type": "extractive" } ], "q_uid": "c8541ff10c4e0c8e9eb37d9d7ea408d1914019a9", "evidence": [ { "raw_evidence": [ "The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.", "The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle . The DSL datasets, like other LID datasets, consists of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language.", "The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language) which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often used sources of data.", "The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contains enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9." ], "highlighted_evidence": [ "The focus of this section is on recently published datasets and LID research applicable to the South African context. 
An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.\n\nThe datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle .", "The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average.", "The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages.", "The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8." ] } ] } ], "1503.00841": [ { "question": "What background knowledge do they leverage?", "answers": [ { "answer": "labeled features", "type": "extractive" }, { "answer": "labelled features, which are words whose presence strongly indicates a specific class or topic", "type": "abstractive" } ], "q_uid": "50be4a737dc0951b35d139f51075011095d77f2a", "evidence": [ { "raw_evidence": [ "We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification." ], "highlighted_evidence": [ "We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge." ] }, { "raw_evidence": [ "We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification.", "As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we human provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ).", "We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features." ], "highlighted_evidence": [ "We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier.", "As described in BIBREF0 , there are two ways to obtain labeled features.", "We use bag-of-words feature and remove stopwords in the preprocess stage. 
", "We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool.", "The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ )." ] } ] }, { "question": "What are the three regularization terms?", "answers": [ { "answer": "a regularization term associated with neutral features, the maximum entropy of class distribution regularization term, the KL divergence between reference and predicted class distribution", "type": "extractive" }, { "answer": "a regularization term associated with neutral features, the maximum entropy of class distribution, KL divergence between reference and predicted class distribution", "type": "extractive" } ], "q_uid": "6becff2967fe7c5256fe0b00231765be5b9db9f1", "evidence": [ { "raw_evidence": [ "More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later." ], "highlighted_evidence": [ "More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution." ] }, { "raw_evidence": [ "More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later." ], "highlighted_evidence": [ "More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution." ] } ] }, { "question": "What NLP tasks do they consider?", "answers": [ { "answer": "text classification for themes including sentiment, web-page, science, medical and healthcare", "type": "abstractive" } ], "q_uid": "76121e359dfe3f16c2a352bd35f28005f2a40da3", "evidence": [ { "raw_evidence": [ "In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. 
We set $\\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria.", "We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features." ], "highlighted_evidence": [ " Last, we evaluate our approaches in 9 commonly used text classification datasets.", "We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare." ] } ] }, { "question": "How do they define robustness of a model?", "answers": [ { "answer": "ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced", "type": "abstractive" }, { "answer": "Low sensitivity to bias in prior knowledge", "type": "abstractive" } ], "q_uid": "02428a8fec9788f6dc3a86b5d5f3aa679935678d", "evidence": [ { "raw_evidence": [ "GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust.", "(a) We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1).", "Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 .", "Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive." ], "highlighted_evidence": [ "GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied.", "We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other.", "Our methods are also evaluated on datasets with different unbalanced class distributions. 
We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents.", "Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive." ] }, { "raw_evidence": [ "However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey so that we can provide a few number of feature words of baseball, but much less of hockey for a baseball-hockey classification task. Such prior knowledge may mislead the model with heavy bias to baseball. If the model cannot handle this situation appropriately, the performance may be undesirable.", "In this paper, we investigate into the problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distribution over output variables. Recall the example just mentioned, though we do not have enough knowledge to provide features for class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust." ], "highlighted_evidence": [ "However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge?", "The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical." ] } ] } ], "1804.11346": [ { "question": "Are the annotations automatic or manually created?", "answers": [ { "answer": "Automatic", "type": "abstractive" }, { "answer": "We performed the annotation with freely available tools for the Portuguese language.", "type": "extractive" } ], "q_uid": "7793805982354947ea9fc742411bec314a6998f6", "evidence": [ { "raw_evidence": [ "We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ." ], "highlighted_evidence": [ " We performed the annotation with freely available tools for the Portuguese language." ] }, { "raw_evidence": [ "We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. 
For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ." ], "highlighted_evidence": [ "We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ." ] } ] } ], "1611.08661": [ { "question": "What neural models are used to encode the text?", "answers": [ { "answer": "NBOW, LSTM, attentive LSTM", "type": "abstractive" }, { "answer": "neural bag-of-words (NBOW) model, bidirectional long short-term memory network (LSTM), attention-based encoder", "type": "extractive" } ], "q_uid": "b49598b05358117ab1471b8ebd0b042d2f04b2a4", "evidence": [ { "raw_evidence": [ "In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions." ], "highlighted_evidence": [ "In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions." ] }, { "raw_evidence": [ "A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.", "To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description.", "Given a relation for an entity, not all of words/phrases in its text description are useful to model a specific fact. Some of them may be important for the given relation, but may be useless for other relations. Therefore, we introduce an attention mechanism BIBREF20 to utilize an attention-based encoder that constructs contextual text encodings according to different relations." ], "highlighted_evidence": [ "A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.", "To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description.", "Therefore, we introduce an attention mechanism BIBREF20 to utilize an attention-based encoder that constructs contextual text encodings according to different relations." 
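The neural bag-of-words (NBOW) encoder quoted above is just a sum over the constituent word vectors. A minimal numpy sketch of that idea follows; the toy vocabulary and 4-dimensional embeddings are invented for illustration and are not taken from the paper.

```python
import numpy as np

def nbow_encode(tokens, embeddings):
    """NBOW: sum the word vectors of the tokens (out-of-vocabulary words are skipped)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(next(iter(embeddings.values())).shape)
    return np.sum(vecs, axis=0)

# Toy 4-dimensional embeddings, purely illustrative.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in ["a", "large", "feline", "mammal"]}
print(nbow_encode(["a", "large", "feline", "mammal"], emb))
```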
] } ] }, { "question": "What baselines are used for comparison?", "answers": [ { "answer": "TransE", "type": "extractive" }, { "answer": "TransE", "type": "extractive" } ], "q_uid": "932b39fd6c47c6a880621a62e6a978491d881d60", "evidence": [ { "raw_evidence": [ "Experimental results on both WN18 and FB15k are shown in Table 2 , where we use \u201cJointly(CBOW)\u201d, \u201cJointly(LSTM)\u201d and \u201cJointly(A-LSTM)\u201d to represent our jointly encoding models with CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE since that the score function of our models is based on TransE." ], "highlighted_evidence": [ "Our baseline is TransE since that the score function of our models is based on TransE." ] }, { "raw_evidence": [ "Experimental results on both WN18 and FB15k are shown in Table 2 , where we use \u201cJointly(CBOW)\u201d, \u201cJointly(LSTM)\u201d and \u201cJointly(A-LSTM)\u201d to represent our jointly encoding models with CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE since that the score function of our models is based on TransE." ], "highlighted_evidence": [ "Our baseline is TransE since that the score function of our models is based on TransE." ] } ] }, { "question": "What datasets are used to evaluate this paper?", "answers": [ { "answer": "WordNet BIBREF0, Freebase BIBREF1, WN18 (a subset of WordNet) BIBREF24 , FB15K (a subset of Freebase) BIBREF2", "type": "extractive" } ], "q_uid": "b36f867fcda5ad62c46d23513369337352aa01d2", "evidence": [ { "raw_evidence": [ "We use two popular knowledge bases: WordNet BIBREF0 and Freebase BIBREF1 in this paper. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2 since their text descriptions are easily publicly available. Table 1 lists statistics of the two datasets." ], "highlighted_evidence": [ "We use two popular knowledge bases: WordNet BIBREF0 and Freebase BIBREF1 in this paper. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2 since their text descriptions are easily publicly available. Table 1 lists statistics of the two datasets." ] } ] } ], "1910.07601": [ { "question": "Which approach out of two proposed in the paper performed better in experiments?", "answers": [ { "answer": "CJFA encoder ", "type": "extractive" }, { "answer": "CJFA encoder", "type": "extractive" } ], "q_uid": "c6a0b9b5dabcefda0233320dd1548518a0ae758e", "evidence": [ { "raw_evidence": [ "Table TABREF17 shows phone classification and speaker recognition results for the three model configurations: the VAE baseline, the CJFS encoder and the CJFA encoder. In our experiments the window size was set to 30 frames, namely 10 frames for the target and 10 frames for left and right neighbours, and an embedding dimension of 150. This was used for both CJFS and CJFA models alike. Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks. The CJFA encoder performs better on all tasks than the VAE baseline by a significant margin. It is noteworthy that performance on Task b is generally significantly lower than for Task a, for reasons of training overlap but also smaller training set sizes." ], "highlighted_evidence": [ "Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. 
These results are replicated for speaker recognition tasks." ] }, { "raw_evidence": [ "Table TABREF17 shows phone classification and speaker recognition results for the three model configurations: the VAE baseline, the CJFS encoder and the CJFA encoder. In our experiments the window size was set to 30 frames, namely 10 frames for the target and 10 frames for left and right neighbours, and an embedding dimension of 150. This was used for both CJFS and CJFA models alike. Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks. The CJFA encoder performs better on all tasks than the VAE baseline by a significant margin. It is noteworthy that performance on Task b is generally significantly lower than for Task a, for reasons of training overlap but also smaller training set sizes." ], "highlighted_evidence": [ "Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks." ] } ] }, { "question": "What classification baselines are used for comparison?", "answers": [ { "answer": "VAE", "type": "extractive" }, { "answer": "VAE based phone classification", "type": "extractive" } ], "q_uid": "1e185a3b8cac1da939427b55bf1ba7e768c5dae4", "evidence": [ { "raw_evidence": [ "Work on VAE in BIBREF17 to learn acoustic embeddings conducted experiments using the TIMIT data set. In particular the tasks of phone classification and speaker recognition where chosen. As work here is an extension of such work we we follow the experimentation, however with significant extensions (see Section SECREF13). With guidance from the authors of the original workBIBREF17 our own implementation of VAE was created and compared with the published performance - yielding near identical results. This implementation then was also used as the basis for CJFS and CJFA, as introduced in \u00a7 SECREF6.", "For the assessment of embedded vector quality our work also follows the same task types, namely phone classification and speaker recognition (details in \u00a7SECREF13), with identical task implementations as in the reference paper. It is important to note that phone classification differs from the widely reported phone recognition experiments on TIMIT. Classification uses phone boundaries which are assumed to be known. However, no contextual information is available, which is typically used in the recognition setups, by means of triphone models, or bigram language models. Therefore the task is often more difficult than recognition. The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work." ], "highlighted_evidence": [ "Work on VAE in BIBREF17 to learn acoustic embeddings conducted experiments using the TIMIT data set.", "The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work." 
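The 30-frame windowing described in this entry (a 10-frame target flanked by 10 left and 10 right neighbour frames) can be sketched as below. The 80-dimensional filter-bank input, the `make_windows` name and the 10-frame hop are assumptions for illustration; only the window sizes come from the quoted text.

```python
import numpy as np

def make_windows(features, target_len=10, context_len=10):
    """Slide over a (T, dim) feature matrix and yield (left, target, right) frame chunks."""
    total = context_len + target_len + context_len          # 30 frames per window
    for start in range(0, features.shape[0] - total + 1, target_len):  # hop is assumed
        left = features[start:start + context_len]
        target = features[start + context_len:start + context_len + target_len]
        right = features[start + context_len + target_len:start + total]
        yield left, target, right

# One dummy utterance: 200 frames of 80 Mel filter-bank coefficients.
utt = np.random.randn(200, 80)
print(sum(1 for _ in make_windows(utt)))   # number of 30-frame windows
```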
] }, { "raw_evidence": [ "For the assessment of embedded vector quality our work also follows the same task types, namely phone classification and speaker recognition (details in \u00a7SECREF13), with identical task implementations as in the reference paper. It is important to note that phone classification differs from the widely reported phone recognition experiments on TIMIT. Classification uses phone boundaries which are assumed to be known. However, no contextual information is available, which is typically used in the recognition setups, by means of triphone models, or bigram language models. Therefore the task is often more difficult than recognition. The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work." ], "highlighted_evidence": [ "The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work." ] } ] }, { "question": "What TIMIT datasets are used for testing?", "answers": [ { "answer": "Once split into 8 subsets (A-H), the test set used are blocks D+H and blocks F+H", "type": "abstractive" }, { "answer": " this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each", "type": "extractive" } ], "q_uid": "26e2d4d0e482e6963a76760323b8e1c26b6eee91", "evidence": [ { "raw_evidence": [ "In order to achieve these configuration the TIMIT data was split. Fig. FIGREF12 illustrates the split of the data into 8 subsets (A\u2013H). The TIMIT dataset contains speech from 462 speakers in training and 168 speakers in the test set, with 8 utterances for each speaker. The TIMIT training and test set are split into 8 blocks, where each block contains 2 utterances per speaker, randomly chosen. Thus each block A,B,C,D contains data from 462 speakers with 924 utterances taken from the training sets, and each block E,F,G,H contains speech from 168 test set speakers with 336 utterances.", "For Task a training of embeddings and the classifier is identical, namely consisting of data from blocks (A+B+C+E+F+G). The test data is the remainder, namely blocks (D+H). For Task b the training of embeddings and classifiers uses (A+B+E+F) and (C+G) respectively, while again using (D+H) for test. Task c keeps both separate: embeddings are trained on (A+B+C+D), classifiers on (E+G) and tests are conducted on (F+H). Note that H is part of all tasks, and that Task c is considerably easier as the number of speakers to separate is only 168, although training conditions are more difficult." ], "highlighted_evidence": [ "In order to achieve these configuration the TIMIT data was split. Fig. FIGREF12 illustrates the split of the data into 8 subsets (A\u2013H). The TIMIT dataset contains speech from 462 speakers in training and 168 speakers in the test set, with 8 utterances for each speaker. The TIMIT training and test set are split into 8 blocks, where each block contains 2 utterances per speaker, randomly chosen. 
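A minimal sketch of the 8-block split quoted above: each speaker's 8 utterances are shuffled and dealt, 2 per block, into blocks A-D on the training side (and likewise into E-H on the test side). The dictionary-of-speakers input format and the seed are assumptions for illustration.

```python
import random

def split_into_blocks(utts_by_speaker, block_names, utts_per_block=2, seed=0):
    """Randomly assign each speaker's utterances to blocks, 2 utterances per block."""
    rng = random.Random(seed)
    blocks = {name: [] for name in block_names}
    for speaker, utts in utts_by_speaker.items():
        utts = list(utts)
        rng.shuffle(utts)
        for i, name in enumerate(block_names):
            chunk = utts[i * utts_per_block:(i + 1) * utts_per_block]
            blocks[name].extend((speaker, u) for u in chunk)
    return blocks

# 462 training speakers x 8 utterances -> blocks A-D with 924 utterances each.
train = {f"spk{i}": [f"spk{i}_utt{j}" for j in range(8)] for i in range(462)}
blocks = split_into_blocks(train, ["A", "B", "C", "D"])
print({k: len(v) for k, v in blocks.items()})
```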
Thus each block A,B,C,D contains data from 462 speakers with 924 utterances taken from the training sets, and each block E,F,G,H contains speech from 168 test set speakers with 336 utterances.\n\nFor Task a training of embeddings and the classifier is identical, namely consisting of data from blocks (A+B+C+E+F+G). The test data is the remainder, namely blocks (D+H). For Task b the training of embeddings and classifiers uses (A+B+E+F) and (C+G) respectively, while again using (D+H) for test. Task c keeps both separate: embeddings are trained on (A+B+C+D), classifiers on (E+G) and tests are conducted on (F+H). Note that H is part of all tasks, and that Task c is considerably easier as the number of speakers to separate is only 168, although training conditions are more difficult." ] }, { "raw_evidence": [ "Taking the VAE experiments as baseline, the TIMIT data is used for this workBIBREF25. TIMIT contains studio recordings from a large number of speakers with detailed phoneme segment information. Work in this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each. There is no speaker overlap between training and test set, which comprise of 462 and 168 speakers, respectively. All work presented here use of 80 dimensional Mel-scale filter bank coefficients." ], "highlighted_evidence": [ "Taking the VAE experiments as baseline, the TIMIT data is used for this workBIBREF25. TIMIT contains studio recordings from a large number of speakers with detailed phoneme segment information. Work in this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each." ] } ] } ], "1909.00175": [ { "question": "What datasets are used in evaluation?", "answers": [ { "answer": "The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun.", "type": "extractive" }, { "answer": "A homographic and heterographic benchmark datasets by BIBREF9.", "type": "abstractive" } ], "q_uid": "f56d07f73b31a9c72ea737b40103d7004ef6a079", "evidence": [ { "raw_evidence": [ "We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end." ], "highlighted_evidence": [ "We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end." ] }, { "raw_evidence": [ "We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. 
To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end." ], "highlighted_evidence": [ "We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. " ] } ] }, { "question": "What is the tagging scheme employed?", "answers": [ { "answer": "A new tagging scheme that tags the words before and after the pun as well as the pun words.", "type": "abstractive" }, { "answer": "a new tagging scheme consisting of three tags, namely { INLINEFORM0 }", "type": "extractive" } ], "q_uid": "38e4aaeabf06a63a067b272f8950116733a7895c", "evidence": [ { "raw_evidence": [ "The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.", "INLINEFORM0 tag indicates that the current word appears before the pun in the given context.", "INLINEFORM0 tag highlights the current word is a pun.", "INLINEFORM0 tag indicates that the current word appears after the pun." ], "highlighted_evidence": [ "To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.\n\nINLINEFORM0 tag indicates that the current word appears before the pun in the given context.\n\nINLINEFORM0 tag highlights the current word is a pun.\n\nINLINEFORM0 tag indicates that the current word appears after the pun." ] }, { "raw_evidence": [ "The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.", "INLINEFORM0 tag indicates that the current word appears before the pun in the given context.", "INLINEFORM0 tag highlights the current word is a pun.", "INLINEFORM0 tag indicates that the current word appears after the pun.", "We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text." ], "highlighted_evidence": [ "The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.\n\nINLINEFORM0 tag indicates that the current word appears before the pun in the given context.\n\nINLINEFORM0 tag highlights the current word is a pun.\n\nINLINEFORM0 tag indicates that the current word appears after the pun.\n\nWe empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text." 
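The three-tag scheme described above (one tag for words before the pun, one for the pun word, one for words after it) turns pun location into sequence tagging with at most one pun per context. The concrete tag names `B`, `P`, `A` below are placeholders, since the actual symbols are masked as INLINEFORM0 in this dump, and the all-`B` convention for pun-free contexts is an assumption for illustration.

```python
def tag_context(tokens, pun_index=None, before="B", pun="P", after="A"):
    """Tag every token relative to a single pun position (None = no pun in the context)."""
    if pun_index is None:
        return [before] * len(tokens)      # assumed convention for pun-free contexts
    tags = []
    for i, _ in enumerate(tokens):
        if i < pun_index:
            tags.append(before)
        elif i == pun_index:
            tags.append(pun)
        else:
            tags.append(after)
    return tags

print(tag_context("i used to be a banker but i lost interest".split(), pun_index=9))
```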
] } ] } ], "1910.06036": [ { "question": "How they extract \"structured answer-relevant relation\"?", "answers": [ { "answer": "Using the OpenIE toolbox and applying heuristic rules to select the most relevant relation.", "type": "abstractive" }, { "answer": "off-the-shelf toolbox of OpenIE", "type": "extractive" } ], "q_uid": "1d197cbcac7b3f4015416f0152a6692e881ada6c", "evidence": [ { "raw_evidence": [ "To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (\u201cThe daily mean temperature in January\u201d, \u201cis\u201d, \u201c32.6$^\\circ $F (0.3$^\\circ $C)\u201d) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules." ], "highlighted_evidence": [ "Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules." ] }, { "raw_evidence": [ "We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. Here we select the most informative relation according to three criteria in the order of descending importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing maximum non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases, OpenIE cannot extract any relation, we treat the sentence itself as the to the point context." ], "highlighted_evidence": [ "We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence." 
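The three selection criteria quoted above (answer-token overlap first, then OpenIE confidence, then non-stop-word count) amount to a lexicographic ranking over the candidate relations. A small sketch under that reading follows; the relation tuples, the tiny stop-word list and the whitespace tokenisation are illustrative simplifications, not code from the paper.

```python
STOPWORDS = {"the", "in", "is", "of", "a", "an", "to"}   # tiny illustrative list

def select_relation(relations, answer):
    """Pick the most answer-relevant n-ary relation.

    `relations` is a list of (args, confidence) pairs, where `args` is the tuple of
    relation arguments later joined into the to-the-point context. Criteria in
    descending importance: answer-token overlap, OpenIE confidence, non-stop-word count.
    """
    answer_tokens = set(answer.lower().split())

    def key(rel):
        args, confidence = rel
        tokens = " ".join(args).lower().split()
        overlap = len(answer_tokens & set(tokens))
        content = sum(1 for t in tokens if t not in STOPWORDS)
        return (overlap, confidence, content)

    return max(relations, key=key)

candidates = [
    (("The daily mean temperature in January", "is", "32.6 F"), 0.91),
    (("January", "is", "the coldest month"), 0.88),
]
print(select_relation(candidates, "32.6 F"))
```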
] } ] }, { "question": "What metrics do they use?", "answers": [ { "answer": "BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19", "type": "extractive" }, { "answer": "BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4), METEOR (MET), ROUGE-L (R-L)", "type": "extractive" } ], "q_uid": "477d9d3376af4d938bb01280fe48d9ae7c9cf7f7", "evidence": [ { "raw_evidence": [ "We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC." ], "highlighted_evidence": [ "We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC." ] }, { "raw_evidence": [ "We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC." ], "highlighted_evidence": [ "We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19." ] } ] }, { "question": "On what datasets are experiments performed?", "answers": [ { "answer": "SQuAD", "type": "extractive" }, { "answer": "SQuAD", "type": "extractive" } ], "q_uid": "f225a9f923e4cdd836dd8fe097848da06ec3e0cc", "evidence": [ { "raw_evidence": [ "We conduct experiments on the SQuAD dataset BIBREF3. It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits by following Zhou2017NeuralQG and Du2017LearningTA . In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats SQuAD development set as its development set and splits original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapped non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27." ], "highlighted_evidence": [ "We conduct experiments on the SQuAD dataset BIBREF3." ] }, { "raw_evidence": [ "Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer." ], "highlighted_evidence": [ "In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. 
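The paper evaluates with the script released by Chen2015MicrosoftCC; as a rough stand-in, BLEU-1 through BLEU-4 can be approximated with NLTK as sketched below. The toy reference/hypothesis pair is invented, and NLTK's smoothed corpus BLEU will not match the official evaluation script exactly.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["what", "is", "the", "daily", "mean", "temperature", "in", "january"]]]
hypotheses = [["what", "is", "the", "mean", "temperature", "in", "january"]]

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))
    score = corpus_bleu(references, hypotheses, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```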
" ] } ] } ], "2002.01984": [ { "question": "What was the baseline model?", "answers": [ { "answer": "by answering always YES (in batch 2 and 3) ", "type": "extractive" } ], "q_uid": "ff338921e34c15baf1eae0074938bf79ee65fdd2", "evidence": [ { "raw_evidence": [ "We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: Given a question we iterate through the candidate sentences and try to find any candidate sentence is contradicting the question (with confidence over 50%), if so 'No' is returned as answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioAsq baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used 'AllenNlp' BIBREF13 entailment library to find entailment of the candidate sentences with question." ], "highlighted_evidence": [ "We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment." ] } ] }, { "question": "What dataset did they use?", "answers": [ { "answer": "BioASQ dataset", "type": "abstractive" }, { "answer": "A dataset provided by BioASQ consisting of questions, gold standard documents, snippets, concepts and ideal and ideal answers.", "type": "abstractive" } ], "q_uid": "e807d347742b2799bc347c0eff19b4c270449fee", "evidence": [ { "raw_evidence": [ "BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure." ], "highlighted_evidence": [ "BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types." ] }, { "raw_evidence": [ "BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure." 
], "highlighted_evidence": [ "BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. " ] } ] }, { "question": "What was their highest recall score?", "answers": [ { "answer": "0.7033", "type": "extractive" }, { "answer": "0.7033", "type": "extractive" } ], "q_uid": "31b92c03d5b9be96abcc1d588d10651703aff716", "evidence": [ { "raw_evidence": [ "Overall, we followed the similar strategy that's been followed for Factoid Question Answering task. We started our experiment with batch 2, where we submitted 20 best answers (with context from snippets). Starting with batch 3, we performed post processing: once models generate answer predictions (n-best predictions), we do post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved highest recall score of \u20180.7033\u2019 but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures." ], "highlighted_evidence": [ "In test batch 4, our system (called FACTOIDS) achieved highest recall score of \u20180.7033\u2019 but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures." ] }, { "raw_evidence": [ "Overall, we followed the similar strategy that's been followed for Factoid Question Answering task. We started our experiment with batch 2, where we submitted 20 best answers (with context from snippets). Starting with batch 3, we performed post processing: once models generate answer predictions (n-best predictions), we do post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved highest recall score of \u20180.7033\u2019 but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures." ], "highlighted_evidence": [ " In test batch 4, our system (called FACTOIDS) achieved highest recall score of \u20180.7033\u2019 but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures." ] } ] } ], "1909.00326": [ { "question": "Does their model suffer exhibit performance drops when incorporating word importance?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "384bf1f55c34b36cb03f916f50bbefade6c86a75", "evidence": [ { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "How do they measure which words are under-translated by NMT models?", "answers": [ { "answer": "They measured the under-translated words with low word importance score as calculated by Attribution.\nmethod", "type": "abstractive" }, { "answer": "we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair", "type": "extractive" } ], "q_uid": "aef607d2ac46024be17b1ddd0ed3f13378c563a6", "evidence": [ { "raw_evidence": [ "In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. 
Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total.", "Table TABREF32 lists the accuracy of detecting under-translation errors by comparing words of least importance and human-annotated under-translated words. As seen, our Attribution method consistently and significantly outperforms both Erasure and Attention approaches. By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. Although the accuracy is not high, it is worth noting that our under-translation method is very simple and straightforward. This is potentially useful for debugging NMT models, e.g., automatic post-editing with constraint decoding BIBREF26, BIBREF27." ], "highlighted_evidence": [ "Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. ", "By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. " ] }, { "raw_evidence": [ "In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total." ], "highlighted_evidence": [ " Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total." ] } ] }, { "question": "How do their models decide how much improtance to give to the output words?", "answers": [ { "answer": "Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. ", "type": "extractive" }, { "answer": "They compute the gradient of the output at each time step with respect to the input words to decide the importance.", "type": "abstractive" } ], "q_uid": "93beae291b455e5d3ecea6ac73b83632a3ae7ec7", "evidence": [ { "raw_evidence": [ "Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. 
To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function. Figure FIGREF13 illustrates an example of the calculated word importance and the contribution matrix, where an English sentence is translated into a French sentence using the Transformer model. A negative contribution value indicates that the input word has negative effects on the output word." ], "highlighted_evidence": [ "Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function." ] }, { "raw_evidence": [ "Formally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{
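The aggregation described in this last entry (sum each input word's contributions over all output words, then normalize with a softmax over the M-by-N contribution matrix) is a few lines of numpy. The 3x4 matrix below is invented for illustration; the final line only hints at the under-translation use discussed above, where the least important input words are flagged as candidates.

```python
import numpy as np

def word_importance(contribution_matrix):
    """contribution_matrix: (M, N) contributions of M input words to N output words.

    Sum over the output axis, then softmax-normalize to get per-input-word importance.
    """
    totals = contribution_matrix.sum(axis=1)
    exp = np.exp(totals - totals.max())        # numerically stable softmax
    return exp / exp.sum()

# Toy 3-input-word x 4-output-word contribution matrix (values are invented).
C = np.array([[0.6, 0.1, 0.0, 0.2],
              [0.1, 0.5, 0.4, 0.1],
              [-0.2, 0.0, 0.1, 0.0]])
print(word_importance(C))                 # importance scores, sum to 1
print(int(word_importance(C).argmin()))   # least important input position, under-translation candidate
```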