{ "1912.01214": [ { "question": "what language pairs are explored?", "answers": [ { "answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", "type": "abstractive" }, { "answer": "French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation", "type": "extractive" } ], "q_uid": "5eda469a8a77f028d0c5f1acd296111085614537", "evidence": [ { "raw_evidence": [ "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "FLOAT SELECTED: Table 1: Data Statistics." ], "highlighted_evidence": [ "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. ", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. ", "FLOAT SELECTED: Table 1: Data Statistics." ] }, { "raw_evidence": [ "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. 
For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus." ], "highlighted_evidence": [ "For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation." ] } ] } ], "1801.05147": [ { "question": "What accuracy does the proposed system achieve?", "answers": [ { "answer": "F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ", "type": "abstractive" }, { "answer": "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)", "type": "abstractive" } ], "q_uid": "ef4dba073d24042f24886580ae77add5326f2130", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets." 
] } ] } ], "1704.06194": [ { "question": "On which benchmarks they achieve the state of the art?", "answers": [ { "answer": "SimpleQuestions, WebQSP", "type": "extractive" }, { "answer": "WebQSP, SimpleQuestions", "type": "extractive" } ], "q_uid": "9ee07edc371e014df686ced4fb0c3a7b9ce3d5dc", "evidence": [ { "raw_evidence": [ "Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section \"Relation Detection Results\" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results." ], "highlighted_evidence": [ "As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results." ] }, { "raw_evidence": [ "Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)." ], "highlighted_evidence": [ "The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)." ] } ] } ], "1909.00512": [ { "question": "How do they calculate a static embedding for each word?", "answers": [ { "answer": "They use the first principal component of a word's contextualized representation in a given layer as its static embedding.", "type": "abstractive" }, { "answer": " by taking the first principal component (PC) of its contextualized representations in a given layer", "type": "extractive" } ], "q_uid": "891c2001d6baaaf0da4e65b647402acac621a7d2", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word\u2019s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT\u2019s contextualized representations often outperform GloVe and FastText vectors." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word\u2019s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT\u2019s contextualized representations often outperform GloVe and FastText vectors." ] }, { "raw_evidence": [ "As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we plot the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0 MSR analogy solving BIBREF26, BLESS BIBREF27 and AP BIBREF28. We leave out layers 3 - 10 in Table TABREF34 because their performance is between those of Layers 2 and 11." ], "highlighted_evidence": [ "As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. " ] } ] } ], "2003.03106": [ { "question": "What is the performance of BERT on the task?", "answers": [ { "answer": "F1 scores are:\nNUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMEDDOCAN: Detection(0.972), Classification (0.967)", "type": "abstractive" }, { "answer": "BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, Table ", "type": "extractive" } ], "q_uid": "66c96c297c2cffdf5013bab5e95b59101cb38655", "evidence": [ { "raw_evidence": [ "To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios \u2013detection and classification\u2013 are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. 
The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN" ], "highlighted_evidence": [ "To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios \u2013detection and classification\u2013 are shown in Table .", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN" ] }, { "raw_evidence": [ "In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:", "The results of the two MEDDOCAN scenarios \u2013detection and classification\u2013 are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN" ], "highlighted_evidence": [ "In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3).", "The results of the two MEDDOCAN scenarios \u2013detection and classification\u2013 are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN" ] } ] } ], "1909.11687": [ { "question": "What state-of-the-art compression techniques were used in the comparison?", "answers": [ { "answer": "baseline without knowledge distillation (termed NoKD), Patient Knowledge Distillation (PKD)", "type": "extractive" }, { "answer": "NoKD, PKD, BERTBASE teacher model", "type": "extractive" } ], "q_uid": "efe9bad55107a6be7704ed97ecce948a8ca7b1d2", "evidence": [ { "raw_evidence": [ "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. 
For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states." ], "highlighted_evidence": [ "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines. We note that models trained with our proposed approaches perform strongly and consistently improve upon the identically parametrized NoKD baselines, indicating that the dual training and shared projection techniques are effective, without incurring significant losses against the BERTBASE teacher model. Comparing with the PKD baseline, our 192-dimensional models, achieving a higher compression rate than either of the PKD models, perform better than the 3-layer PKD baseline and are competitive with the larger 6-layer baseline on task accuracy while being nearly 5 times as small." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. 
For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines" ] } ] } ], "1804.05918": [ { "question": "What discourse relations does it work best/worst for?", "answers": [ { "answer": "explicit discourse relations", "type": "extractive" }, { "answer": "Best: Expansion (Exp). Worst: Comparison (Comp).", "type": "abstractive" } ], "q_uid": "f17ca24b135f9fe6bb25dc5084b13e1637ec7744", "evidence": [ { "raw_evidence": [ "The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.", "As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously." ], "highlighted_evidence": [ "the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved.", "Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. 
The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. " ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.", "The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.", "Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.", "In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp).", "However, the performance on the three small classes (Comp, Cont and Temp) remains low." ] } ] } ], "2002.01664": [ { "question": "Which 7 Indian languages do they experiment with?", "answers": [ { "answer": "Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam", "type": "abstractive" }, { "answer": "Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)", "type": "abstractive" } ], "q_uid": "75df70ce7aa714ec4c6456d0c51f82a16227f2cb", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Dataset" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Dataset" ] }, { "raw_evidence": [ "In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. 
The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset" ], "highlighted_evidence": [ "We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English.", "The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset" ] } ] } ], "1809.00540": [ { "question": "Do they use graphical models?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "a99fdd34422f4231442c220c97eafc26c76508dd", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold." ] }, { "raw_evidence": [], "highlighted_evidence": [] } ] }, { "question": "What metric is used for evaluation?", "answers": [ { "answer": "F1, precision, recall, accuracy", "type": "abstractive" }, { "answer": "Precision, recall, F1, accuracy", "type": "abstractive" } ], "q_uid": "d604f5fb114169f75f9a38fab18c1e866c5ac28b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in \u00a74, and are listed for both the title and the body." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in \u00a74, and are listed for both the title and the body." ] }, { "raw_evidence": [ "To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in \u00a7 SECREF19 . 
We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened.", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results." ], "highlighted_evidence": [ "To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in \u00a7 SECREF19 . ", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets." ] } ] } ], "2004.03354": [ { "question": "Which eight NER tasks did they evaluate on?", "answers": [ { "answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive" }, { "answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive" } ], "q_uid": "1d3e914d0890fc09311a70de0b20974bf7f0c9fe", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT\u2019s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT\u2019s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. 
Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT\u2019s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT\u2019s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row." ] } ] } ], "1611.04798": [ { "question": "Do they test their framework performance on commonly used language pairs, such as English-to-German?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "897ba53ef44f658c128125edd26abf605060fb13", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results of the English\u2192German systems in a simulated under-resourced scenario." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of the English\u2192German systems in a simulated under-resourced scenario." ] }, { "raw_evidence": [ "A standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data." ], "highlighted_evidence": [ " standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. 
We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data." ] } ] } ], "1809.01541": [ { "question": "What languages are evaluated?", "answers": [ { "answer": "German, English, Spanish, Finnish, French, Russian, Swedish.", "type": "abstractive" } ], "q_uid": "c32adef59efcb9d1a5b10e1d7c999a825c9e6d9a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Official shared task test set results." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Official shared task test set results." ] } ] }, { "question": "What is MSD prediction?", "answers": [ { "answer": "The task of predicting MSD tags: V, PST, V.PCTP, PASS.", "type": "abstractive" }, { "answer": "morphosyntactic descriptions (MSD)", "type": "extractive" } ], "q_uid": "32a3c248b928d4066ce00bbb0053534ee62596e7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS." ] }, { "raw_evidence": [ "There are two tracks of Task 2 of CoNLL\u2013SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances." ], "highlighted_evidence": [ "There are two tracks of Task 2 of CoNLL\u2013SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available." ] } ] } ], "1809.09194": [ { "question": "What other models do they compare to?", "answers": [ { "answer": "SAN Baseline, BNA, DocQA, R.M-Reader, R.M-Reader+Verifier and DocQA+ELMo", "type": "abstractive" }, { "answer": "BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo", "type": "abstractive" } ], "q_uid": "d3dbb5c22ef204d85707d2d24284cc77fa816b6c", "evidence": [ { "raw_evidence": [ "Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future.", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization. 
By incorporating the output information of classifier into Joint SAN, it obtains a slight improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases when our model extract an NULL string answer, the classifier also predicts it as an unanswerable question with a high probability.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). \u2217: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission." ], "highlighted_evidence": [ "Table TABREF21 reports comparison results in literature published .", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). \u2217: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). \u2217: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). \u2217: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published ." ] } ] } ], "1802.06024": [ { "question": "How much better than the baseline is LiLi?", "answers": [ { "answer": "In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n", "type": "abstractive" } ], "q_uid": "286078813136943dfafb5155ee15d2429e7601d9", "evidence": [ { "raw_evidence": [ "Baselines. 
As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.", "Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.", "Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.", "F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .", "BG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.", "w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e. with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi doesn\u2019t need to ask for it again and can perform inference on future triples causing significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi\u2019s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values). Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives to the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve) which degrades its performance. Due to the utilization of contextual similarity (highly correlated with class labels) of entity-pairs, LiLi\u2019s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). 
For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]." ], "highlighted_evidence": [ "Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with \u201c@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. ", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]." ] } ] } ], "1809.00530": [ { "question": "How many labels do the datasets have?", "answers": [ { "answer": "719313", "type": "abstractive" }, { "answer": "Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data.", "type": "abstractive" } ], "q_uid": "6aa2a1e2e3666f2b2a1f282d4cbdd1ca325eb9de", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Summary of datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary of datasets." ] }, { "raw_evidence": [ "Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training.", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets." 
], "highlighted_evidence": [ "Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 .", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets." ] } ] }, { "question": "What are the source and target domains?", "answers": [ { "answer": "Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen", "type": "abstractive" }, { "answer": "we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain, Book (BK), Electronics (E), Beauty (BT), and Music (M)", "type": "extractive" } ], "q_uid": "9176d2ba1c638cdec334971c4c7f1bb959495a8e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Summary of datasets.", "Most previous works BIBREF0 , BIBREF1 , BIBREF6 , BIBREF7 , BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. ( BIBREF0 ). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task to predict whether a review is positive or negative on the target domain. Each domain consists of 1000 positive and 1000 negative reviews respectively. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, of which the positive and negative reviews are balanced as well, following the settings in previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary of datasets.", "The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). " ] }, { "raw_evidence": [ "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. 
Detailed statistics of the generated datasets are given in Table TABREF9 .", "In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:" ], "highlighted_evidence": [ "It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .\n\nIn all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain." ] } ] } ], "1912.08960": [ { "question": "Which datasets are used?", "answers": [ { "answer": "Existential (OneShape, MultiShapes), Spacial (TwoShapes, Multishapes), Quantification (Count, Ratio) datasets are generated from ShapeWorldICE", "type": "abstractive" }, { "answer": "ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio", "type": "abstractive" } ], "q_uid": "b1bc9ae9d40e7065343c12f860a461c7c730a612", "evidence": [ { "raw_evidence": [ "We develop a variety of ShapeWorldICE datasets, with a similar idea to the \u201cskill tasks\u201d in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene." ], "highlighted_evidence": [ "We develop a variety of ShapeWorldICE datasets, with a similar idea to the \u201cskill tasks\u201d in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene." 
] }, { "raw_evidence": [ "Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the \u201cskill tasks\u201d in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene." ], "highlighted_evidence": [ "Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the \u201cskill tasks\u201d in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene." ] } ] } ], "2002.11910": [ { "question": "What are previous state of the art results?", "answers": [ { "answer": "Overall F1 score:\n- He and Sun (2017) 58.23\n- Peng and Dredze (2017) 58.99\n- Xu et al. (2018) 59.11", "type": "abstractive" }, { "answer": "For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. 
Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%", "type": "abstractive" } ], "q_uid": "9da1e124d28b488b0d94998d32aa2fa8a5ebec51", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models." ] }, { "raw_evidence": [ "Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score.", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models." ], "highlighted_evidence": [ "Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. ", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models." ] } ] } ], "1909.09587": [ { "question": "What is the model performance on target language reading comprehension?", "answers": [ { "answer": "Table TABREF6, Table TABREF8", "type": "extractive" }, { "answer": "when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, F1 score is only 44.1 for the model training on Zh-En", "type": "extractive" } ], "q_uid": "37be0d479480211291e068d0d3823ad0c13321d3", "evidence": [ { "raw_evidence": [ "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. 
Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ], "highlighted_evidence": [ "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. ", "Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ] }, { "raw_evidence": [ "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. 
It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ], "highlighted_evidence": [ "Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean.", "For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ] } ] }, { "question": "What source-target language pairs were used in this work? ", "answers": [ { "answer": "En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean", "type": "abstractive" }, { "answer": "English , Chinese", "type": "extractive" }, { "answer": "English, Chinese, Korean, we translated the English and Chinese datasets into more languages, with Google Translate", "type": "extractive" } ], "q_uid": "a3d9b101765048f4b61cbd3eaa2439582ebb5c77", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). 
The text in bold means training data language is the same as testing data language." ] }, { "raw_evidence": [ "In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ], "highlighted_evidence": [ "In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. ", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language." ] }, { "raw_evidence": [ "We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. An obvious issue with this method is that some examples might no longer have a recoverable span. To solve the problem, we use fuzzy matching to find the most possible answer, which calculates minimal edit distance between translated answer and all possible spans. 
If the minimal edit distance is larger than min(10, lengths of translated answer - 1), we drop the examples during training, and treat them as noise when testing. In this way, we can recover more than 95% of examples. The following generated datasets are recovered with same setting." ], "highlighted_evidence": [ "We have training and testing sets in three different languages: English, Chinese and Korean.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate." ] } ] } ], "1809.02286": [ { "question": "Which baselines did they compare against?", "answers": [ { "answer": "Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks", "type": "abstractive" }, { "answer": "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018).", "type": "abstractive" } ], "q_uid": "0ad4359e3e7e5e5f261c2668fe84c12bc762b3b8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. \u2020: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. 
Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. \u2020: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. \u2020: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. \u2020: Models which are pre-trained with large external corpora.", "Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. 
(Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)" ] } ] } ], "1809.01202": [ { "question": "What baselines did they consider?", "answers": [ { "answer": "state-of-the-art PDTB taggers", "type": "extractive" }, { "answer": "Linear SVM, RBF SVM, and Random Forest", "type": "abstractive" } ], "q_uid": "4cbe5a36b492b99f9f9fea8081fe4ba10a7a0e94", "evidence": [ { "raw_evidence": [ "We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ ." ], "highlighted_evidence": [ "We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message)." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)" ] } ] } ], "1906.01081": [ { "question": "By how much more does PARENT correlate with human judgements in comparison to other text generation metrics?", "answers": [ { "answer": "Best proposed metric has average correlation with human judgement of 0.913 and 0.846 compared to best compared metrics result of 0.758 and 0.829 on WikiBio and WebNLG challenge.", "type": "abstractive" }, { "answer": "Their average correlation tops the best other model by 0.155 on WikiBio.", "type": "abstractive" } ], "q_uid": "ffa7f91d6406da11ddf415ef094aaf28f3c3872d", "evidence": [ { "raw_evidence": [ "We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. 
A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for \u03b1 = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "The human ratings were collected on 3 distinct aspects \u2013 grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well." ], "highlighted_evidence": [ "We report the average correlation across all bootstrap samples for each metric in Table TABREF37 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for \u03b1 = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "We report the correlation of several metrics with these ratings in Table TABREF48 ." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for \u03b1 = 0.1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for \u03b1 = 0.1." ] } ] } ], "1812.10479": [ { "question": "Which stock market sector achieved the best performance?", "answers": [ { "answer": "Energy with accuracy of 0.538", "type": "abstractive" }, { "answer": "Energy", "type": "abstractive" } ], "q_uid": "b634ff1607ce5756655e61b9a6f18bc736f84c83", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 8: Sector-level performance comparison." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 8: Sector-level performance comparison." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (\u03c3\u0302PK) and Parkinson (\u03c3\u0302PK)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (\u03c3\u0302PK) and Parkinson (\u03c3\u0302PK)." 
] } ] } ], "1909.08089": [ { "question": "How much does their model outperform existing models?", "answers": [ { "answer": "Best proposed model result vs best previous result:\nArxiv dataset: Rouge 1 (43.62 vs 42.81), Rouge L (29.30 vs 31.80), Meteor (21.78 vs 21.35)\nPubmed dataset: Rouge 1 (44.85 vs 44.29), Rouge L (31.48 vs 35.21), Meteor (20.83 vs 20.56)", "type": "abstractive" }, { "answer": "On arXiv dataset, the proposed model outperforms baselie model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n", "type": "abstractive" } ], "q_uid": "de5b6c25e35b3a6c5e40e350fc5e52c160b33490", "evidence": [ { "raw_evidence": [ "The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively. Follow the work BIBREF18 , we use the approximate randomization as the statistical significance test method BIBREF32 with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 ( INLINEFORM0 ). As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a goal standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an \u2217, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an \u2217, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold." ], "highlighted_evidence": [ "The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively.", "As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an \u2217, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an \u2217, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. 
For models with an \u2217, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an \u2217, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an \u2217, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an \u2217, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold." ] } ] } ], "1609.00559": [ { "question": "What embedding techniques are explored in the paper?", "answers": [ { "answer": "Skip\u2013gram, CBOW", "type": "extractive" }, { "answer": "integrated vector-res, vector-faith, Skip\u2013gram, CBOW", "type": "extractive" } ], "q_uid": "8b3d3953454c88bde88181897a7a2c0c8dd87e23", "evidence": [ { "raw_evidence": [ "muneeb2015evalutating trained both the Skip\u2013gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip\u2013gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip\u2013gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible.", "FLOAT SELECTED: Table 4: Comparison with Previous Work" ], "highlighted_evidence": [ "chiu2016how evaluated both the the Skip\u2013gram and CBOW models over the PMC corpus and PubMed.", "FLOAT SELECTED: Table 4: Comparison with Previous Work" ] }, { "raw_evidence": [ "Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. 
N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second\u2013order co\u2013occurrence vectors obtains a higher or on\u2013par correlation with human judgments as the previous works reported results with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation although the results can not be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset.", "muneeb2015evalutating trained both the Skip\u2013gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip\u2013gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip\u2013gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible." ], "highlighted_evidence": [ "Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith.", "chiu2016how evaluated both the the Skip\u2013gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia)." ] } ] } ], "1904.10503": [ { "question": "Which other approaches do they compare their model with?", "answers": [ { "answer": "Akbik et al. (2018), Link et al. (2012)", "type": "abstractive" }, { "answer": "They compare to Akbik et al. (2018) and Link et al. (2012).", "type": "abstractive" } ], "q_uid": "5a65ad10ff954d0f27bb3ccd9027e3d8f7f6bb76", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Comparison with existing models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Comparison with existing models." ] }, { "raw_evidence": [ "In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. 
The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.", "FLOAT SELECTED: Table 3: Comparison with existing models." ], "highlighted_evidence": [ "We can see comparisons of our model made on Wiki(gold) in Table TABREF20 .", "FLOAT SELECTED: Table 3: Comparison with existing models." ] } ] } ], "1912.01772": [ { "question": "How is non-standard pronunciation identified?", "answers": [ { "answer": "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.", "type": "abstractive" } ], "q_uid": "f9bf6bef946012dd42835bf0c547c0de9c1d229f", "evidence": [ { "raw_evidence": [ "In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu\u2019] indicates that the previous 1 word was pronounced as \u2018pu\u2019\u2019 instead of \u2018pues\u2019)." ], "highlighted_evidence": [ "In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu\u2019] indicates that the previous 1 word was pronounced as \u2018pu\u2019\u2019 instead of \u2018pues\u2019)." ] } ] } ], "1909.04002": [ { "question": "What kind of celebrities do they obtain tweets from?", "answers": [ { "answer": "Amitabh Bachchan, Ariana Grande, Barack Obama, Bill Gates, Donald Trump,\nEllen DeGeneres, J K Rowling, Jimmy Fallon, Justin Bieber, Kevin Durant, Kim Kardashian, Lady Gaga, LeBron James,Narendra Modi, Oprah Winfrey", "type": "abstractive" }, { "answer": "Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. ", "type": "abstractive" } ], "q_uid": "4d28c99750095763c81bcd5544491a0ba51d9070", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. 
denotes followers in millions)" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)" ] } ] } ], "1712.00991": [ { "question": "What summarization algorithms did the authors experiment with?", "answers": [ { "answer": "LSA, TextRank, LexRank and ILP-based summary.", "type": "abstractive" }, { "answer": "LSA, TextRank, LexRank", "type": "abstractive" } ], "q_uid": "443d2448136364235389039cbead07e80922ec5c", "evidence": [ { "raw_evidence": [ "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms" ], "highlighted_evidence": [ "Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries.", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. 
The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. ", "Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. " ] } ] }, { "question": "What evaluation metrics are looked at for classification tasks?", "answers": [ { "answer": "Precision, Recall, F-measure, accuracy", "type": "extractive" }, { "answer": "Precision, Recall and F-measure", "type": "extractive" } ], "q_uid": "fb3d30d59ed49e87f63d3735b876d45c4c6b8939", "evidence": [ { "raw_evidence": [ "Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2." ], "highlighted_evidence": [ "Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . ", "The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. ", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2." ] }, { "raw_evidence": [ "Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. 
Precision and recall for this instance are computed as follows: INLINEFORM3" ], "highlighted_evidence": [ "Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 ." ] } ] }, { "question": "What methods were used for sentence classification?", "answers": [ { "answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK and Pattern-based", "type": "abstractive" }, { "answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach", "type": "abstractive" } ], "q_uid": "197b276d0610ebfacd57ab46b0b29f3033c96a40", "evidence": [ { "raw_evidence": [ "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1." ], "highlighted_evidence": [ "Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. 
Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. " ] } ] } ], "2003.04642": [ { "question": "What modern MRC gold standards are analyzed?", "answers": [ { "answer": "fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations", "type": "extractive" }, { "answer": "MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP.", "type": "abstractive" } ], "q_uid": "9ecde59ffab3c57ec54591c3c7826a9188b2b270", "evidence": [ { "raw_evidence": [ "We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered \u201csolved\u201d, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4 We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix ." ], "highlighted_evidence": [ "Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Summary of selected datasets" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary of selected datasets" ] } ] } ], "1904.07904": [ { "question": "What was the score of the proposed model?", "answers": [ { "answer": "Best results authors obtain is EM 51.10 and F1 63.11", "type": "abstractive" }, { "answer": "EM Score of 51.10", "type": "abstractive" } ], "q_uid": "38f58f13c7f23442d5952c8caf126073a477bac0", "evidence": [ { "raw_evidence": [ "To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . 
The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator." ], "highlighted_evidence": [ "To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 ." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD." ] } ] } ], "2003.11645": [ { "question": "What hyperparameters are explored?", "answers": [ { "answer": "Dimension size, window size, architecture, algorithm, epochs, hidden dimension size, learning rate, loss function, optimizer algorithm.", "type": "abstractive" }, { "answer": "Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs.", "type": "abstractive" } ], "q_uid": "27275fe9f6a9004639f9ac33c3a5767fea388a98", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters" ] }, { "raw_evidence": [ "To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased.", "FLOAT SELECTED: Table 1: Hyper-parameter choices" ], "highlighted_evidence": [ "Table TABREF2 describes most hyper-parameters explored for each dataset.", "FLOAT SELECTED: Table 1: Hyper-parameter choices" ] } ] }, { "question": "Do they test both skipgram and c-bow?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "c2d1387e08cf25cb6b1f482178cca58030e85b70", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices", "To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. 
Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Hyper-parameter choices", "Table TABREF2 describes most hyper-parameters explored for each dataset." ] } ] } ], "1608.06757": [ { "question": "what is the state of the art?", "answers": [ { "answer": "Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2", "type": "abstractive" } ], "q_uid": "c2b8ee872b99f698b3d2082d57f9408a91e1b4c1", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets." ] } ] } ], "1806.04330": [ { "question": "Do the authors also analyze transformer-based architectures?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "8bf7f1f93d0a2816234d36395ab40c481be9a0e0", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models." ] }, { "raw_evidence": [], "highlighted_evidence": [] } ] } ], "1904.03288": [ { "question": "what were the baselines?", "answers": [ { "answer": "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC", "type": "abstractive" } ], "q_uid": "2ddb51b03163d309434ee403fef42d6b9aecc458", "evidence": [ { "raw_evidence": [ "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. 
We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)" ], "highlighted_evidence": [ " We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)" ] } ] }, { "question": "what competitive results did they obtain?", "answers": [ { "answer": "In case of read speech datasets, their best model got the highest nov93 score of 16.1 and the highest nov92 score of 13.3.\nIn case of Conversational Speech, their best model got the highest SWB of 8.3 and the highest CHM of 19.3. ", "type": "abstractive" }, { "answer": "On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets.", "type": "abstractive" } ], "q_uid": "e587559f5ab6e42f7d981372ee34aebdc92b646e", "evidence": [ { "raw_evidence": [ "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ." ], "highlighted_evidence": [ "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). 
The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5\u201900, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ." ] } ] } ], "1909.13714": [ { "question": "By how much is performance improved with multimodality?", "answers": [ { "answer": "by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling", "type": "abstractive" }, { "answer": "F1 score increased from 0.89 to 0.92", "type": "abstractive" } ], "q_uid": "f68508adef6f4bcdc0cc0a3ce9afc9a2b6333cc5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models" ] }, { "raw_evidence": [ "For incorporating speech embeddings experiments, performance results of NLU models on in-cabin data with various feature concatenations can be found in Table TABREF3, using our previous hierarchical joint model (H-Joint-2). When used in isolation, Word2Vec and Speech2Vec achieves comparable performances, which cannot reach GloVe performance. This was expected as the pre-trained Speech2Vec vectors have lower vocabulary coverage than GloVe. Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "For multimodal (audio & video) features exploration, performance results of the compared models with varying modality/feature concatenations can be found in Table TABREF4. Since these audio/video features are extracted per utterance (on segmented audio & video clips), we experimented with the utterance-level intent recognition task only, using hierarchical joint learning (H-Joint-2). We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score. These initial results using feature concatenations may need further explorations, especially for certain intent-types such as stop (audio intensity) or relevant slots such as passenger gestures/gaze (from cabin video) and outside objects (from road video)." ], "highlighted_evidence": [ "Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. 
Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score." ] } ] } ], "1909.03405": [ { "question": "How much is performance improved on NLI?", "answers": [ { "answer": " improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase", "type": "extractive" }, { "answer": "The average score improved by 1.4 points over the previous best result.", "type": "abstractive" } ], "q_uid": "bdc91d1283a82226aeeb7a2f79dbbc57d3e84a1a", "evidence": [ { "raw_evidence": [ "Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The \u201dAverage\u201d column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs." ], "highlighted_evidence": [ "Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The \u201dAverage\u201d column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The \u201dAverage\u201d column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. 
The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The \u201dAverage\u201d column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs." ] } ] } ], "1907.03060": [ { "question": "what was the baseline?", "answers": [ { "answer": "pivot-based translation relying on a helping language BIBREF10, nduction of phrase tables from monolingual data BIBREF14 , attentional RNN-based model (RNMT) BIBREF2, Transformer model BIBREF18, bi-directional model BIBREF11, multi-to-multi (M2M) model BIBREF8, back-translation BIBREF17", "type": "extractive" }, { "answer": "M2M Transformer", "type": "abstractive" } ], "q_uid": "761de1610e934189850e8fda707dc5239dd58092", "evidence": [ { "raw_evidence": [ "We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .", "As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .", "After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ." ], "highlighted_evidence": [ "We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .\n\nAs for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .\n\nAfter identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ." ] }, { "raw_evidence": [ "In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step." 
], "highlighted_evidence": [ "Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . ", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step." ] } ] } ], "1911.10049": [ { "question": "How larger are the training sets of these versions of ELMo compared to the previous ones?", "answers": [ { "answer": "By 14 times.", "type": "abstractive" }, { "answer": "up to 1.95 times larger", "type": "abstractive" } ], "q_uid": "603fee7314fa65261812157ddfc2c544277fcf90", "evidence": [ { "raw_evidence": [ "Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository." ], "highlighted_evidence": [ "They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. ", "For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens." ] }, { "raw_evidence": [ "Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)." ], "highlighted_evidence": [ "The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)." 
] } ] }, { "question": "What is the improvement in performance for Estonian in the NER task?", "answers": [ { "answer": "5 percent points.", "type": "abstractive" }, { "answer": "0.05 F1", "type": "abstractive" } ], "q_uid": "09a1173e971e0fcdbf2fbecb1b077158ab08f497", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (\u2206(E \u2212 FT ))." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (\u2206(E \u2212 FT ))." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (\u2206(E \u2212 FT ))." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (\u2206(E \u2212 FT ))." ] } ] } ], "1812.06864": [ { "question": "what is the state of the art on WSJ?", "answers": [ { "answer": "CNN-DNN-BLSTM-HMM", "type": "abstractive" }, { "answer": "HMM-based system", "type": "extractive" } ], "q_uid": "70e9210fe64f8d71334e5107732d764332a81cb1", "evidence": [ { "raw_evidence": [ "Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ." ], "highlighted_evidence": [ "Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ." ] }, { "raw_evidence": [ "Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. 
The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system." ], "highlighted_evidence": [ "Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92." ] } ] } ], "1811.12254": [ { "question": "what is the size of the augmented dataset?", "answers": [ { "answer": "609", "type": "abstractive" } ], "q_uid": "57f23dfc264feb62f45d9a9e24c60bd73d7fe563", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "All datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Binary classification of each speech transcript as AD or HC is performed. We do 5-fold cross-validation, stratified by subject so that each subject's samples do not occur in both training and testing sets in each fold. The minority class is oversampled in the training set using SMOTE BIBREF14 to deal with the class imbalance. We consider a Random Forest (100 trees), Na\u00efve Bayes (with equal priors), SVM (with RBF kernel), and a 2-layer neural network (10 units, Adam optimizer, 500 epochs) BIBREF15 . Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. Adding the same number of samples from bins of age greater than 60 leads to greater increase in performance. This could be because the average age of participants in the datasets (DB, HA etc.) we use are greater than 60. Note that despite such a trend, addition of healthy data produces fair classifiers with respect to samples with age INLINEFORM0 60 and those with age INLINEFORM1 60 (balanced F1 scores of 75.6% and 76.1% respectively; further details in App. SECREF43 .)", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Speech datasets used. 
Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "\nAll datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. ", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers." ] } ] } ], "1908.05828": [ { "question": "How many sentences does the dataset contain?", "answers": [ { "answer": "3606", "type": "abstractive" }, { "answer": "6946", "type": "extractive" } ], "q_uid": "d51dc36fbf6518226b8e45d4c817e07e8f642003", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics" ] }, { "raw_evidence": [ "In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset." ], "highlighted_evidence": [ "In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset." ] } ] }, { "question": "What is the baseline?", "answers": [ { "answer": "CNN modelBIBREF0, Stanford CRF modelBIBREF21", "type": "extractive" }, { "answer": "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec", "type": "abstractive" } ], "q_uid": "cb77d6a74065cb05318faf57e7ceca05e126a80d", "evidence": [ { "raw_evidence": [ "Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone." ], "highlighted_evidence": [ "First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21." 
] }, { "raw_evidence": [ "FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score" ] } ] }, { "question": "What is the size of the dataset?", "answers": [ { "answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive" }, { "answer": "ILPRL contains 548 sentences, OurNepali contains 3606 sentences", "type": "abstractive" } ], "q_uid": "a1b3e2107302c5a993baafbe177684ae88d6f505", "evidence": [ { "raw_evidence": [ "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics" ], "highlighted_evidence": [ "The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset", "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25.", "Dataset Statistics ::: ILPRL dataset", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset\nSince, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset.", "Dataset Statistics ::: ILPRL dataset\nAfter much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. ", " The statistics of both the dataset is presented in table TABREF23." ] } ] }, { "question": "How many different types of entities exist in the dataset?", "answers": [ { "answer": "OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities", "type": "abstractive" }, { "answer": "three", "type": "extractive" } ], "q_uid": "1462eb312944926469e7cee067dfc7f1267a2a8c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments." ] }, { "raw_evidence": [ "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25." ], "highlighted_evidence": [ "This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG)." ] } ] }, { "question": "How big is the new Nepali NER dataset?", "answers": [ { "answer": "3606 sentences", "type": "abstractive" }, { "answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive" } ], "q_uid": "f59f1f5b528a2eec5cfb1e49c87699e0c536cc45", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Dataset statistics", "The statistics of both the dataset is presented in table TABREF23.\n\n" ] }, { "raw_evidence": [ "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics" ], "highlighted_evidence": [ "The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics" ] } ] }, { "question": "What is the performance improvement of the grapheme-level representation model over the character-level model?", "answers": [ { "answer": "On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement", "type": "abstractive" }, { "answer": "BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration", "type": "extractive" } ], "q_uid": "9bd080bb2a089410fd7ace82e91711136116af6c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Comparison of different variation of our models" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Comparison of different variation of our models" ] }, { "raw_evidence": [ "We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. 
Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively." ], "highlighted_evidence": [ "We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration." ] } ] } ], "2002.02070": [ { "question": "What is the performance of classifiers?", "answers": [ { "answer": "Table TABREF10, The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set, While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject", "type": "extractive" }, { "answer": "Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778.", "type": "abstractive" } ], "q_uid": "d53299fac8c94bd0179968eb868506124af407d1", "evidence": [ { "raw_evidence": [ "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers." ], "highlighted_evidence": [ "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers." ] } ] }, { "question": "What classifiers have been trained?", "answers": [ { "answer": "KNN\nRF\nSVM\nMLP", "type": "abstractive" }, { "answer": " K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Multi-layer Perceptron (MLP)", "type": "extractive" } ], "q_uid": "29f2954098f055fb19d9502572f085862d75bf61", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers." ] }, { "raw_evidence": [ "We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13." ], "highlighted_evidence": [ " The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13." 
] } ] } ], "1908.10084": [ { "question": "What other sentence embeddings methods are evaluated?", "answers": [ { "answer": "GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent", "type": "abstractive" }, { "answer": "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder.", "type": "abstractive" } ], "q_uid": "e2db361ae9ad9dbaa9a85736c5593eb3a471983d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Spearman rank correlation \u03c1 between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as \u03c1 \u00d7 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman\u2019s rank correlation \u03c1 on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Spearman rank correlation \u03c1 between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as \u03c1 \u00d7 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman\u2019s rank correlation \u03c1 on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic." ] }, { "raw_evidence": [ "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation." ], "highlighted_evidence": [ "We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation." 
] } ] } ], "1806.04511": [ { "question": "which non-english language had the best performance?", "answers": [ { "answer": "Russian", "type": "extractive" }, { "answer": "Russian", "type": "abstractive" } ], "q_uid": "e79a5b6b6680bd2f63e9f4adbaae1d7795d81e38", "evidence": [ { "raw_evidence": [ "Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements." ], "highlighted_evidence": [ "Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline." ] } ] } ], "1910.06592": [ { "question": "How big is the dataset used in this work?", "answers": [ { "answer": "Total dataset size: 171 accounts (522967 tweets)", "type": "abstractive" }, { "answer": "212 accounts", "type": "abstractive" } ], "q_uid": "3e1829e96c968cbd8ad8e9ce850e3a92a76b26e4", "evidence": [ { "raw_evidence": [ "Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset.", "FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)."
], "highlighted_evidence": [ "Table TABREF13 presents statistics on our dataset.", "FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)." ] }, { "raw_evidence": [ "Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset." ], "highlighted_evidence": [ " For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1.", "On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties." ] } ] } ], "1902.09666": [ { "question": "What is the size of the new dataset?", "answers": [ { "answer": "14,100 tweets", "type": "abstractive" }, { "answer": "Dataset contains total of 14100 annotations.", "type": "abstractive" } ], "q_uid": "74fb77a624ea9f1821f58935a52cca3086bb0981", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.", "The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. 
In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.", "The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ." ] } ] }, { "question": "How long is the dataset for each step of hierarchy?", "answers": [ { "answer": "Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets", "type": "abstractive" } ], "q_uid": "1b72aa2ec3ce02131e60626639f0cf2056ec23ca", "evidence": [ { "raw_evidence": [ "The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. 
Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .", "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID." ], "highlighted_evidence": [ " The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .", "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID." ] } ] } ], "1604.00400": [ { "question": "What different correlations result when using different variants of ROUGE scores?", "answers": [ { "answer": "we observe that many variants of Rouge scores do not have high correlations with human pyramid scores", "type": "extractive" }, { "answer": "Using Pearson corelation measure, for example, ROUGE-1-P is 0.257 and ROUGE-3-F 0.878.", "type": "abstractive" } ], "q_uid": "bf52c01bf82612d0c7bbf2e6a5bb2570c322936f", "evidence": [ { "raw_evidence": [ "Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores. Both Rouge and Sera are calculated with stopwords removed and with stemming. Our experiments with inclusion of stopwords and without stemming showed similar results and thus, we do not include those to avoid redundancy.", "Another important observation is regarding the effectiveness of Rouge scores (top part of Table TABREF23 ). Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary. On higher order n-grams, however, we can see that Rouge correlates better with pyramid. In fact, the highest overall INLINEFORM1 is obtained by Rouge-3. 
Rouge-L and its weighted version Rouge-W, both have weak correlations with pyramid. Skip-bigrams (Rouge-S) and its combination with unigrams (Rouge-SU) also show sub-optimal correlations. Note that INLINEFORM2 and INLINEFORM3 correlations are more reliable in our setup due to the small sample size." ], "highlighted_evidence": [ "Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores.", "Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary." ] }, { "raw_evidence": [ "We provided an analysis of existing evaluation metrics for scientific summarization with evaluation of all variants of Rouge. We showed that Rouge may not be the best metric for summarization evaluation; especially in summaries with high terminology variations and paraphrasing (e.g. scientific summaries). Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. We furthermore proposed an alternative and more effective approach for scientific summarization evaluation (Summarization Evaluation by Relevance Analysis - Sera). Results revealed that in general, the proposed evaluation metric achieves higher correlations with semi-manual pyramid evaluation scores in comparison with Rouge.", "FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point." ], "highlighted_evidence": [ "Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. ", "FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point." 
] } ] } ], "1810.12196": [ { "question": "What tasks were evaluated?", "answers": [ { "answer": "ReviewQA's test set", "type": "extractive" }, { "answer": "Detection of an aspect in a review, Prediction of the customer general satisfaction, Prediction of the global trend of an aspect in a given review, Prediction of whether the rating of a given aspect is above or under a given value, Prediction of the exact rating of an aspect in a review, Prediction of the list of all the positive/negative aspects mentioned in the review, Comparison between aspects, Prediction of the strengths and weaknesses in a review", "type": "abstractive" } ], "q_uid": "52f8a3e3cd5d42126b5307adc740b71510a6bdf5", "evidence": [ { "raw_evidence": [ "Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models. According to our results, the simple LSTM network and the MemN2N perform very poorly on this dataset. Especially on the most advanced reasoning tasks. Indeed, the task 5 which corresponds to the prediction of the exact rating of an aspect seems to be very challenging for these model. Maybe the tokenization by sentence to create the memory blocks of the MemN2N, which is appropriated in the case of the bAbI tasks, is not a good representation of the documents when it has to handle human generated comments. However, the logistic regression achieves reasonable performance on these tasks, and do not suffer from catastrophic performance on any tasks. Its worst result comes on task 6 and one of the reason is probably that this architecture is not designed to predict a list of answers. On the contrary, the deep projective reader achieves encouraging on this dataset. It outperforms all the other baselines, with very good scores on the first fourth tasks. The question/document and document/document attention layers proposed in BIBREF12 seem once again to produce rich encodings of the inputs which are relevant for our projection layer." ], "highlighted_evidence": [ "Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.", "We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). It can be an additional tool to analyze the errors of the readers." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.", "Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task." ] } ] } ], "1707.05236": [ { "question": "What are their results on both datasets?", "answers": [ { "answer": "Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. 
It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. ", "type": "abstractive" } ], "q_uid": "ab9b0bde6113ffef8eb1c39919d21e5913a05081", "evidence": [ { "raw_evidence": [ "The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision \u2013 this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset.", "FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.", "The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection." ], "highlighted_evidence": [ "The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 .", "FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.", "The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection." ] } ] } ], "1908.11047": [ { "question": "Does this method help in sentiment classification task improvement?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "f2155dc4aeab86bf31a838c8ff388c85440fce6e", "evidence": [ { "raw_evidence": [ "Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. 
Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs.", "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks." ], "highlighted_evidence": [ "Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks.", "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks." ] }, { "raw_evidence": [ "Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs." ], "highlighted_evidence": [ "Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs." ] } ] }, { "question": "For how many probe tasks the shallow-syntax-aware contextual embedding perform better than ELMo\u2019s embedding?", "answers": [ { "answer": "performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks", "type": "extractive" }, { "answer": "3", "type": "abstractive" } ], "q_uid": "ed6a15f0f7fa4594e51d5bde21cc0c6c1bedbfdc", "evidence": [ { "raw_evidence": [ "Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer." ], "highlighted_evidence": [ "Results in Table TABREF13 show ten probes. 
Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in \u00a74.2.1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in \u00a74.2.1." ] } ] }, { "question": "What are the black-box probes used?", "answers": [ { "answer": "CCG Supertagging CCGBank , PTB part-of-speech tagging, EWT part-of-speech tagging,\nChunking, Named Entity Recognition, Semantic Tagging, Grammar Error Detection, Preposition Supersense Role, Preposition Supersense Function, Event Factuality Detection", "type": "abstractive" }, { "answer": "Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases.", "type": "extractive" } ], "q_uid": "4d706ce5bde82caf40241f5b78338ea5ee5eb01e", "evidence": [ { "raw_evidence": [ "Recent work has probed the knowledge encoded in cwrs and found they capture a surprisingly large amount of syntax BIBREF10, BIBREF1, BIBREF11. We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1. Our analysis indicates that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo embeddings (\u00a7SECREF18).", "FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3." ], "highlighted_evidence": [ " We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1", "FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3." ] }, { "raw_evidence": [ "We further analyze whether awareness of shallow syntax carries over to other linguistic tasks, via probes from BIBREF1. Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases. Unlike \u00a7SECREF11, there is minimal downstream task architecture, bringing into focus the transferability of cwrs, as opposed to task-specific adaptation." ], "highlighted_evidence": [ "Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases." ] } ] }, { "question": "What are improvements for these two approaches relative to ELMo-only baselines?", "answers": [ { "answer": "only modest gains on three of the four downstream tasks", "type": "extractive" }, { "answer": " the performance differences across all tasks are small enough ", "type": "extractive" } ], "q_uid": "86bf75245358f17e35fc133e46a92439ac86d472", "evidence": [ { "raw_evidence": [ "Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. 
As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer." ], "highlighted_evidence": [ "Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.", "Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.", "Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs" ] } ] } ], "1612.08205": [ { "question": "What are the industry classes defined in this paper?", "answers": [ { "answer": "technology, religion, fashion, publishing, sports or recreation, real estate, agriculture/environment, law, security/military, tourism, construction, museums or libraries, banking/investment banking, automotive", "type": "abstractive" }, { "answer": "Technology, Religion, Fashion, Publishing, Sports coach, Real Estate, Law, Environment, Tourism, Construction, Museums, Banking, Security, Automotive.", "type": "abstractive" } ], "q_uid": "cd2878c5a52542ddf080b20bec005d9a74f2d916", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Industry categories and number of users per category." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Industry categories and number of users per category." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 7: Three top-ranked words for each industry." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 7: Three top-ranked words for each industry." ] } ] } ], "1907.09369": [ { "question": "Do they report results only on English data?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "fd2c6c26fd0ab3c10aae4f2550c5391576a77491", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results of final classification in Wang et al." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of final classification in Wang et al." ] } ] } ], "1911.07555": [ { "question": "Does the paper report the performance of a baseline model on South African languages LID?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "307e8ab37b67202fe22aedd9a98d9d06aaa169c5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ] }, { "raw_evidence": [ "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.", "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ], "highlighted_evidence": [ "The average classification accuracy results are summarised in Table TABREF9.", "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ] } ] }, { "question": "Does the algorithm improve on the state-of-the-art methods?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "From all reported results proposed method (NB+Lex) shows best accuracy on all 3 datasets - some models are not evaluated and not available in literature.", "type": "abstractive" } ], "q_uid": "e5c8e9e54e77960c8c26e8e238168a603fcdfcc6", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019.", "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with \u2019\u2014\u2019.", "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label." ] } ] } ], "1804.11346": [ { "question": "Is the dataset balanced between speakers of different L1s?", "answers": [ { "answer": "No", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "2ceced87af4c8fdebf2dc959aa700a5c95bd518f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Distribution by L1s and source corpora." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Distribution by L1s and source corpora." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Distribution by L1s and source corpora." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Distribution by L1s and source corpora." ] } ] } ], "1909.00175": [ { "question": "What state-of-the-art results are achieved?", "answers": [ { "answer": "F1 score of 92.19 on homographic pun detection, 80.19 on homographic pun location, 89.76 on heterographic pun detection.", "type": "abstractive" }, { "answer": "for the homographic dataset F1 score of 92.19 and 80.19 on detection and location and for the heterographic dataset F1 score of 89.76 on detection", "type": "abstractive" } ], "q_uid": "badc9db40adbbf2ea7bac29f2e4e3b6b9175b1f9", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ] } ] }, { "question": "What baselines do they compare with?", "answers": [ { "answer": "They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF.", "type": "abstractive" } ], "q_uid": "67b66fe67a3cb2ce043070513664203e564bdcbd", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)" ] } ] } ], "1910.06036": [ { "question": "How big are significant improvements?", "answers": [ { "answer": "Metrics show better results on all metrics compared to baseline except Bleu1 on Zhou split (worse by 0.11 compared to baseline). Bleu1 score on DuSplit is 45.66 compared to best baseline 43.47, other metrics on average by 1", "type": "abstractive" } ], "q_uid": "92294820ac0d9421f086139e816354970f066d8a", "evidence": [ { "raw_evidence": [ "Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers). 
Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits. Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.", "FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. \u2018-\u2019 means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)" ], "highlighted_evidence": [ "Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers).", "FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. \u2018-\u2019 means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)" ] } ] } ], "2002.01984": [ { "question": "What was their highest MRR score?", "answers": [ { "answer": "0.5115", "type": "abstractive" }, { "answer": "0.6103", "type": "extractive" } ], "q_uid": "9ec1f88ceec84a10dc070ba70e90a792fba8ce71", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5." ] }, { "raw_evidence": [ "Sharma et al. BIBREF3 describe a system with two stage process for factoid and list type question answering. Their system extracts relevant entities and then runs supervised classifier to rank the entities. Wiese et al. BIBREF4 propose neural network based model for Factoid and List-type question answering task. The model is based on Fast QA and predicts the answer span in the passage for a given question. The model is trained on SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed two stage process for Factoid question answering task. Their system uses general purpose tools such as Metamap, BeCas to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by the binary classifier. Classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from BioASQ challenge. For factoid question answering task highest \u2018MRR\u2019 achieved in the 6th edition of BioASQ competition is \u20180.4325\u2019. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a \u2018MRR\u2019 score \u20180.6103\u2019 in one of the test batches for Factoid Question Answering task." ], "highlighted_evidence": [ "Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a \u2018MRR\u2019 score \u20180.6103\u2019 in one of the test batches for Factoid Question Answering task." 
] } ] } ], "1809.03449": [ { "question": "Do the authors hypothesize that humans' robustness to noise is due to their general knowledge?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "52f9cd05d8312ae3c7a43689804bac63f7cac34b", "evidence": [ { "raw_evidence": [ "To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR.", "FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for \u03c7." ], "highlighted_evidence": [ "To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down.", "FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for \u03c7." ] }, { "raw_evidence": [ "OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings." ], "highlighted_evidence": [ "On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. 
The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge." ] } ] } ], "1903.09722": [ { "question": "What is the previous state-of-the-art in summarization?", "answers": [ { "answer": "BIBREF26 ", "type": "extractive" }, { "answer": "BIBREF26", "type": "extractive" } ], "q_uid": "ab0fd94dfc291cf3e54e9b7a7f78b852ddc1a797", "evidence": [ { "raw_evidence": [ "Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art." ], "highlighted_evidence": [ "Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art." ] }, { "raw_evidence": [ "Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art." ], "highlighted_evidence": [ "Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art." 
] } ] } ], "1806.11432": [ { "question": "Does the method achieve sota performance on this dataset?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "701571680724c05ca70c11bc267fb1160ea1460a", "evidence": [ { "raw_evidence": [ "That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing \u2013 our control variable \u2013 but the relationship may not have been strong enough.", "However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.", "FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter" ], "highlighted_evidence": [ "That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. ", "However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong.", "FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter" ] } ] }, { "question": "What are the baselines used in the paper?", "answers": [ { "answer": "GloVe vectors trained on Wikipedia Corpus with ensembling, and GloVe vectors trained on Airbnb Data without ensembling", "type": "abstractive" } ], "q_uid": "600b097475b30480407ce1de81c28c54a0b3b2f8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results of RNN/LSTM" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of RNN/LSTM" ] } ] } ], "1910.14537": [ { "question": "How better is performance compared to previous state-of-the-art models?", "answers": [ { "answer": "F1 score of 97.5 on MSR and 95.7 on AS", "type": "abstractive" }, { "answer": "MSR: 97.7 compared to 97.5 of baseline\nAS: 95.7 compared to 95.6 of baseline", "type": "abstractive" } ], "q_uid": "5fda8539a97828e188ba26aad5cda1b9dd642bc8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. 
The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ] }, { "raw_evidence": [ "With unsupervised segmentation features introduced by BIBREF20, our model gets a higher result. Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the rule of closed test of SIGHAN Bakeoff.", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ], "highlighted_evidence": [ " Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. ", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ] } ] }, { "question": "What are strong baselines model is compared to?", "answers": [ { "answer": "Baseline models are:\n- Chen et al., 2015a\n- Chen et al., 2015b\n- Liu et al., 2016\n- Cai and Zhao, 2016\n- Cai et al., 2017\n- Zhou et al., 2017\n- Ma et al., 2018\n- Wang et al., 2019", "type": "abstractive" } ], "q_uid": "fabcd71644bb63559d34b38d78f6ef87c256d475", "evidence": [ { "raw_evidence": [ "Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting. Without the assistance of unsupervised segmentation features userd in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and get comparable performance in PKU and CITYU. Note that all the other models for this comparison adopt various $n$-gram features while only our model takes unigram ones.", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ], "highlighted_evidence": [ "Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting.", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)." ] } ] } ], "1702.03342": [ { "question": "which neural embedding model works better?", "answers": [ { "answer": "the CRX model", "type": "abstractive" }, { "answer": "3C model", "type": "extractive" } ], "q_uid": "2a6003a74d051d0ebbe62e8883533a5f5e55078b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5 Accuracy of concept categorization" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5 Accuracy of concept categorization" ] }, { "raw_evidence": [ "Table 3 presents the results of fine-grained dataless classification measured in micro-averaged F1. 
As we can notice, ESA achieves its peak performance with a few hundred dimensions of the sparse BOC vector. Using our densification mechanism, both the CRC & 3C models achieve equal performance to ESA at much less dimensions. Densification using the CRC model embeddings gives the best F1 scores on the three tasks. Interestingly, the CRC model improves the F1 score by INLINEFORM0 7% using only 14 concepts on Autos vs. Motorcycles, and by INLINEFORM1 3% using 70 concepts on Guns vs. Mideast vs. Misc. The 3C model, still performs better than ESA on 2 out of the 3 tasks. Both WE INLINEFORM2 and WE INLINEFORM3 improve the performance over ESA but not as our CRC model." ], "highlighted_evidence": [ "The 3C model, still performs better than ESA on 2 out of the 3 tasks." ] } ] }, { "question": "What is the degree of dimension reduction of the efficient aggregation method?", "answers": [ { "answer": "The number of dimensions can be reduced by up to 212 times.", "type": "abstractive" } ], "q_uid": "1b1b0c71f1a4b37c6562d444f75c92eb2c727d9b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved" ] } ] } ], "1805.03710": [ { "question": "For which languages do they build word embeddings for?", "answers": [ { "answer": "English", "type": "abstractive" } ], "q_uid": "9c44df7503720709eac933a15569e5761b378046", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {\u3008hell, o, o, o\u3009}, {\u3008marvel, i, cious\u3009}, {\u3008louis, ana\u3009}, {\u3008re, re, read\u3009}, {\u3008 tu, z, read\u3009}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {\u3008hell, o, o, o\u3009}, {\u3008marvel, i, cious\u3009}, {\u3008louis, ana\u3009}, {\u3008re, re, read\u3009}, {\u3008 tu, z, read\u3009}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list." ] } ] } ], "1909.03135": [ { "question": "How big was the corpora they trained ELMo on?", "answers": [ { "answer": "2174000000, 989000000", "type": "abstractive" }, { "answer": "2174 million tokens for English and 989 million tokens for Russian", "type": "abstractive" } ], "q_uid": "d509081673f5667060400eb325a8050fa5db7cc8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Training corpora", "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). 
The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Training corpora", "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). " ] }, { "raw_evidence": [ "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.", "FLOAT SELECTED: Table 1: Training corpora" ], "highlighted_evidence": [ "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). ", "As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.", "FLOAT SELECTED: Table 1: Training corpora" ] } ] } ], "1804.07789": [ { "question": "What dataset is used?", "answers": [ { "answer": "English WIKIBIO, French WIKIBIO , German WIKIBIO ", "type": "abstractive" }, { "answer": "WikiBio dataset, introduce two new biography datasets, one in French and one in German", "type": "extractive" } ], "q_uid": "6cd25c637c6b772ce29e8ee81571e8694549c5ab", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset", "FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset", "FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset", "FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset", "FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset" ] }, { "raw_evidence": [ "We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.", "We also introduce two new biography datasets, one in French and one in German. 
These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us has been made publicly available. The number of examples was 170K and 50K and the vocabulary size was 297K and 143K for French and German respectively. Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages." ], "highlighted_evidence": [ "We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia.", "We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article." ] } ] } ], "1810.12085": [ { "question": "what topics did they label?", "answers": [ { "answer": "Demographics Age, DiagnosisHistory, MedicationHistory, ProcedureHistory, Symptoms/Signs, Vitals/Labs, Procedures/Results, Meds/Treatments, Movement, Other.", "type": "abstractive" }, { "answer": "Demographics, Diagnosis History, Medication History, Procedure History, Symptoms, Labs, Procedures, Treatments, Hospital movements, and others", "type": "abstractive" } ], "q_uid": "ceb767e33fde4b927e730f893db5ece947ffb0d8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions" ] }, { "raw_evidence": [ "We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others. A random sample of 515 history of present illness notes was taken, and each of the notes was manually annotated by one of eight annotators using the software Multi-document Annotation Environment (MAE) BIBREF20 . MAE provides an interactive GUI for annotators and exports the results of each annotation as an XML file with text spans and their associated labels for additional processing. 40% of the HPI notes were labeled by clinicians and 60% by non-clinicians. Table TABREF5 shows the instructions given to the annotators for each of the 10 labels. The entire HPI note was labeled with one of the labels, and instructions were given to label each clause in a sentence with the same label when possible." ], "highlighted_evidence": [ "We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others", "Table TABREF5 shows the instructions given to the annotators for each of the 10 labels." ] } ] }, { "question": "did they compare with other extractive summarization methods?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "c2cb6c4500d9e02fc9a1bdffd22c3df69655189f", "evidence": [ { "raw_evidence": [ "We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set. 
The model is trained using the Adam algorithm for gradient-based optimization BIBREF25 with an initial learning rate = 0.001 and decay = 0.9. A dropout rate of 0.5 was applied for regularization, and each batch size = 20. The model ran for 20 epochs and was halted early if there was no improvement after 3 epochs.", "We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.", "Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings. The first row in each section is the performance of the model architecture described in the methods section for comparison. Models using word embeddings trained on the discharge summaries performed better than word embeddings trained on all MIMIC notes, likely because the discharge summary word embeddings better captured word use in discharge summaries alone. Interestingly, the continuous bag of words embeddings outperformed skip gram embeddings, which is surprising because the skip gram architecture typically works better for infrequent words BIBREF26 . As expected, inclusion of character embeddings increases performance by approximately 3%. The model with word embeddings learned in the model achieves the highest performance on the dev set (0.886), which may be because the pretrained worm embeddings were trained on a previous version of MIMIC. As a result, some words in the discharge summaries, such as mi-spelled words or rarer diseases and medications, did not have associated word embeddings. Performing a simple spell correction on out of vocab words may improve performance with pretrained word embeddings.", "FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient\u2019s sex was calculated by examining the structured data for the patient\u2019s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission." ], "highlighted_evidence": [ "We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set.", "We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.", "Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings.", "FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient\u2019s sex was calculated by examining the structured data for the patient\u2019s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission." 
] } ] } ], "1610.07809": [ { "question": "what levels of document preprocessing are looked at?", "answers": [ { "answer": "raw text, text cleaning through document logical structure detection, removal of keyphrase sparse sections of the document", "type": "extractive" }, { "answer": "Level 1, Level 2 and Level 3.", "type": "abstractive" } ], "q_uid": "06eb9f2320451df83e27362c22eb02f4a426a018", "evidence": [ { "raw_evidence": [ "While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are:" ], "highlighted_evidence": [ "Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text." ] }, { "raw_evidence": [ "In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.", "Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.", "FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set." ], "highlighted_evidence": [ "In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.", "Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.", "FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set." 
] } ] } ], "2003.03044": [ { "question": "How many different phenotypes are present in the dataset?", "answers": [ { "answer": "15 clinical patient phenotypes", "type": "extractive" }, { "answer": "Thirteen different phenotypes are present in the dataset.", "type": "abstractive" } ], "q_uid": "46c9e5f335b2927db995a55a18b7c7621fd3d051", "evidence": [ { "raw_evidence": [ "We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12" ], "highlighted_evidence": [ "We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype." ] } ] }, { "question": "What are 10 other phenotypes that are annotated?", "answers": [ { "answer": "Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse", "type": "abstractive" } ], "q_uid": "ce0e2a8675055a5468c4c54dbb099cfd743df8a7", "evidence": [ { "raw_evidence": [ "Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths.", "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype." ], "highlighted_evidence": [ "Table defines each of the considered clinical patient phenotypes.", "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype." ] } ] } ], "1909.00015": [ { "question": "HOw does the method perform compared with baselines?", "answers": [ { "answer": "On the datasets DE-EN, JA-EN, RO-EN, and EN-DE, the baseline achieves 29.79, 21.57, 32.70, and 26.02 BLEU score, respectively. The 1.5-entmax achieves 29.83, 22.13, 33.10, and 25.89 BLEU score, which is a difference of +0.04, +0.56, +0.40, and -0.13 BLEU score versus the baseline. 
The \u03b1-entmax achieves 29.90, 21.74, 32.89, and 26.93 BLEU score, which is a difference of +0.11, +0.17, +0.19, +0.91 BLEU score versus the baseline.", "type": "abstractive" } ], "q_uid": "f8c1b17d265a61502347c9a937269b38fc3fcab1", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.", "We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.", "We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis." ] } ] } ], "1705.01214": [ { "question": "What evaluation metrics did look at?", "answers": [ { "answer": "precision, recall, F1 and accuracy", "type": "abstractive" }, { "answer": "Response time, resource consumption (memory, CPU, network bandwidth), precision, recall, F1, accuracy.", "type": "abstractive" } ], "q_uid": "cc608df2884e1e82679f663ed9d9d67a4b6c03f3", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set" ] }, { "raw_evidence": [ "In this section, we describe the validation framework that we created for integration tests. For this, we developed it as a new component of SABIA's system architecture and it provides a high level language which is able to specify interaction scenarios that simulate users interacting with the deployed chatbots. The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). Our goal was to: (i) provide a tool for integration tests, (ii) to validate CognIA's implementation, and (iii) to support the system developers in understanding the behavior of the system and which aspects can be improved. Thus, whenever developers modify the system's source code, the modifications must first pass the automatic test before actual deployment.", "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set" ], "highlighted_evidence": [ "The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). 
", "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set" ] } ] } ], "1908.07195": [ { "question": "How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?", "answers": [ { "answer": "ARAM has achieved improvement over all baseline methods using reverese perplexity and slef-BLEU metric. The maximum reverse perplexity improvement 936,16 is gained for EMNLP2017 WMT dataset and 48,44 for COCO dataset.", "type": "abstractive" }, { "answer": "Compared to the baselines, ARAML does not do better in terms of perplexity on COCO and EMNLP 2017 WMT datasets, but it does by up to 0.27 Self-BLEU points on COCO and 0.35 Self-BLEU on EMNLP 2017 WMT. In terms of Grammaticality and Relevance, it scores better than the baselines on up to 75.5% and 73% of the cases respectively.", "type": "abstractive" } ], "q_uid": "79f9468e011670993fd162543d1a4b3dd811ac5d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.", "FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. \u03ba denotes Fleiss\u2019 kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.", "FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. \u03ba denotes Fleiss\u2019 kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test." ] } ] } ], "1703.07090": [ { "question": "what was their character error rate?", "answers": [ { "answer": "2.49% for layer-wise training, 2.63% for distillation, 6.26% for transfer learning.", "type": "abstractive" }, { "answer": "Their best model achieved a 2.49% Character Error Rate.", "type": "abstractive" } ], "q_uid": "1bb7eb5c3d029d95d1abf9f2892c1ec7b6eef306", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model.", "FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. 
The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model.", "FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model." ] } ] }, { "question": "which lstm models did they compare with?", "answers": [ { "answer": "Unidirectional LSTM networks with 2, 6, 7, 8, and 9 layers.", "type": "abstractive" } ], "q_uid": "c0af8b7bf52dc15e0b33704822c4a34077e09cd1", "evidence": [ { "raw_evidence": [ "There is a high real time requirement in real world application, especially in online voice search system. Shenma voice search is one of the most popular mobile search engines in China, and it is a streaming service that intermediate recognition results displayed while users are still speaking. Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.", "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model." ], "highlighted_evidence": [ "Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.", "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others\u2019 teacher is CE model." ] } ] } ], "1707.03569": [ { "question": "What was the baseline?", "answers": [ { "answer": "SVMs, LR, BIBREF2", "type": "extractive" }, { "answer": "SVM INLINEFORM0, SVM INLINEFORM1, LR INLINEFORM2, MaxEnt", "type": "extractive" } ], "q_uid": "37edc25e39515ffc2d92115d2fcd9e6ceb18898b", "evidence": [ { "raw_evidence": [ "Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry \u201cBalikas et al.\u201d stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. 
Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.", "The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.", "For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks.", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1." ], "highlighted_evidence": [ "Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry \u201cBalikas et al.\u201d stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art.", "o evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. 
Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.", "For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. ", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1." ] }, { "raw_evidence": [ "The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion." ], "highlighted_evidence": [ " To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion." ] } ] }, { "question": "By how much did they improve?", "answers": [ { "answer": "They decrease MAE in 0.34", "type": "abstractive" } ], "q_uid": "e431661f17347607c3d3d9764928385a8f3d9650", "evidence": [ { "raw_evidence": [ "Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . 
They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively.", "The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.", "Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry \u201cBalikas et al.\u201d stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1.", "Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1" ], "highlighted_evidence": [ "In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM)", "The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16", "Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation.", "Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry \u201cBalikas et al.\u201d stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. ", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. 
The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1.", "To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1" ] } ] } ], "1912.10011": [ { "question": "What is quantitative improvement of proposed method (the best variant) w.r.t. baseline (the best variant)?", "answers": [ { "answer": "Hierarchical-k", "type": "extractive" } ], "q_uid": "664db503509b8236bc4d3dc39cebb74498365750", "evidence": [ { "raw_evidence": [ "To evaluate the impact of our model components, we first compare scenarios Flat, Hierarchical-k, and Hierarchical-kv. As shown in Table TABREF25, we can see the lower results obtained by the Flat scenario compared to the other scenarios (e.g. BLEU $16.7$ vs. $17.5$ for resp. Flat and Hierarchical-k), suggesting the effectiveness of encoding the data-structure using a hierarchy. This is expected, as losing explicit delimitation between entities makes it harder a) for the encoder to encode semantics of the objects contained in the table and b) for the attention mechanism to extract salient entities/records.", "FLOAT SELECTED: Table 1: Evaluation on the RotoWire testset using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO), and BLEU. -: number of parameters unavailable." ], "highlighted_evidence": [ "To evaluate the impact of our model components, we first compare scenarios Flat, Hierarchical-k, and Hierarchical-kv. As shown in Table TABREF25, we can see the lower results obtained by the Flat scenario compared to the other scenarios (e.g. BLEU $16.7$ vs. $17.5$ for resp. Flat and Hierarchical-k), suggesting the effectiveness of encoding the data-structure using a hierarchy.", "FLOAT SELECTED: Table 1: Evaluation on the RotoWire testset using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO), and BLEU. -: number of parameters unavailable." ] } ] } ], "2003.11563": [ { "question": "What metrics are used in evaluation?", "answers": [ { "answer": "precision, recall , F1 score", "type": "extractive" } ], "q_uid": "b0a18628289146472aa42f992d0db85c200ec64b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: F1 scores on an unseen (not used for training) part of the training set and the development set on BERT using different augmentation techniques.", "FLOAT SELECTED: Table 3: Class-wise precision and recall with and without oversampling (OS) achieved on unseen part of the training set.", "FLOAT SELECTED: Table 4: Our results on the SLC task (2nd, in bold) alongside comparable results from the competition leaderboard.", "FLOAT SELECTED: Table 5: Our results on the FLC task (7th, in bold) alongside those of better performing teams from the competition leaderboard.", "So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. 
This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.", "We explore the validity of this by performing several experiments with different weights assigned to the minority class. We note that in our experiments use significantly higher weights than the weights proportional to class frequencies in the training data, that are common in literature BIBREF17. Rather than directly using the class proportions of the training set, we show that tuning weights based on performance on the development set is more beneficial. Figure FIGREF22 shows the results of these experiments wherein we are able to maintain the precision on the subset of the training set used for testing while reducing its recall and thus generalising the model. The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score. We note that the gains are not infinite and that a balance must be struck based on the amount of generalisation and the corresponding loss in accuracy. The exact weight to use for the best transfer of classification accuracy is related to the dissimilarity of that other dataset and hence is to be obtained experimentally through hyperparameter search. Our experiments showed that a value of 4 is best suited for this task." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: F1 scores on an unseen (not used for training) part of the training set and the development set on BERT using different augmentation techniques.", "FLOAT SELECTED: Table 3: Class-wise precision and recall with and without oversampling (OS) achieved on unseen part of the training set.", "FLOAT SELECTED: Table 4: Our results on the SLC task (2nd, in bold) alongside comparable results from the competition leaderboard.", "FLOAT SELECTED: Table 5: Our results on the FLC task (7th, in bold) alongside those of better performing teams from the competition leaderboard.", "o as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision.", "The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score." ] } ] } ], "1909.00105": [ { "question": "What metrics are used for evaluation?", "answers": [ { "answer": "Byte-Pair Encoding perplexity (BPE PPL),\nBLEU-1,\nBLEU-4,\nROUGE-L,\npercentage of distinct unigram (D-1),\npercentage of distinct bigrams(D-2),\nuser matching accuracy(UMA),\nMean Reciprocal Rank(MRR)\nPairwise preference over baseline(PP)", "type": "abstractive" }, { "answer": "BLEU-1/4 and ROUGE-L, likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles, user matching accuracy (UMA), Mean Reciprocal Rank (MRR), neural scoring model from BIBREF33 to measure recipe-level coherence", "type": "extractive" }, { "answer": " Distinct-1/2, UMA = User Matching Accuracy, MRR\n= Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)", "type": "abstractive" } ], "q_uid": "5b551ba47d582f2e6467b1b91a8d4d6a30c343ec", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. 
D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model).", "In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.", "We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score.", "Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\\le $6 recipes, while 10% of users have consumed $>$45 recipes.", "Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles\u2014one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)\u2014the proportion where the gold user is ranked highest\u2014and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. 
D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model).", " All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.\n\nWe observe that personalized models make more diverse recipes than ba", "As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. ", "We measure user matching accuracy (UMA)\u2014the proportion where the gold user is ranked highest\u2014and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user." ] }, { "raw_evidence": [ "In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.", "Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles\u2014one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)\u2014the proportion where the gold user is ranked highest\u2014and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. 
Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline.", "Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77." ], "highlighted_evidence": [ "While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes.", "We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles\u2014one `gold' user who consumed the original recipe, and nine randomly generated user profiles.", "We measure user matching accuracy (UMA)\u2014the proportion where the gold user is ranked highest\u2014and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user.", "We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)." ] } ] } ], "1809.06537": [ { "question": "what are the state-of-the-art models?", "answers": [ { "answer": "SVM , CNN , GRU , CNN/GRU+law, r-net , AoA ", "type": "extractive" }, { "answer": "SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4, attention-based method BIBREF3 and other methods we deem important, some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard", "type": "extractive" } ], "q_uid": "e3c9e4bc7bb93461856e1f4354f33010bc7d28d5", "evidence": [ { "raw_evidence": [ "For comparison, we adopt and re-implement three kinds of baselines as follows:", "We implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.", "We implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. 
Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.", "We implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.", "FLOAT SELECTED: Table 1: Experimental results(%). P/R/F1 are reported for positive samples and calculated as the mean score over 10-time experiments. Acc is defined as the proportion of test samples classified correctly, equal to micro-precision. MaxFreq refers to always predicting the most frequent label, i.e. support in our dataset. * indicates methods proposed in previous works." ], "highlighted_evidence": [ "For comparison, we adopt and re-implement three kinds of baselines as follows:\n\nWe implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.\n\nWe implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.\n\nWe implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.", "FLOAT SELECTED: Table 1: Experimental results(%). P/R/F1 are reported for positive samples and calculated as the mean score over 10-time experiments. Acc is defined as the proportion of test samples classified correctly, equal to micro-precision. MaxFreq refers to always predicting the most frequent label, i.e. support in our dataset. * indicates methods proposed in previous works." ] }, { "raw_evidence": [ "For comparison, we adopt and re-implement three kinds of baselines as follows:", "We implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.", "We implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.", "We implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.", "In this paper, we explore the task of predicting judgments of civil cases. 
Comparing with conventional text classification framework, we propose Legal Reading Comprehension framework to handle multiple and complex textual inputs. Moreover, we present a novel neural model, AutoJudge, to incorporate law articles for judgment prediction. In experiments, we compare our model on divorce proceedings with various state-of-the-art baselines of various frameworks. Experimental results show that our model achieves considerable improvement than all the baselines. Besides, visualization results also demonstrate the effectiveness and interpretability of our proposed model." ], "highlighted_evidence": [ "For comparison, we adopt and re-implement three kinds of baselines as follows:\n\nWe implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.\n\nWe implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.\n\nWe implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. ", "Moreover, we present a novel neural model, AutoJudge, to incorporate law articles for judgment prediction. In experiments, we compare our model on divorce proceedings with various state-of-the-art baselines of various frameworks." ] } ] } ], "2003.03014": [ { "question": "Do they analyze specific derogatory words?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "0682bf049f96fa603d50f0fdad0b79a5c55f6c97", "evidence": [ { "raw_evidence": [ "In addition to the public's overall attitudes, it is important to consider variation and change in the specific words used to refer to LGBTQ people. Because these labels potentially convey many different social meanings and have different relationships with dehumanization in the media, a primary focus of this study involves comparing different LGBTQ labels, specifically gay and homosexual. The Gallup survey asked for opinions on legality of \u201chomosexual relations\" until 2008, but then changed the wording to \u201cgay and lesbian relations\". This was likely because many people who identify as gay and lesbian find the word homosexual to be outdated and derogatory. According to the LGBTQ media monitoring organization GLAAD, homosexual's offensiveness originates in the word's dehumanizing clinical history, which had falsely suggested that \u201cpeople attracted to the same sex are somehow diseased or psychologically/emotionally disordered\" . Beyond its outdated clinical associations, some argue that the word homosexual is more closely associated with sex and all of its negative connotations simply by virtue of containing the word sex, while terms such as gay and lesbian avoid such connotations BIBREF52. Most newspapers, including the New York Times, almost exclusively used the word homosexual in articles about gay and lesbian people until the late 1980s BIBREF53. The New York Times began using the word gay in non-quoted text in 1987. Many major newspapers began restricting the use of the word homosexual in 2006 BIBREF52. 
As of 2013, the New York Times has confined the use of homosexual to specific references to sexual activity or clinical orientation, in addition to direct quotes and paraphrases ." ], "highlighted_evidence": [ "The Gallup survey asked for opinions on legality of \u201chomosexual relations\" until 2008, but then changed the wording to \u201cgay and lesbian relations\". This was likely because many people who identify as gay and lesbian find the word homosexual to be outdated and derogatory." ] }, { "raw_evidence": [ "The data for our case study spans over thirty years of articles from the New York Times, from January 1986 to December 2015, and was originally collected by BIBREF68 BIBREF68. The articles come from all sections of the newspaper, such as \u201cWorld\", \u201cNew York & Region\", \u201cOpinion\", \u201cStyle\", and \u201cSports\". Our distributional semantic methods rely on all of the available data in order to obtain the most fine-grained understanding of the relationships between words possible. For the other techniques, we extract paragraphs containing any word from a predetermined list of LGTBQ terms (shown in Table TABREF19).", "FLOAT SELECTED: Table 3: Nearest words to weighted average of all LGBTQ terms\u2019 vectors in 1986, 2000, and 2015", "In addition to the public's overall attitudes, it is important to consider variation and change in the specific words used to refer to LGBTQ people. Because these labels potentially convey many different social meanings and have different relationships with dehumanization in the media, a primary focus of this study involves comparing different LGBTQ labels, specifically gay and homosexual. The Gallup survey asked for opinions on legality of \u201chomosexual relations\" until 2008, but then changed the wording to \u201cgay and lesbian relations\". This was likely because many people who identify as gay and lesbian find the word homosexual to be outdated and derogatory. According to the LGBTQ media monitoring organization GLAAD, homosexual's offensiveness originates in the word's dehumanizing clinical history, which had falsely suggested that \u201cpeople attracted to the same sex are somehow diseased or psychologically/emotionally disordered\" . Beyond its outdated clinical associations, some argue that the word homosexual is more closely associated with sex and all of its negative connotations simply by virtue of containing the word sex, while terms such as gay and lesbian avoid such connotations BIBREF52. Most newspapers, including the New York Times, almost exclusively used the word homosexual in articles about gay and lesbian people until the late 1980s BIBREF53. The New York Times began using the word gay in non-quoted text in 1987. Many major newspapers began restricting the use of the word homosexual in 2006 BIBREF52. As of 2013, the New York Times has confined the use of homosexual to specific references to sexual activity or clinical orientation, in addition to direct quotes and paraphrases ." ], "highlighted_evidence": [ "For the other techniques, we extract paragraphs containing any word from a predetermined list of LGTBQ terms (shown in Table TABREF19).", "FLOAT SELECTED: Table 3: Nearest words to weighted average of all LGBTQ terms\u2019 vectors in 1986, 2000, and 2015", "The Gallup survey asked for opinions on legality of \u201chomosexual relations\" until 2008, but then changed the wording to \u201cgay and lesbian relations\". 
This was likely because many people who identify as gay and lesbian find the word homosexual to be outdated and derogatory." ] } ] } ], "1908.08345": [ { "question": "What rouge score do they achieve?", "answers": [ { "answer": "Best results on unigram:\nCNN/Daily Mail: Rogue F1 43.85\nNYT: Rogue Recall 49.02\nXSum: Rogue F1 38.81", "type": "abstractive" }, { "answer": "Highest scores for ROUGE-1, ROUGE-2 and ROUGE-L on CNN/DailyMail test set are 43.85, 20.34 and 39.90 respectively; on the XSum test set 38.81, 16.50 and 31.27 and on the NYT test set 49.02, 31.02 and 45.55", "type": "abstractive" } ], "q_uid": "c17b609b0b090d7e8f99de1445be04f8f66367d4", "evidence": [ { "raw_evidence": [ "We evaluated summarization quality automatically using ROUGE BIBREF32. We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table TABREF23 summarizes our results on the CNN/DailyMail dataset. The first block in the table includes the results of an extractive Oracle system as an upper bound. We also present the Lead-3 baseline (which simply selects the first three sentences in a document). The second block in the table includes various extractive models trained on the CNN/DailyMail dataset (see Section SECREF5 for an overview). For comparison to our own model, we also implemented a non-pretrained Transformer baseline (TransformerExt) which uses the same architecture as BertSumExt, but with fewer parameters. It is randomly initialized and only trained on the summarization task. TransformerExt has 6 layers, the hidden size is 512, and the feed-forward filter size is 2,048. The model was trained with same settings as in BIBREF3. The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview). We also include an abstractive Transformer baseline (TransformerAbs) which has the same decoder as our abstractive BertSum models; the encoder is a 6-layer Transformer with 768 hidden size and 2,048 feed-forward filter size. The fourth block reports results with fine-tuned Bert models: BertSumExt and its two variants (one without interval embeddings, and one with the large version of Bert), BertSumAbs, and BertSumExtAbs. Bert-based models outperform the Lead-3 baseline which is not a strawman; on the CNN/DailyMail corpus it is indeed superior to several extractive BIBREF7, BIBREF8, BIBREF19 and abstractive models BIBREF6. Bert models collectively outperform all previously proposed extractive and abstractive systems, only falling behind the Oracle upper bound. Among Bert variants, BertSumExt performs best which is not entirely surprising; CNN/DailyMail summaries are somewhat extractive and even abstractive models are prone to copying sentences from the source document when trained on this dataset BIBREF6. Perhaps unsurprisingly we observe that larger versions of Bert lead to performance improvements and that interval embeddings bring only slight gains. Table TABREF24 presents results on the NYT dataset. Following the evaluation protocol in BIBREF27, we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the Oracle upper bound and Lead-3 baseline. The second block in the table contains previously proposed extractive models as well as our own Transformer baseline. 
Compress BIBREF27 is an ILP-based model which combines compression and anaphoricity constraints. The third block includes abstractive models from the literature, and our Transformer baseline. Bert-based models are shown in the fourth block. Again, we observe that they outperform previously proposed approaches. On this dataset, abstractive Bert models generally perform better compared to BertSumExt, almost approaching Oracle performance.", "Table TABREF26 summarizes our results on the XSum dataset. Recall that summaries in this dataset are highly abstractive (see Table TABREF12) consisting of a single sentence conveying the gist of the document. Extractive models here perform poorly as corroborated by the low performance of the Lead baseline (which simply selects the leading sentence from the document), and the Oracle (which selects a single-best sentence in each document) in Table TABREF26. As a result, we do not report results for extractive models on this dataset. The second block in Table TABREF26 presents the results of various abstractive models taken from BIBREF22 and also includes our own abstractive Transformer baseline. In the third block we show the results of our Bert summarizers which again are superior to all previously reported models (by a wide margin).", "FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 3: ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software. Table cells are filled with \u2014 whenever results are not available." ], "highlighted_evidence": [ "The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview).", "Table TABREF26 summarizes our results on the XSum dataset.", "Table TABREF24 presents results on the NYT dataset.", "FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 3: ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software. Table cells are filled with \u2014 whenever results are not available." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). 
Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software.", "FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors\u2019 respective papers or obtained on our data by running publicly released software." ] } ] } ], "1605.07333": [ { "question": "By how much does their best model outperform the state-of-the-art?", "answers": [ { "answer": "0.8% F1 better than the best state-of-the-art", "type": "abstractive" }, { "answer": "Best proposed model achieves F1 score of 84.9 compared to best previous result of 84.1.", "type": "abstractive" } ], "q_uid": "6cd8bad8a031ce6d802ded90f9754088e0c8d653", "evidence": [ { "raw_evidence": [ "Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.", "FLOAT SELECTED: Table 3: State-of-the-art results for relation classification" ], "highlighted_evidence": [ "Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models.", "FLOAT SELECTED: Table 3: State-of-the-art results for relation classification" ] }, { "raw_evidence": [ "Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.", "FLOAT SELECTED: Table 3: State-of-the-art results for relation classification" ], "highlighted_evidence": [ "Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.", "FLOAT SELECTED: Table 3: State-of-the-art results for relation classification" ] } ] } ], "2003.08385": [ { "question": "Does the paper report the performance of the model for each individual language?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "35b3ce3a7499070e9b280f52e2cb0c29b0745380", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Baseline scores in the cross-lingual setting. No Italian samples were seen during training, making this a case of zero-shot cross-lingual transfer. The scores are reported as the macro-average of the F1scores for \u2018favor\u2019 and for \u2018against\u2019." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Baseline scores in the cross-lingual setting. No Italian samples were seen during training, making this a case of zero-shot cross-lingual transfer. 
The scores are reported as the macro-average of the F1scores for \u2018favor\u2019 and for \u2018against\u2019." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 4: Baseline scores in the cross-target setting. For each test set we separately report a German and a French score, as well as their harmonic mean." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Baseline scores in the cross-target setting. For each test set we separately report a German and a French score, as well as their harmonic mean." ] } ] }, { "question": "What is the performance of the baseline?", "answers": [ { "answer": "M-Bert had 76.6 F1 macro score.", "type": "abstractive" }, { "answer": "75.1% and 75.6% accuracy", "type": "abstractive" } ], "q_uid": "71ba1b09bb03f5977d790d91702481cc406b3767", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ] } ] }, { "question": "What was the performance of multilingual BERT?", "answers": [ { "answer": "BERT had 76.6 F1 macro score on x-stance dataset.", "type": "abstractive" } ], "q_uid": "bd40f33452da7711b65faaa248aca359b27fddb6", "evidence": [ { "raw_evidence": [ "To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46. It seems that the supervised part of x-stance has a similar difficulty as the SemEval-2016 BIBREF0 or MPCHI BIBREF22 datasets on which Bert has previously been evaluated.", "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ], "highlighted_evidence": [ "To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46.", "FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks." ] } ] } ], "1908.07245": [ { "question": "What is the state of the art system mentioned?", "answers": [ { "answer": "Two knowledge-based systems,\ntwo traditional word expert supervised systems, six recent neural-based systems, and one BERT feature-based system.", "type": "abstractive" } ], "q_uid": "e82fa03f1638a8c59ceb62bb9a6b41b498950e1f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: F1-score (%) for fine-grained English all-words WSD on the test sets in the framework of Raganato et al. (2017b) (including the development set SE07). Bold font indicates best systems. The five blocks list the MFS baseline, two knowledge-based systems, two traditional word expert supervised systems, six recent neural-based systems and our systems, respectively. Results in first three blocks come from Raganato et al. (2017b), and others from the corresponding papers." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: F1-score (%) for fine-grained English all-words WSD on the test sets in the framework of Raganato et al. (2017b) (including the development set SE07). Bold font indicates best systems. 
The five blocks list the MFS baseline, two knowledge-based systems, two traditional word expert supervised systems, six recent neural-based systems and our systems, respectively. Results in first three blocks come from Raganato et al. (2017b), and others from the corresponding papers." ] } ] } ], "1901.01010": [ { "question": "Do the methods that work best on academic papers also work best on Wikipedia?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "No", "type": "boolean" } ], "q_uid": "1097768b89f8bd28d6ef6443c94feb04c1a1318e", "evidence": [ { "raw_evidence": [ "We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment." ], "highlighted_evidence": [ "Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv." ] }, { "raw_evidence": [ "We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment.", "FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with \u201c\u2020\u201d if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported." ], "highlighted_evidence": [ "We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. 
We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. ", "FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with \u201c\u2020\u201d if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported." ] } ] }, { "question": "What is their system's absolute accuracy?", "answers": [ { "answer": "59.4% on wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1% on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers", "type": "abstractive" } ], "q_uid": "fc1679c714eab822431bbe96f0e9cf4079cd8b8d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with \u201c\u2020\u201d if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with \u201c\u2020\u201d if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported." ] } ] } ], "1809.02279": [ { "question": "What were their best results on the benchmark datasets?", "answers": [ { "answer": "In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0%, we can see that our models outperform other models by large margin, achieving the new state of the art., Our models achieve the new state-of-the-art accuracy on SST-2 and competitive accuracy on SST-5", "type": "extractive" }, { "answer": "accuracy of 87.0%", "type": "extractive" } ], "q_uid": "c35806cf68220b2b9bb082b62f493393b9bdff86", "evidence": [ { "raw_evidence": [ "Table TABREF32 and TABREF33 contain results of the models on SNLI and MultiNLI datasets. In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters. Similarly in MultiNLI, our models match the accuracy of state-of-the-art models in both in-domain (matched) and cross-domain (mismatched) test sets. Note that only the GloVe word vectors are used as word representations, as opposed to some models that introduce character-level features. It is also notable that our proposed architecture does not restrict the selection of pooling method; the performance could further be improved by replacing max-pooling with other advanced algorithms e.g. intra-sentence attention BIBREF39 and generalized pooling BIBREF19 .", "Similar to the NLI experiments, GloVe pretrained vectors, 300D encoders, and 1024D MLP are used. The number of CAS-LSTM layers is fixed to 2 in PI experiments. Two sentence vectors are aggregated using Eq. EQREF29 and fed as input to the MLP. The results on the Quora Question Pairs dataset are summarized in Table TABREF34 . 
Again we can see that our models outperform other models by large margin, achieving the new state of the art.", "FLOAT SELECTED: Table 3: Results of the models on the Quora Question Pairs dataset.", "FLOAT SELECTED: Table 4: Results of the models on the SST dataset. \u2217: models pretrained on large external corpora are used." ], "highlighted_evidence": [ "In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters.", "The results on the Quora Question Pairs dataset are summarized in Table TABREF34 . Again we can see that our models outperform other models by large margin, achieving the new state of the art.", "FLOAT SELECTED: Table 3: Results of the models on the Quora Question Pairs dataset.", "FLOAT SELECTED: Table 4: Results of the models on the SST dataset. \u2217: models pretrained on large external corpora are used." ] }, { "raw_evidence": [ "Table TABREF32 and TABREF33 contain results of the models on SNLI and MultiNLI datasets. In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters. Similarly in MultiNLI, our models match the accuracy of state-of-the-art models in both in-domain (matched) and cross-domain (mismatched) test sets. Note that only the GloVe word vectors are used as word representations, as opposed to some models that introduce character-level features. It is also notable that our proposed architecture does not restrict the selection of pooling method; the performance could further be improved by replacing max-pooling with other advanced algorithms e.g. intra-sentence attention BIBREF39 and generalized pooling BIBREF19 ." ], "highlighted_evidence": [ "Table TABREF32 and TABREF33 contain results of the models on SNLI and MultiNLI datasets. In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters. Similarly in MultiNLI, our models match the accuracy of state-of-the-art models in both in-domain (matched) and cross-domain (mismatched) test sets. " ] } ] } ], "2002.01207": [ { "question": "what linguistics features are used?", "answers": [ { "answer": "POS, gender/number and stem POS", "type": "abstractive" } ], "q_uid": "76ed74788e3eb3321e646c48ae8bf6cdfe46dca1", "evidence": [ { "raw_evidence": [ "Table TABREF17 lists the features that we used for CE recovery. We used Farasa to perform segmentation and POS tagging and to determine stem-templates BIBREF31. Farasa has a reported POS accuracy of 96% on the WikiNews dataset BIBREF31. Though the Farasa diacritizer utilizes a combination of some the features presented herein, namely segmentation, POS tagging, and stem templates, Farasa's SVM-ranking approach requires explicit specification of feature combinations (ex. $Prob(CE\\Vert current\\_word, prev\\_word, prev\\_CE)$). Manual exploration of the feature space is undesirable, and ideally we would want our learning algorithm to do so automatically. The flexibility of the DNN model allowed us to include many more surface level features such as affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities. As we show later, these additional features significantly lowered CEER.", "FLOAT SELECTED: Table 1. Features with examples and motivation." ], "highlighted_evidence": [ "Table TABREF17 lists the features that we used for CE recovery.", "FLOAT SELECTED: Table 1. Features with examples and motivation." 
] } ] } ], "1909.06162": [ { "question": "What is best performing model among author's submissions, what performance it had?", "answers": [ { "answer": "For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively).", "type": "abstractive" } ], "q_uid": "c9305e5794b65b33399c22ac8e4e024f6b757a30", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC" ] } ] }, { "question": "What extracted features were most influencial on performance?", "answers": [ { "answer": "Linguistic", "type": "abstractive" }, { "answer": "BERT", "type": "extractive" } ], "q_uid": "56b7319be68197727baa7d498fa38af0a8440fe4", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ] }, { "raw_evidence": [ "Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).", "Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively.", "Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal." 
], "highlighted_evidence": [ " This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).", "To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. ", "Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal." ] } ] }, { "question": "Did ensemble schemes help in boosting peformance, by how much?", "answers": [ { "answer": "The best ensemble topped the best single model by 0.029 in F1 score on dev (external).", "type": "abstractive" }, { "answer": "They increased F1 Score by 0.029 in Sentence Level Classification, and by 0.044 in Fragment-Level classification", "type": "abstractive" } ], "q_uid": "2268c9044e868ba0a16e92d2063ada87f68b5d03", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ] }, { "raw_evidence": [ "Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.", "We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. 
If the cosine-similarity is greater than $\\lambda \\in \\lbrace .99, .95\\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess.", "Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position.", "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features.", "Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external).", "Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position." ], "highlighted_evidence": [ "Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.\n\nWe choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\\lambda \\in \\lbrace .99, .95\\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess.\n\nFinally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. 
Table TABREF9 (SLC) shows that our submission is ranked at 4th position.", "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features.", "Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external).\n\nFinally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position." ] } ] }, { "question": "Which basic neural architecture perform best by itself?", "answers": [ { "answer": "BERT", "type": "abstractive" } ], "q_uid": "6b7354d7d715bad83183296ce2f3ddf2357cb449", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features." ] } ] }, { "question": "What participating systems had better results than ones authors submitted?", "answers": [ { "answer": "For SLC task : Ituorp, ProperGander and YMJA teams had better results.\nFor FLC task: newspeak and Antiganda teams had better results.", "type": "abstractive" } ], "q_uid": "e949b28f6d1f20e18e82742e04d68158415dc61e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC" ] } ] } ], "1810.05241": [ { "question": "What is the size of the StackExchange dataset?", "answers": [ { "answer": "around 332k questions", "type": "abstractive" } ], "q_uid": "a3efe43a72b76b8f5e5111b54393d00e6a5c97ab", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Statistics of datasets we use in this work. Avg# and Var# indicate the mean and variance of numbers of target phrases per data point, respectively." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistics of datasets we use in this work. Avg# and Var# indicate the mean and variance of numbers of target phrases per data point, respectively." ] } ] }, { "question": "What were the baselines?", "answers": [ { "answer": "CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)", "type": "abstractive" }, { "answer": "CopyRNN BIBREF0, KEA BIBREF4 and Maui BIBREF8, CopyRNN*", "type": "extractive" } ], "q_uid": "f1e90a553a4185a4b0299bd179f4f156df798bce", "evidence": [ { "raw_evidence": [ "We report our model's performance on the present-keyphrase portion of the KP20k dataset in Table TABREF35 . 
To compare with previous works, we provide compute INLINEFORM0 and INLINEFORM1 scores. The new proposed F INLINEFORM2 @ INLINEFORM3 metric indicates consistent ranking with INLINEFORM4 for most cases. Due to its target number sensitivity, we find that its value is closer to INLINEFORM5 for KP20k and Krapivin where average target keyphrases is less and closer to INLINEFORM6 for the other three datasets.", "FLOAT SELECTED: Table 2: Present keyphrase predicting performance on KP20K test set. Compared with CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)." ], "highlighted_evidence": [ "We report our model's performance on the present-keyphrase portion of the KP20k dataset in Table TABREF35 .", "FLOAT SELECTED: Table 2: Present keyphrase predicting performance on KP20K test set. Compared with CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)." ] }, { "raw_evidence": [ "We include four non-neural extractive models and CopyRNN BIBREF0 as baselines. We use CopyRNN to denote the model reported by BIBREF0 , CopyRNN* to denote our implementation of CopyRNN based on their open sourced code. To draw fair comparison with existing study, we use the same model hyperparameter setting as used in BIBREF0 and use exhaustive decoding strategy for most experiments. KEA BIBREF4 and Maui BIBREF8 are trained on a subset of 50,000 documents from either KP20k (Table TABREF35 ) or StackEx (Table TABREF37 ) instead of all documents due to implementation limits (without fine-tuning on target dataset)." ], "highlighted_evidence": [ "We include four non-neural extractive models and CopyRNN BIBREF0 as baselines. We use CopyRNN to denote the model reported by BIBREF0 , CopyRNN* to denote our implementation of CopyRNN based on their open sourced code. To draw fair comparison with existing study, we use the same model hyperparameter setting as used in BIBREF0 and use exhaustive decoding strategy for most experiments. KEA BIBREF4 and Maui BIBREF8 are trained on a subset of 50,000 documents from either KP20k (Table TABREF35 ) or StackEx (Table TABREF37 ) instead of all documents due to implementation limits (without fine-tuning on target dataset)." ] } ] } ], "1909.01383": [ { "question": "by how much did the BLEU score improve?", "answers": [ { "answer": "On average 0.64 ", "type": "abstractive" } ], "q_uid": "b68f72aed961d5ba152e9dc50345e1e832196a76", "evidence": [ { "raw_evidence": [ "The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU.", "FLOAT SELECTED: Table 2: BLEU scores. For CADec, the original implementation was used." ], "highlighted_evidence": [ "The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments).", "FLOAT SELECTED: Table 2: BLEU scores. For CADec, the original implementation was used." 
] } ] } ], "2001.08868": [ { "question": "How better does new approach behave than existing solutions?", "answers": [ { "answer": " On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment, Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively., Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model", "type": "extractive" }, { "answer": "On Coin Collector, proposed model finds shorter path in fewer number of interactions with enironment.\nOn Cooking World, proposed model uses smallest amount of steps and on average has bigger score and number of wins by significant margin.", "type": "abstractive" } ], "q_uid": "df0257ab04686ddf1c6c4d9b0529a7632330b98e", "evidence": [ { "raw_evidence": [ "Results ::: CoinCollector", "In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.", "In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above.", "In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. 
One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed.", "In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games." ], "highlighted_evidence": [ "Results ::: CoinCollector\nIn this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. 
Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.", "In case of Game CoinCollector, ", "In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above.", "In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed.", "In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. 
Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games." ] }, { "raw_evidence": [ "In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.", "FLOAT SELECTED: Table 3: CookingWorld results on the three evaluated settings single, joint and zero-shot." ], "highlighted_evidence": [ "On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.", "FLOAT SELECTED: Table 3: CookingWorld results on the three evaluated settings single, joint and zero-shot." ] } ] } ], "1910.12795": [ { "question": "How much is classification performance improved in experiments for low data regime and class-imbalance problems?", "answers": [ { "answer": "Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000", "type": "abstractive" } ], "q_uid": "3415762847ed13acc3c90de60e3ef42612bc49af", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs \u00b1 one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. 
For example, (40+2) denotes 40 training instances and 2 validation instances per class.", "Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.", "Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs \u00b1 one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.", "Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets.", "Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000." ] } ] } ], "2003.04866": [ { "question": "What are the 12 languages covered?", "answers": [ { "answer": "Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese", "type": "abstractive" }, { "answer": "Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese", "type": "abstractive" } ], "q_uid": "a616a3f0d244368ec588f04dfbc37d77fda01b4c", "evidence": [ { "raw_evidence": [ "Language Selection. Multi-SimLex comprises eleven languages in addition to English. The main objective for our inclusion criteria has been to balance language prominence (by number of speakers of the language) for maximum impact of the resource, while simultaneously having a diverse suite of languages based on their typological features (such as morphological type and language family). Table TABREF10 summarizes key information about the languages currently included in Multi-SimLex. 
We have included a mixture of fusional, agglutinative, isolating, and introflexive languages that come from eight different language families. This includes languages that are very widely used such as Chinese Mandarin and Spanish, and low-resource languages such as Welsh and Kiswahili. We hope to further include additional languages and inspire other researchers to contribute to the effort over the lifetime of this project.", "FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com." ], "highlighted_evidence": [ "Table TABREF10 summarizes key information about the languages currently included in Multi-SimLex.", "FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com." ] } ] } ], "1901.08079": [ { "question": "Do they report results only on English data?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "5fa36dc8f7c4e65acb962fc484989d20b8fdaeec", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Description of training and test datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Description of training and test datasets." ] } ] } ], "1705.01265": [ { "question": "Which hyperparameters were varied in the experiments on the four tasks?", "answers": [ { "answer": "number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding", "type": "abstractive" }, { "answer": "different number of clusters, different embeddings", "type": "extractive" } ], "q_uid": "12159f04e0427fe33fa05af6ba8c950f1a5ce5ea", "evidence": [ { "raw_evidence": [ "We cluster the embeddings with INLINEFORM0 -Means. The k-means clusters are initialized using \u201ck-means++\u201d as proposed in BIBREF9 , while the algorithm is run for 300 iterations. We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia.", "FLOAT SELECTED: Table 1: Scores on F1-measure for named entities segmentation for the different word embeddings across different number of clusters. For each embedding type, we show its dimension and window size. 
For instance, glove40,w5 is 40-dimensional glove embeddings with window size 5.", "FLOAT SELECTED: Table 2: Results in terms of F1-score for named entities classification for the different word clusters across different number of clusters.", "FLOAT SELECTED: Table 3: MAEM scores (lower is better) for sentiment classification across different types of word embeddings and number of clusters.", "FLOAT SELECTED: Table 5: Earth Movers Distance for fine-grained sentiment quantification across different types of word embeddings and number of clusters. The score in brackets denotes the best performance achieved in the challenge." ], "highlighted_evidence": [ "We cluster the embeddings with INLINEFORM0 -Means.", "We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia.", "FLOAT SELECTED: Table 1: Scores on F1-measure for named entities segmentation for the different word embeddings across different number of clusters. For each embedding type, we show its dimension and window size. For instance, glove40,w5 is 40-dimensional glove embeddings with window size 5.", "FLOAT SELECTED: Table 2: Results in terms of F1-score for named entities classification for the different word clusters across different number of clusters.", "FLOAT SELECTED: Table 3: MAEM scores (lower is better) for sentiment classification across different types of word embeddings and number of clusters.", "FLOAT SELECTED: Table 5: Earth Movers Distance for fine-grained sentiment quantification across different types of word embeddings and number of clusters. The score in brackets denotes the best performance achieved in the challenge." ] }, { "raw_evidence": [ "Tables TABREF6 and TABREF7 present the results for the different number of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set.", "Note, also, that using the clusters produced by the out-of-domain embeddings trained on wikipedia that were released as part of BIBREF8 performs surprisingly well. One might have expected their addition to hurt the performance. However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0 as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident though that augmenting the feature space with feature derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1 ." ], "highlighted_evidence": [ "Tables TABREF6 and TABREF7 present the results for the different number of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set.", "One might have expected their addition to hurt the performance. 
However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0 as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident though that augmenting the feature space with feature derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1 ." ] } ] } ], "1906.10225": [ { "question": "what were the evaluation metrics?", "answers": [ { "answer": "INLINEFORM0 scores", "type": "extractive" }, { "answer": "Unlabeled sentence-level F1, perplexity, grammatically judgment performance", "type": "abstractive" } ], "q_uid": "01f4a0a19467947a8f3bdd7ec9fac75b5222d710", "evidence": [ { "raw_evidence": [ "Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length." ], "highlighted_evidence": [ "Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG) on the PTB. Induced URNNG indicates fine-tuning with the URNNG objective. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are calculated on the PTB test set and Syntactic Eval. is from Marvin and Linzen (2018)\u2019s dataset. Results on top do not make any use of annotated trees, while the bottom two results are trained on binarized gold trees. The perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. Also note that all the RNN-based models above (i.e. LSTM/PRPN/ON/RNNG/URNNG) have roughly the same model capacity (see appendix A.3).", "FLOAT SELECTED: Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with \u2020 are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup, which ignores punctuation. ough hyperparameter search.13 See appendix A.2 for the full results (including corpus-level F1) broken down by sentence length.", "FLOAT SELECTED: Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. 
Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG) on the PTB. Induced URNNG indicates fine-tuning with the URNNG objective. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are calculated on the PTB test set and Syntactic Eval. is from Marvin and Linzen (2018)\u2019s dataset. Results on top do not make any use of annotated trees, while the bottom two results are trained on binarized gold trees. The perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. Also note that all the RNN-based models above (i.e. LSTM/PRPN/ON/RNNG/URNNG) have roughly the same model capacity (see appendix A.3).", "FLOAT SELECTED: Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with \u2020 are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup, which ignores punctuation. ough hyperparameter search.13 See appendix A.2 for the full results (including corpus-level F1) broken down by sentence length.", "FLOAT SELECTED: Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall)." ] } ] } ], "1712.05999": [ { "question": "What were their distribution results?", "answers": [ { "answer": "Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those non containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different", "type": "abstractive" } ], "q_uid": "907b3af3cfaf68fe188de9467ed1260e52ec6cf1", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.", "The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.", " Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered." ] } ] } ], "1808.09029": [ { "question": "what previous RNN models do they compare with?", "answers": [ { "answer": "Variational LSTM, CharCNN, Pointer Sentinel-LSTM, RHN, NAS Cell, SRU, QRNN, RAN, 4-layer skip-connection LSTM, AWD-LSTM, Quantized LSTM", "type": "abstractive" } ], "q_uid": "6aaf12505add25dd133c7b0dafe8f4fe966d1f1d", "evidence": [ { "raw_evidence": [ "Table TABREF23 compares the performance of the PRU with state-of-the-art methods. We can see that the PRU achieves the best performance with fewer parameters.", "FLOAT SELECTED: Table 1: Comparison of single model word-level perplexity of our model with state-of-the-art on validation and test sets of Penn Treebank and Wikitext-2 dataset. For evaluation, we select the model with minimum validation loss. Lower perplexity value represents better performance." ], "highlighted_evidence": [ "Table TABREF23 compares the performance of the PRU with state-of-the-art methods. ", "FLOAT SELECTED: Table 1: Comparison of single model word-level perplexity of our model with state-of-the-art on validation and test sets of Penn Treebank and Wikitext-2 dataset. For evaluation, we select the model with minimum validation loss. Lower perplexity value represents better performance." ] } ] } ], "2004.04721": [ { "question": "What are the languages they use in their experiment?", "answers": [ { "answer": "English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish", "type": "abstractive" }, { "answer": "English, Spanish, Finnish", "type": "extractive" } ], "q_uid": "5bc1dc6ebcb88fd0310b21d2a74939e35a4c1a11", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.", "We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another.", "FLOAT SELECTED: Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). 
Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.", "We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. ", "FLOAT SELECTED: Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI.", "As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case." ] }, { "raw_evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method." ], "highlighted_evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI)." ] } ] } ], "2002.04181": [ { "question": "Which sentiment class is the most accurately predicted by ELS systems?", "answers": [ { "answer": "neutral sentiment", "type": "abstractive" } ], "q_uid": "cd06d775f491b4a17c9d616a8729fd45aa2e79bf", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Average Correct Classification Rate (CCR) for named-entity recognition (NER) of four presidential candidates and entity-level sentiment (ELS) analysis by NLP tools and crowdworkers", "Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCR pertains to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Average Correct Classification Rate (CCR) for named-entity recognition (NER) of four presidential candidates and entity-level sentiment (ELS) analysis by NLP tools and crowdworkers", "Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCR pertains to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom." ] } ] } ], "1908.06264": [ { "question": "what were the baselines?", "answers": [ { "answer": "BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN", "type": "abstractive" }, { "answer": "bag-of-words (BOW), term frequency\u2013inverse document frequency (TFIDF), neural-based word embedding, Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe", "type": "extractive" } ], "q_uid": "0af16b164db20d8569df4ce688d5a62c861ace0b", "evidence": [ { "raw_evidence": [ "The experiment results of validation on Friends are shown in Table TABREF19. The proposed model and baselines are evaluated based on the Precision (P.), Recall (R.), and F1-measure (F1).", "FLOAT SELECTED: Table 6: Validation Results (Friends)" ], "highlighted_evidence": [ "The experiment results of validation on Friends are shown in Table TABREF19. ", "FLOAT SELECTED: Table 6: Validation Results (Friends)" ] }, { "raw_evidence": [ "The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency\u2013inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model. All the experiment results are based on the best performances of validation results." ], "highlighted_evidence": [ "Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency\u2013inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model." 
] } ] }, { "question": "What BERT models are used?", "answers": [ { "answer": "BERT-base, BERT-large, BERT-uncased, BERT-cased", "type": "abstractive" } ], "q_uid": "6a14379fee26a39631aebd0e14511ce3756e42ad", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Validation Results (Friends)", "FLOAT SELECTED: Table 7: Experimental Setup of Proposed Model" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Validation Results (Friends)", "FLOAT SELECTED: Table 7: Experimental Setup of Proposed Model" ] } ] } ], "1709.10367": [ { "question": "Do they evaluate on English only datasets?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "40c0f97c3547232d6aa039fcb330f142668dea4b", "evidence": [ { "raw_evidence": [ "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "FLOAT SELECTED: Table 1: Group structure and size of the three corpora analyzed in Section 3." ], "highlighted_evidence": [ "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "FLOAT SELECTED: Table 1: Group structure and size of the three corpora analyzed in Section 3." ] } ] } ], "1908.06267": [ { "question": "Which component is the least impactful?", "answers": [ { "answer": "Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets.", "type": "abstractive" } ], "q_uid": "2858620e0498db2f2224bfbed5263432f0570832", "evidence": [ { "raw_evidence": [ "Results and ablations ::: Ablation studies", "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2).", "Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. 
This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation." ], "highlighted_evidence": [ "Results and ablations ::: Ablation studies\nTo understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2).", "Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation." ] } ] }, { "question": "Which component has the greatest impact on performance?", "answers": [ { "answer": "Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations", "type": "abstractive" }, { "answer": "Removing the master node deteriorates performance across all datasets", "type": "extractive" } ], "q_uid": "545e92833b0ad4ba32eac5997edecf97a366a244", "evidence": [ { "raw_evidence": [ "Results and ablations ::: Ablation studies", "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2)." 
], "highlighted_evidence": [ "Results and ablations ::: Ablation studies\nTo understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.\n\nNumber of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2)." ] }, { "raw_evidence": [ "No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document." ], "highlighted_evidence": [ "No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document." ] } ] } ], "1701.05574": [ { "question": "What is the best reported system?", "answers": [ { "answer": "Gaze Sarcasm using Multi Instance Logistic Regression.", "type": "abstractive" }, { "answer": "the MILR classifier", "type": "extractive" } ], "q_uid": "bbb77f2d6685c9257763ca38afaaef29044b4018", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Classification results for different feature combinations. P\u2192 Precision, R\u2192Recall, F\u2192 F\u02d9score, Kappa\u2192 Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Classification results for different feature combinations. P\u2192 Precision, R\u2192Recall, F\u2192 F\u02d9score, Kappa\u2192 Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively." ] }, { "raw_evidence": [ "For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. 
For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance \u201cbag\u201d of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag.", "For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." ], "highlighted_evidence": [ "For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance \u201cbag\u201d of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag.\n\nFor all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." ] } ] }, { "question": "What cognitive features are used?", "answers": [ { "answer": "Readability (RED), Number of Words (LEN), Avg. Fixation Duration (FDUR), Avg. Fixation Count (FC), Avg. Saccade Length (SL), Regression Count (REG), Skip count (SKIP), Count of regressions from second half\nto first half of the sentence (RSF), Largest Regression Position (LREG), Edge density of the saliency gaze\ngraph (ED), Fixation Duration at Left/Source\n(F1H, F1S), Fixation Duration at Right/Target\n(F2H, F2S), Forward Saccade Word Count of\nSource (PSH, PSS), Forward SaccadeWord Count of Destination\n(PSDH, PSDS), Regressive Saccade Word Count of\nSource (RSH, RSS), Regressive Saccade Word Count of\nDestination (RSDH, RSDS)", "type": "abstractive" } ], "q_uid": "74b338d5352fe1a6fd592e38269a4c81fe79b866", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: The complete set of features used in our system." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The complete set of features used in our system." ] } ] } ], "1909.00694": [ { "question": "What are the results?", "answers": [ { "answer": "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. 
\nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.", "type": "abstractive" } ], "q_uid": "9d578ddccc27dd849244d632dd0f6bf27348ad81", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.", "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.", "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.", "We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.", "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.", "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. ", "We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$." ] } ] }, { "question": "How big is the Japanese data?", "answers": [ { "answer": "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus", "type": "abstractive" }, { "answer": "The ACP corpus has around 700k events split into positive and negative polarity ", "type": "abstractive" } ], "q_uid": "44c4bd6decc86f1091b5fc0728873d9324cdde4e", "evidence": [ { "raw_evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as \u201c\u306e\u3067\u201d (because) and \u201c\u306e\u306b\u201d (in spite of) were present. We treated Cause/Reason (\u539f\u56e0\u30fb\u7406\u7531) and Condition (\u6761\u4ef6) in the original tagset BIBREF15 as Cause and Concession (\u9006\u63a5) as Concession, respectively. 
Here is an example of event pair extraction.", "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.", "We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:", "Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.", "FLOAT SELECTED: Table 2: Details of the ACP dataset." ], "highlighted_evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ", "From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.", "We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well.", "Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.", "FLOAT SELECTED: Table 2: Details of the ACP dataset." ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 2: Details of the ACP dataset." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Details of the ACP dataset." ] } ] }, { "question": "How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?", "answers": [ { "answer": "3%", "type": "abstractive" } ], "q_uid": "c029deb7f99756d2669abad0a349d917428e9c12", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data." 
] } ] } ], "2003.07723": [ { "question": "Does the paper report macro F1?", "answers": [ { "answer": "Yes", "type": "boolean" }, { "answer": "Yes", "type": "boolean" } ], "q_uid": "3a9d391d25cde8af3334ac62d478b36b30079d74", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. \u2018Support\u2019 signifies the number of labels." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. \u2018Support\u2019 signifies the number of labels." ] }, { "raw_evidence": [ "We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by the this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.", "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. \u2018Support\u2019 signifies the number of labels." ], "highlighted_evidence": [ "See Table TABREF37 for a breakdown of all emotions as predicted by the this model.", "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. \u2018Support\u2019 signifies the number of labels." ] } ] } ], "1910.14497": [ { "question": "What are the three measures of bias which are reduced in experiments?", "answers": [ { "answer": "RIPA, Neighborhood Metric, WEAT", "type": "abstractive" } ], "q_uid": "8958465d1eaf81c8b781ba4d764a4f5329f026aa", "evidence": [ { "raw_evidence": [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$.", "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:", "Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the words groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.", "The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. 
The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.", "The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word\u2019s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.", "FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)" ], "highlighted_evidence": [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0.", "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:\n\nWhere $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured.", "The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. ", "The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector.", "FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)" ] } ] } ], "2003.12218": [ { "question": "Do they list all the named entity types present?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "4f243056e63a74d1349488983dc1238228ca76a7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER." ] } ] } ], "1904.09678": [ { "question": "how is quality measured?", "answers": [ { "answer": "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality.", "type": "abstractive" } ], "q_uid": "8f87215f4709ee1eb9ddcc7900c6c054c970160b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting." ] } ] } ], "1910.04269": [ { "question": "Does the model use both spectrogram images and raw waveforms as features?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "dc1fe3359faa2d7daa891c1df33df85558bc461b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Results of the two models and all its variations" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Results of the two models and all its variations" ] } ] } ], "2001.00137": [ { "question": "By how much do they outperform other models in the sentiment in intent classification tasks?", "answers": [ { "answer": "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average", "type": "abstractive" } ], "q_uid": "3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5", "evidence": [ { "raw_evidence": [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.", "Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.", "FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))." ], "highlighted_evidence": [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. ", "Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. 
When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.", "FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))." ] } ] } ], "2002.06644": [ { "question": "What is the baseline for the experiments?", "answers": [ { "answer": "FastText, BiLSTM, BERT", "type": "extractive" }, { "answer": "FastText, BERT , two-layer BiLSTM architecture with GloVe word embeddings", "type": "extractive" } ], "q_uid": "680dc3e56d1dc4af46512284b9996a1056f89ded", "evidence": [ { "raw_evidence": [ "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset." ], "highlighted_evidence": [ "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset." ] }, { "raw_evidence": [ "Baselines and Approach", "In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.", "Baselines and Approach ::: Baselines", "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.", "FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task" ], "highlighted_evidence": [ "Baselines and Approach\nIn this section, we outline baseline models like $BERT_{large}$. 
We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.\n\n", "Baselines and Approach ::: Baselines\nFastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.\n\nBiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.\n\nBERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.", "FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task" ] } ] } ], "1809.04960": [ { "question": "By how much does their system outperform the lexicon-based models?", "answers": [ { "answer": "Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20. \nUnder the generative evaluation setting, the proposed model + IR2 had better BLEU by 0.044, better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029", "type": "abstractive" }, { "answer": "The proposed model is better than both lexicon-based models by a significant margin in all metrics: BLEU 0.261 vs 0.250, ROUGE 0.162 vs 0.155, etc.", "type": "abstractive" } ], "q_uid": "8cc56fc44136498471754186cfa04056017b4e54", "evidence": [ { "raw_evidence": [ "NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.", "Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.", "FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)", "Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.", "FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. 
(METEOR, ROUGE, CIDEr, BLEU: higher is better.)" ], "highlighted_evidence": [ "NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.", "Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. ", "FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)", "Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation.", "FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)" ] }, { "raw_evidence": [ "TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.", "NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.", "Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.", "Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.", "FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)", "FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. 
(METEOR, ROUGE, CIDEr, BLEU: higher is better.)" ], "highlighted_evidence": [ "TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline.", "NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.", "Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation.", "Table TABREF32 shows the performance for our models and the baselines in generative evaluation.", "FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)", "FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)" ] } ] } ], "1909.08859": [ { "question": "How better is accuracy of new model compared to previously reported models?", "answers": [ { "answer": "Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59", "type": "abstractive" } ], "q_uid": "171ebfdc9b3a98e4cdee8f8715003285caeb2f39", "evidence": [ { "raw_evidence": [ "Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.", "FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines." ], "highlighted_evidence": [ "Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models.", "In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF.", "FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines." ] } ] } ], "1905.00563": [ { "question": "What datasets are used to evaluate this approach?", "answers": [ { "answer": " Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs ", "type": "abstractive" }, { "answer": "WN18 and YAGO3-10", "type": "extractive" } ], "q_uid": "bc9c31b3ce8126d1d148b1025c66f270581fde10", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Data Statistics of the benchmarks." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Data Statistics of the benchmarks." ] }, { "raw_evidence": [ "Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. 
To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors." ], "highlighted_evidence": [ "WN18 and YAGO3-10", "Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. " ] } ] } ], "1902.00330": [ { "question": "How big is the performance difference between this method and the baseline?", "answers": [ { "answer": "Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores.", "type": "abstractive" } ], "q_uid": "b0376a7f67f1568a7926eff8ff557a93f434a253", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1." ] } ] } ], "1810.06743": [ { "question": "Which languages do they validate on?", "answers": [ { "answer": "Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur", "type": "abstractive" }, { "answer": "We apply this conversion to the 31 languages, Arabic, Hindi, Lithuanian, Persian, and Russian. , Dutch, Spanish", "type": "extractive" } ], "q_uid": "564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method." 
] }, { "raw_evidence": [ "FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs", "A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.", "FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.", "There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset\u2014which only contains verbs\u2014we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.", "We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.", "For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased\u2014albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs", "We apply this conversion to the 31 languages", "FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.", "Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.", "Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.", "UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance." ] } ] } ], "1905.11901": [ { "question": "what amounts of size were used on german-english?", "answers": [ { "answer": "Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development", "type": "abstractive" }, { "answer": "ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words)", "type": "extractive" } ], "q_uid": "4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb", "evidence": [ { "raw_evidence": [ "We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.", "To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.", "FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE\u2192EN data, and for KO\u2192EN data." ], "highlighted_evidence": [ "We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.", "Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.", "FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE\u2192EN data, and for KO\u2192EN data." ] }, { "raw_evidence": [ "Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6\u20137 BLEU in both data conditions.", "In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. 
However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity.", "FLOAT SELECTED: Table 2: German\u2192English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported." ], "highlighted_evidence": [ "Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6\u20137 BLEU in both data conditions.\n\nIn the ultra-low data condition, reducing the BPE vocabulary size is very effecti", "FLOAT SELECTED: Table 2: German\u2192English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported." ] } ] } ], "1912.13109": [ { "question": "How big is the dataset?", "answers": [ { "answer": "3189 rows of text messages", "type": "extractive" }, { "answer": "Resulting dataset was 7934 messages for train and 700 messages for test.", "type": "abstractive" } ], "q_uid": "5908d7fb6c48f975c5dfc5b19bb0765581df2b25", "evidence": [ { "raw_evidence": [ "Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data." ], "highlighted_evidence": [ "Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295." ] }, { "raw_evidence": [ "Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:", "FLOAT SELECTED: Table 3: Train-test split" ], "highlighted_evidence": [ "The resulting training dataset was thus 7934 messages. 
the final distribution of messages for training and test was thus below:", "FLOAT SELECTED: Table 3: Train-test split" ] } ] } ], "1911.03310": [ { "question": "How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment?", "answers": [ { "answer": "Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.", "type": "extractive" }, { "answer": "explicit projection had a negligible effect on the performance", "type": "extractive" } ], "q_uid": "66125cfdf11d3bf8e59728428e02021177142c3a", "evidence": [ { "raw_evidence": [ "Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.", "We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.", "To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.", "Results ::: Word Alignment.", "Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.", "FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline." ], "highlighted_evidence": [ "Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.", "We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.\n\nTo train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.", "Results ::: Word Alignment.\nTable TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. 
This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.", "FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline." ] }, { "raw_evidence": [ "Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance." ], "highlighted_evidence": [ "Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance." ] } ] } ], "1909.00578": [ { "question": "What are their correlation results?", "answers": [ { "answer": "High correlation results range from 0.472 to 0.936", "type": "abstractive" } ], "q_uid": "ff28d34d1aaa57e7ad553dba09fc924dc21dd728", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Spearman\u2019s \u03c1, Kendall\u2019s \u03c4 and Pearson\u2019s r correlations on DUC-05, DUC-06 and DUC-07 for Q1\u2013Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Spearman\u2019s \u03c1, Kendall\u2019s \u03c4 and Pearson\u2019s r correlations on DUC-05, DUC-06 and DUC-07 for Q1\u2013Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years." ] } ] } ], "1904.05584": [ { "question": "Which downstream sentence-level tasks do they evaluate on?", "answers": [ { "answer": "BIBREF13 , BIBREF18", "type": "extractive" } ], "q_uid": "323e100a6c92d3fe503f7a93b96d821408f92109", "evidence": [ { "raw_evidence": [ "Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .", "table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.", "FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work." ], "highlighted_evidence": [ "Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .", "table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper.", "FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work." 
] } ] } ], "1910.03891": [ { "question": "How much better is performance of proposed method than state-of-the-art methods in experiments?", "answers": [ { "answer": "Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively.", "type": "abstractive" } ], "q_uid": "52f7e42fe8f27d800d1189251dfec7446f0e1d3b", "evidence": [ { "raw_evidence": [ "Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power.", "FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean \u00b1 standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K." ], "highlighted_evidence": [ "Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets.", "FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean \u00b1 standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K." 
] } ] } ], "1610.00879": [ { "question": "What baseline model is used?", "answers": [ { "answer": "Human evaluators", "type": "abstractive" } ], "q_uid": "6412e97373e8e9ae3aa20aa17abef8326dc05450", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set" ] } ] }, { "question": "What stylistic features are used to detect drunk texts?", "answers": [ { "answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count ) \n and Sentiment Ratio", "type": "abstractive" }, { "answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio.", "type": "abstractive" } ], "q_uid": "957bda6b421ef7d2839c3cec083404ac77721f14", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction" ] }, { "raw_evidence": [ "FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction" ] } ] } ], "1704.05572": [ { "question": "What is the accuracy of the proposed technique?", "answers": [ { "answer": "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge", "type": "abstractive" } ], "q_uid": "eb95af36347ed0e0808e19963fe4d058e2ce3c9f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9" ] } ] } ], "1911.07228": [ { "question": "How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?", "answers": [ { "answer": "Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF ", "type": "abstractive" } ], "q_uid": "71d59c36225b5ee80af11d3568bdad7425f17b0c", "evidence": [ { "raw_evidence": [ "Table 2 shows our experiments on two models with and without different pre-trained word embedding \u2013 KP means the Kyubyong Park\u2019s pre-trained word embeddings and EG means Edouard Grave\u2019s pre-trained word embeddings.", "FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings" ], "highlighted_evidence": [ "Table 2 shows our experiments on two models with and without different pre-trained word embedding \u2013 KP means the Kyubyong Park\u2019s pre-trained word embeddings and EG means Edouard Grave\u2019s pre-trained word embeddings.", "FLOAT SELECTED: Table 2. 
F1 score of two models with different pre-trained word embeddings" ] } ] } ], "1603.07044": [ { "question": "How much performance gap between their approach and the strong handcrafted method?", "answers": [ { "answer": "0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C", "type": "abstractive" } ], "q_uid": "08333e4dd1da7d6b5e9b645d40ec9d502823f5d7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Compared with other systems (bold is best)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Compared with other systems (bold is best)." ] } ] } ], "1902.09314": [ { "question": "How big is their model?", "answers": [ { "answer": "Proposed model has 1.16 million parameters and 11.04 MB.", "type": "abstractive" } ], "q_uid": "8434974090491a3c00eed4f22a878f0b70970713", "evidence": [ { "raw_evidence": [ "To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .", "FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold." ], "highlighted_evidence": [ "Statistical results are reported in Table TABREF37 .", "FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold." ] } ] } ], "1910.11769": [ { "question": "Which tested technique was the worst performer?", "answers": [ { "answer": "Depeche + SVM", "type": "extractive" } ], "q_uid": "a4e66e842be1438e5cd8d7cb2a2c589f494aee27", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)", "We computed bag-of-words-based benchmarks using the following methods:", "Classification with TF-IDF + Linear SVM (TF-IDF + SVM)", "Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)", "Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)", "Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)", "We computed bag-of-words-based benchmarks using the following methods:\n\nClassification with TF-IDF + Linear SVM (TF-IDF + SVM)\n\nClassification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)\n\nClassification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)\n\nCombination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)" ] } ] } ], "1909.13375": [ { "question": "What is the difference in performance between the proposed model and the state of the art on other question types?", "answers": [ { "answer": "For single-span questions, the proposed LARGE-SQUAD improves over the MTMSNlarge baseline by 2.1 EM and 1.55 F1.\nFor number type questions, the MTMSNlarge baseline improves over LARGE-SQUAD by 3.11 EM and 2.98 F1. \nFor date questions, LARGE-SQUAD improves by 2.02 EM, but MTMSNlarge improves by 4.39 F1.", "type": "abstractive" } ], "q_uid": "579941de2838502027716bae88e33e79e69997a6", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2. Performance of different models on DROP\u2019s development set in terms of Exact Match (EM) and F1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. 
Performance of different models on DROP\u2019s development set in terms of Exact Match (EM) and F1." ] } ] }, { "question": "What is the performance of the proposed model on the entire DROP dataset?", "answers": [ { "answer": "The proposed model achieves EM 77.63 and F1 80.73 on the test set and EM 76.95 and F1 80.25 on the dev set", "type": "abstractive" } ], "q_uid": "9a65cfff4d99e4f9546c72dece2520cae6231810", "evidence": [ { "raw_evidence": [ "Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.", "FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard" ], "highlighted_evidence": [ "Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.", "FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard" ] } ] } ], "1909.00430": [ { "question": "Does the system trained only using XR loss outperform the fully supervised neural system?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. \u2217 indicates that the method\u2019s result is significantly better than all baseline methods, \u2020 indicates that the method\u2019s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. \u2217 indicates that the method\u2019s result is significantly better than all baseline methods, \u2020 indicates that the method\u2019s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)." ] } ] }, { "question": "How accurate is the aspect based sentiment classifier trained only using the XR loss?", "answers": [ { "answer": "BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n", "type": "abstractive" } ], "q_uid": "e42fbf6c183abf1c6c2321957359c7683122b48e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. 
\u2217 indicates that the method\u2019s result is significantly better than all baseline methods, \u2020 indicates that the method\u2019s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. \u2217 indicates that the method\u2019s result is significantly better than all baseline methods, \u2020 indicates that the method\u2019s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)." ] } ] } ], "1910.00912": [ { "question": "What metrics other than entity tagging are compared?", "answers": [ { "answer": "We also report the metrics in BIBREF7 for consistency, we report the span F1, Exact Match (EM) accuracy of the entire sequence of labels, metric that combines intent and entities", "type": "extractive" } ], "q_uid": "7c794fa0b2818d354ca666969107818a2ffdda0c", "evidence": [ { "raw_evidence": [ "Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.", "FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.", "In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions \u2013 e.g., a match is when all the three sequences are correct.", "Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\\%$) with respect to frame ($82.60\\%$) and frame element ($79.73\\%$), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score ($89.42\\%$) is lower than the frame element identification task ($92.26\\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. 
Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identification is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar." ], "highlighted_evidence": [ "Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems.", "FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.", "Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions \u2013 e.g., a match is when all the three sequences are correct.", "We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks." ] } ] } ], "1910.03814": [ { "question": "What is the results of multimodal compared to unimodal models?", "answers": [ { "answer": "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 ", "type": "abstractive" } ], "q_uid": "4e9684fd68a242cb354fa6961b0e3b5c35aae4b6", "evidence": [ { "raw_evidence": [ "Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.", "FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time." ], "highlighted_evidence": [ "Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available.", "FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time." 
] } ] } ], "1701.00185": [ { "question": "What were their performance results?", "answers": [ { "answer": "On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%", "type": "abstractive" } ], "q_uid": "9e04730907ad728d62049f49ac828acb4e0a1a2a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.", "FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.", "FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models." ] } ] }, { "question": "By how much did they outperform the other methods?", "answers": [ { "answer": "on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI", "type": "abstractive" } ], "q_uid": "5a0841cc0628e872fe473874694f4ab9411a1d10", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.", "FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.", "FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. 
For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models." ] } ] } ], "1912.01673": [ { "question": "What are all 15 types of modifications ilustrated in the dataset?", "answers": [ { "answer": "- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past", "type": "abstractive" } ], "q_uid": "2d536961c6e1aec9f8491e41e383dc0aac700e0a", "evidence": [ { "raw_evidence": [ "We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.", "FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round." ], "highlighted_evidence": [ "We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.", "FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round." ] } ] } ], "1706.08032": [ { "question": "What were their results on the three datasets?", "answers": [ { "answer": "accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR", "type": "abstractive" } ], "q_uid": "efb3a87845460655c53bd7365bcb8393c99358ec", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION" ], "highlighted_evidence": [ "FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION" ] } ] }, { "question": "What semantic rules are proposed?", "answers": [ { "answer": "rules that compute polarity of words after POS tagging or parsing steps", "type": "abstractive" } ], "q_uid": "d60a3887a0d434abc0861637bbcd9ad0c596caf4", "evidence": [ { "raw_evidence": [ "In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:", "@lonedog bwahahah...you are amazing! However, it was quite the letdown.", "@kirstiealley my dentist is great but she's expensive...=(", "In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. 
The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.", "FLOAT SELECTED: Table I SEMANTIC RULES [12]" ], "highlighted_evidence": [ "In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:\n\n@lonedog bwahahah...you are amazing! However, it was quite the letdown.\n\n@kirstiealley my dentist is great but she's expensive...=(\n\nIn two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.", "FLOAT SELECTED: Table I SEMANTIC RULES [12]" ] } ] } ], "1911.01799": [ { "question": "What was the performance of both approaches on their dataset?", "answers": [ { "answer": "ERR of 19.05 with i-vectors and 15.52 with x-vectors", "type": "abstractive" } ], "q_uid": "8c0a0747a970f6ea607ff9b18cfeb738502d9a95", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." ] } ] }, { "question": "What genres are covered?", "answers": [ { "answer": "genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement", "type": "abstractive" } ], "q_uid": "a2be2bd84e5ae85de2ab9968147b3d49c84dfb7f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1. The distribution over genres." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1. The distribution over genres." ] } ] }, { "question": "Which of the two speech recognition models works better overall on CN-Celeb?", "answers": [ { "answer": "x-vector", "type": "abstractive" } ], "q_uid": "944d5dbe0cfc64bf41ea36c11b1d378c408d40b8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." 
] } ] }, { "question": "By how much is performance on CN-Celeb inferior to performance on VoxCeleb?", "answers": [ { "answer": "For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb", "type": "abstractive" } ], "q_uid": "327e6c6609fbd4c6ae76284ca639951f03eb4a4c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets." ] } ] } ], "1812.06705": [ { "question": "On what datasets is the new model evaluated on?", "answers": [ { "answer": "SST (Stanford Sentiment Treebank), Subj (Subjectivity dataset), MPQA Opinion Corpus, RT is another movie review sentiment dataset, TREC is a dataset for classification of the six question types", "type": "extractive" } ], "q_uid": "df8cc1f395486a12db98df805248eb37c087458b", "evidence": [ { "raw_evidence": [ "SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).", "Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.", "MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).", "RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.", "TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).", "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." 
], "highlighted_evidence": [ "SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).\n\nSubj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.\n\nMPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).\n\nRT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.\n\nTREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).", "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." ] } ] }, { "question": "How do the authors measure performance?", "answers": [ { "answer": "Accuracy across six datasets", "type": "abstractive" } ], "q_uid": "6e97c06f998f09256be752fa75c24ba853b0db24", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." ] } ] }, { "question": "Are other pretrained language models also evaluated for contextual augmentation? ", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "63bb39fd098786a510147f8ebc02408de350cb7c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. \u201cw/\u201d represents \u201cwith\u201d, lines marked with \u201c*\u201d are experiments results from Kobayashi(Kobayashi, 2018)." ] } ] } ], "1905.08949": [ { "question": "What is the latest paper covered by this survey?", "answers": [ { "answer": "Kim et al. 
(2019)", "type": "abstractive" } ], "q_uid": "999b20dc14cb3d389d9e3ba5466bc3869d2d6190", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient." ] } ] } ], "2001.06286": [ { "question": "What is the state of the art?", "answers": [ { "answer": "BERTje BIBREF8, an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19., mBERT", "type": "extractive" } ], "q_uid": "6e962f1f23061f738f651177346b38fd440ff480", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results of RobBERT fine-tuned on several downstream tasks compared to the state of the art on the tasks. For accuracy, we also report the 95% confidence intervals. (Results annotated with * from van der Burgh and Verberne (2019), ** = from de Vries et al. (2019), *** from Allein et al. (2020))", "We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of RobBERT fine-tuned on several downstream tasks compared to the state of the art on the tasks. For accuracy, we also report the 95% confidence intervals. (Results annotated with * from van der Burgh and Verberne (2019), ** = from de Vries et al. (2019), *** from Allein et al. (2020))", "This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19." ] } ] } ], "1902.00672": [ { "question": "How does the model compare with the MMR baseline?", "answers": [ { "answer": " Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ( $5\\%$ ) and MRMR ( $7\\%$ )", "type": "extractive" } ], "q_uid": "babe72f0491e65beff0e5889380e8e32d7a81f78", "evidence": [ { "raw_evidence": [ "Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. 
Algorithm \"Detection of hypergraph transversals for text summarization\" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knaspack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm \"Detection of hypergraph transversals for text summarization\" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. The main difference between algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" is the stopping criterion: in algorithm \"Detection of hypergraph transversals for text summarization\" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm \"Detection of hypergraph transversals for text summarization\" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ .", "FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems." ], "highlighted_evidence": [ "While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only.", "FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems." ] } ] } ], "2001.10161": [ { "question": "How well did the system do?", "answers": [ { "answer": "the neural approach is generally preferred by a greater percentage of participants than the rules or random, human-made game outperforms them all", "type": "extractive" } ], "q_uid": "c180f44667505ec03214d44f4970c0db487a8bae", "evidence": [ { "raw_evidence": [ "We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games\u2014including description generation and game assembly, which can't easily be isolated from graph construction\u2014generated by different methods. 
This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages for both for plot sections as well as links to other potential stories. From the process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies were conducted and the corresponding setup are presented below.", "Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criteria for each game is collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the neural model created game and one from each of the other approaches\u2014this gave us 13 on average for the other approaches in the mystery genre and 12 for fairy-tales.", "FLOAT SELECTED: Table 4: Results of the full game evaluation participant study. *Indicates statistical significance (p < 0.05).", "In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception to is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game." ], "highlighted_evidence": [ "We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games\u2014including description generation and game assembly, which can't easily be isolated from graph construction\u2014generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. 
Both studies are performed across two genres: mystery and fairy-tales.", "Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criteria for each game is collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the neural model created game and one from each of the other approaches\u2014this gave us 13 on average for the other approaches in the mystery genre and 12 for fairy-tales.", "FLOAT SELECTED: Table 4: Results of the full game evaluation participant study. *Indicates statistical significance (p < 0.05).", "In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception to is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game." ] } ] } ], "1909.00279": [ { "question": "How much is proposed model better in perplexity and BLEU score than typical UMT models?", "answers": [ { "answer": "Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBleu of the best model is 6.57 compared to best baseline 5.50.", "type": "abstractive" } ], "q_uid": "d484a71e23d128f146182dccc30001df35cdf93f", "evidence": [ { "raw_evidence": [ "As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.", "FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence." ], "highlighted_evidence": [ "We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.", "FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. 
Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence." ] } ] } ], "1701.02877": [ { "question": "What web and user-generated NER datasets are used for the analysis?", "answers": [ { "answer": "MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC", "type": "abstractive" } ], "q_uid": "94e0cf44345800ef46a8c7d52902f074a1139e1a", "evidence": [ { "raw_evidence": [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.", "FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes." ], "highlighted_evidence": [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details).", "FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes." ] } ] } ], "1911.00069": [ { "question": "How big are the datasets?", "answers": [ { "answer": "In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents", "type": "abstractive" } ], "q_uid": "5c90e1ed208911dbcae7e760a553e912f8c237a5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.", "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.", "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. 
It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ] } ] } ], "1810.00663": [ { "question": "What was the performance of their model?", "answers": [ { "answer": "For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81", "type": "abstractive" } ], "q_uid": "3aee5c856e0ee608a7664289ffdd11455d153234", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol \u2191 indicates that higher results are better in the corresponding column; \u2193 indicates that lower is better." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol \u2191 indicates that higher results are better in the corresponding column; \u2193 indicates that lower is better." ] } ] } ], "1809.05752": [ { "question": "What are their initial results on this task?", "answers": [ { "answer": "Achieved the highest per-domain scores on Substance (F1 \u2248 0.8) and the lowest scores on Interpersonal and Mood (F1 \u2248 0.5), and show consistency in per-domain performance rankings between MLP and RBF models.", "type": "abstractive" } ], "q_uid": "fbee81a9d90ff23603ee4f5986f9e8c0eb035b52", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs." ] } ] } ], "1910.05154": [ { "question": "What are the different bilingual models employed?", "answers": [ { "answer": " Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target", "type": "extractive" } ], "q_uid": "85abd60094c92eb16f39f861c6de8c2064807d02", "evidence": [ { "raw_evidence": [ "We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs.", "In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6. 
The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation obo\u00e1+ng\u00e1). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically.", "FLOAT SELECTED: Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The \u201c+\u201d mark means the discovered type is a concatenation of two existing true types." ], "highlighted_evidence": [ "In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence).", "The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. ", " Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation obo\u00e1+ng\u00e1). However, the models still differ for most of their alignments in the table. ", "FLOAT SELECTED: Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The \u201c+\u201d mark means the discovered type is a concatenation of two existing true types." ] } ] } ], "1909.00754": [ { "question": "Does this approach perform better in the multi-domain or single-domain setting?", "answers": [ { "answer": "single-domain setting", "type": "abstractive" } ], "q_uid": "ed7a3e7fc1672f85a768613e7d1b419475950ab4", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. 
We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)." ] } ] } ], "2002.11402": [ { "question": "What is the difference in recall score between the systems?", "answers": [ { "answer": "Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference.", "type": "abstractive" } ], "q_uid": "1771a55236823ed44d3ee537de2e85465bf03eaf", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ] } ] }, { "question": "What is their f1 score and recall?", "answers": [ { "answer": "F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference.", "type": "abstractive" } ], "q_uid": "1d74fd1d38a5532d20ffae4abbadaeda225b6932", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ] } ] } ], "2002.00652": [ { "question": "How big is improvement in performances of proposed model over state of the art?", "answers": [ { "answer": "Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "type": "extractive" } ], "q_uid": "cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d", "evidence": [ { "raw_evidence": [ "We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).", "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. 
Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005." ], "highlighted_evidence": [ "EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).", "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005." ] } ] } ], "1905.06566": [ { "question": "Is the baseline a non-heirarchical model like BERT?", "answers": [ { "answer": "There were hierarchical and non-hierarchical baselines; BERT was one of those baselines", "type": "abstractive" } ], "q_uid": "fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).", "Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and \u201csentence\u201d level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. 
We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).", "We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training." ] } ] } ], "1901.04899": [ { "question": "Did they compare against other systems?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "40e3639b79e2051bf6bce300d06548e7793daee0", "evidence": [ { "raw_evidence": [ "The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.", "FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)" ], "highlighted_evidence": [ "The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.", "FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)" ] } ] } ], "1606.05320": [ { "question": "How large is the gap in performance between the HMMs and the LSTMs?", "answers": [ { "answer": "With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. 
When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower.", "type": "abstractive" } ], "q_uid": "6ea63327ffbab2fc734dd5c2414e59d3acc56ea5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance." ] } ] } ], "1809.10644": [ { "question": "what was their system's f1 performance?", "answers": [ { "answer": "Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively.", "type": "abstractive" } ], "q_uid": "a3f108f60143d13fe38d911b1cc3b17bdffde3bd", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: F1 Results3", "The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: F1 Results3", "Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1." ] } ] } ], "1910.03467": [ { "question": "Is the supervised morphological learner tested on Japanese?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "84737d871bde8058d8033e496179f7daec31c2d3", "evidence": [ { "raw_evidence": [ "We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.", "FLOAT SELECTED: Table 1: Results of Japanese-Vietnamese NMT systems" ], "highlighted_evidence": [ "We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.", "FLOAT SELECTED: Table 1: Results of Japanese-Vietnamese NMT systems" ] } ] } ], "1908.07816": [ { "question": "How better is proposed method than baselines perpexity wise?", "answers": [ { "answer": "Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set.", "type": "abstractive" } ], "q_uid": "c034f38a570d40360c3551a6469486044585c63c", "evidence": [ { "raw_evidence": [ "Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).", "FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset." ], "highlighted_evidence": [ "Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets.", "FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset." 
] } ] } ], "1810.09774": [ { "question": "Which training dataset allowed for the best generalization to benchmark sets?", "answers": [ { "answer": "MultiNLI", "type": "abstractive" } ], "q_uid": "a48c6d968707bd79469527493a72bfb4ef217007", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. \u2206 is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. \u2206 is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined." ] } ] } ], "2004.03744": [ { "question": "How many natural language explanations are human-written?", "answers": [ { "answer": "Totally 6980 validation and test image-sentence pairs have been corrected.", "type": "abstractive" } ], "q_uid": "5dfa59c116e0ceb428efd99bab19731aa3df4bbd", "evidence": [ { "raw_evidence": [ "e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.", "FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected." ], "highlighted_evidence": [ "The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.", "FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected." ] } ] } ], "2001.06888": [ { "question": "What are the baseline state of the art models?", "answers": [ { "answer": "Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention", "type": "abstractive" } ], "q_uid": "8a871b136ccef78391922377f89491c923a77730", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours" ] } ] } ], "1709.10217": [ { "question": "What was the result of the highest performing system?", "answers": [ { "answer": "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2", "type": "abstractive" } ], "q_uid": "96c09ece36a992762860cde4c110f1653c110d96", "evidence": [ { "raw_evidence": [ "There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. 
We will add the complete lists of the evaluation results in the version of full paper.", "Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2.", "FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.", "FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.", "FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively." ], "highlighted_evidence": [ "Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively.", "Therefore, Table TABREF16 shows the complete results of the task 2.", "FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.", "FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.", "FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively." ] } ] } ], "1901.02262": [ { "question": "What are the baselines that Masque is compared against?", "answers": [ { "answer": "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D", "type": "abstractive" } ], "q_uid": "2d274c93901c193cf7ad227ab28b1436c5f410af", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.", "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.", "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ] } ] }, { "question": "What is the performance achieved on NarrativeQA?", "answers": [ { "answer": "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87", "type": "abstractive" } ], "q_uid": "e63bde5c7b154fbe990c3185e2626d13a1bad171", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ] } ] } ], "1911.12579": [ { "question": "How many uniue words are in the dataset?", "answers": [ { "answer": "908456 unique words are available in collected corpus.", "type": "abstractive" } ], "q_uid": "a1064307a19cd7add32163a70b6623278a557946", "evidence": [ { "raw_evidence": [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.", "FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources." ], "highlighted_evidence": [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.", "FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources." ] } ] } ], "1707.00110": [ { "question": "How much is the BLEU score?", "answers": [ { "answer": "Ranges from 44.22 to 100.00 depending on K and the sequence length.", "type": "abstractive" } ], "q_uid": "6e8c587b6562fafb43a7823637b84cd01487059a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention." ] } ] } ], "1909.01013": [ { "question": "What are new best results on standard benchmark?", "answers": [ { "answer": "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43", "type": "abstractive" } ], "q_uid": "d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636", "evidence": [ { "raw_evidence": [ "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.", "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. \u2020Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs." ], "highlighted_evidence": [ "Table TABREF15 shows the final results on Vecmap.", "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. \u2020Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs." 
] } ] }, { "question": "How better is performance compared to competitive baselines?", "answers": [ { "answer": "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06", "type": "abstractive" } ], "q_uid": "54c7fc08598b8b91a8c0399f6ab018c45e259f79", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. \u2020Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.", "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. \u2020Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.", "Table TABREF15 shows the final results on Vecmap." ] } ] }, { "question": "What 6 language pairs is experimented on?", "answers": [ { "answer": "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI", "type": "abstractive" } ], "q_uid": "03ce42ff53aa3f1775bc57e50012f6eb1998c480", "evidence": [ { "raw_evidence": [ "Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization.", "FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap." ], "highlighted_evidence": [ "Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12.", "FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap." ] } ] } ], "1605.08675": [ { "question": "Do they compare DeepER against other approaches?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "63496705fff20c55d4b3d8cdf4786f93e742dd3d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. 
Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)." ] } ] } ], "1911.04952": [ { "question": "What are lyrical topics present in the metal genre?", "answers": [ { "answer": "Table TABREF10 displays the twenty resulting topics", "type": "extractive" } ], "q_uid": "447eb98e602616c01187960c9c3011c62afd7c27", "evidence": [ { "raw_evidence": [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.", "FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)" ], "highlighted_evidence": [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA.", "FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)" ] } ] } ], "1910.00825": [ { "question": "By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?", "answers": [ { "answer": "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25", "type": "abstractive" } ], "q_uid": "f398587b9a0008628278a5ea858e01d3f5559f65", "evidence": [ { "raw_evidence": [ "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.", "FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds." ], "highlighted_evidence": [ "We show all the models' results in Table TABREF24", "FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds." ] } ] } ], "1910.00458": [ { "question": "What are state of the art methods MMM is compared to?", "answers": [ { "answer": "FTLM++, BERT-large, XLNet", "type": "abstractive" } ], "q_uid": "9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines." ] } ] } ], "1909.08824": [ { "question": "By how much do they improve the accuracy of inferences over state-of-the-art methods?", "answers": [ { "answer": "ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively.", "type": "abstractive" } ], "q_uid": "8e2b125426d1220691cceaeaf1875f76a6049cbd", "evidence": [ { "raw_evidence": [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens.", "FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ], "highlighted_evidence": [ "Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. ", "FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ] } ] }, { "question": "Which models do they use as baselines on the Atomic dataset?", "answers": [ { "answer": "RNN-based Seq2Seq, Variational Seq2Seq, VRNMT , CWVAE-Unpretrained", "type": "extractive" } ], "q_uid": "42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1", "evidence": [ { "raw_evidence": [ "We compared our proposed model with the following four baseline methods:", "RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.", "Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.", "VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.", "CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.", "Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. 
The the best result for each dimension is emboldened." ], "highlighted_evidence": [ "We compared our proposed model with the following four baseline methods:\n\nRNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.\n\nVariational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.\n\nVRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.\n\nCWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.\n\nNote that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.", "FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened." ] } ] } ], "1701.03214": [ { "question": "How much improvement does their method get over the fine tuning baseline?", "answers": [ { "answer": "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE.", "type": "abstractive" } ], "q_uid": "a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE." ] } ] } ], "1611.02550": [ { "question": "By how much do they outpeform previous results on the word discrimination task?", "answers": [ { "answer": "Their best average precision tops previous best result by 0.202", "type": "abstractive" } ], "q_uid": "b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations." ] } ] } ], "1908.05434": [ { "question": "By how much do they outperform previous state-of-the-art models?", "answers": [ { "answer": "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)", "type": "abstractive" } ], "q_uid": "2d4d0735c50749aa8087d1502ab7499faa2f0dd8", "evidence": [ { "raw_evidence": [ "All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . 
During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted." ], "highlighted_evidence": [ "We report the mean metrics from the CV in Table TABREF14 .", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. 
The best and second best results are highlighted.", "FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted." ] } ] } ], "1909.02480": [ { "question": "What is the performance difference between proposed method and state-of-the-arts on these datasets?", "answers": [ { "answer": "Difference is around 1 BLEU score lower on average than state of the art methods.", "type": "abstractive" } ], "q_uid": "ba6422e22297c7eb0baa381225a2f146b9621791", "evidence": [ { "raw_evidence": [ "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in \u00a7 SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work.", "FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l \u00d7 r is the total number of candidates for rescoring." ], "highlighted_evidence": [ "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring.", "FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l \u00d7 r is the total number of candidates for rescoring." ] } ] } ], "2004.01694": [ { "question": "What percentage fewer errors did professional translations make?", "answers": [ { "answer": "36%", "type": "abstractive" } ], "q_uid": "cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. 
Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher\u2019s exact test (two-tailed) for each pair of translation outputs.", "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher\u2019s exact test (two-tailed) for each pair of translation outputs.", "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3", " The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32." ] } ] } ], "1904.10500": [ { "question": "Are the intent labels imbalanced in the dataset?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "e659ceb184777015c12db2da5ae396635192f0b0", "evidence": [ { "raw_evidence": [ "For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. 
For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics.", "FLOAT SELECTED: Table 2: AMIE Dataset Statistics: Slots and Intent Keywords" ], "highlighted_evidence": [ " Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics.", "FLOAT SELECTED: Table 2: AMIE Dataset Statistics: Slots and Intent Keywords" ] } ] } ], "1711.11221": [ { "question": "What evaluations did the authors use on their system?", "answers": [ { "answer": "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence.", "type": "abstractive" } ], "q_uid": "c1c611409b5659a1fd4a870b6cc41f042e2e9889", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.", "FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.", "FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.", "FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.", "FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets." ] } ] } ], "1809.09795": [ { "question": "What are the 7 different datasets?", "answers": [ { "answer": "SemEval 2018 Task 3, BIBREF20, BIBREF4, SARC 2.0, SARC 2.0 pol, Sarcasm Corpus V1 (SC-V1), Sarcasm Corpus V2 (SC-V2)", "type": "extractive" } ], "q_uid": "46570c8faaeefecc8232cfc2faab0005faaba35f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. 
We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC)." ] } ] } ], "2003.01769": [ { "question": "By how much does using phonetic feedback improve state-of-the-art systems?", "answers": [ { "answer": "Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9", "type": "abstractive" } ], "q_uid": "e1b36927114969f3b759cba056cfb3756de474e4", "evidence": [ { "raw_evidence": [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.", "FLOAT SELECTED: Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses." ], "highlighted_evidence": [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.", "FLOAT SELECTED: Table 2. 
Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses." ] } ] } ], "1806.09103": [ { "question": "what are the baselines?", "answers": [ { "answer": "AS Reader, GA Reader, CAS Reader", "type": "abstractive" } ], "q_uid": "f513e27db363c28d19a29e01f758437d7477eb24", "evidence": [ { "raw_evidence": [ "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability.", "FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with \u2020 are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.", "FLOAT SELECTED: Table 3: Case study on CMRC-2017." ], "highlighted_evidence": [ "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline", "FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with \u2020 are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.", "FLOAT SELECTED: Table 3: Case study on CMRC-2017." ] } ] } ], "1711.02013": [ { "question": "How do they measure performance of language model tasks?", "answers": [ { "answer": "BPC, Perplexity", "type": "abstractive" } ], "q_uid": "3070d6d6a52aa070f0c0a7b4de8abddd3da4f056", "evidence": [ { "raw_evidence": [ "In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resource used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperform baseline methods. It is worth noticing that the continuous cache pointer can also be applied to output of our Predict Network without modification. Visualizations of tree structure generated from learned PTB language model are included in Appendix . In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. By removing Parsing Network, we observe a significant drop of performance. This stands as empirical evidence regarding the benefit of having structure information to control attention.", "FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set", "Word-level Language Model" ], "highlighted_evidence": [ "In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. 
", "FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set", "Word-level Language Model" ] } ] } ], "1707.03764": [ { "question": "How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?", "answers": [ { "answer": "They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline", "type": "abstractive" } ], "q_uid": "157b9f6f8fb5d370fa23df31de24ae7efb75d6f3", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.", "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.", "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). ", "We present finer-grained scores showing the breakdown per language in Table TABREF24 .", "The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline).\n\n" ] } ] } ], "1701.06538": [ { "question": "What improvement does the MOE model make over the SOTA on language modelling?", "answers": [ { "answer": "Perpexity is improved from 34.7 to 28.0.", "type": "abstractive" } ], "q_uid": "e8fcfb1412c3b30da6cbc0766152b6e11e17196c", "evidence": [ { "raw_evidence": [ "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .", "In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . 
Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.", "FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C." ], "highlighted_evidence": [ "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .", " Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.", "FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C." ] } ] } ], "1905.10810": [ { "question": "What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?", "answers": [ { "answer": "Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818.", "type": "abstractive" } ], "q_uid": "44104668796a6ca10e2ea3ecf706541da1cec2cf", "evidence": [ { "raw_evidence": [ "The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.", "FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy." ], "highlighted_evidence": [ "The experimental results are presented in Table TABREF4 .", "FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy." ] } ] } ], "1910.07481": [ { "question": "Which language-pair had the better performance?", "answers": [ { "answer": "French-English", "type": "abstractive" } ], "q_uid": "c1f4d632da78714308dc502fe4e7b16ea6f76f81", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: \u2217< .05, \u2217\u2217< .01, \u2217\u2217\u2217< .001." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: \u2217< .05, \u2217\u2217< .01, \u2217\u2217\u2217< .001." 
] } ] } ], "2001.05493": [ { "question": "Which psycholinguistic and basic linguistic features are used?", "answers": [ { "answer": "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features", "type": "abstractive" } ], "q_uid": "e829f008d62312357e0354a9ed3b0827c91c9401", "evidence": [ { "raw_evidence": [ "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.", "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).", "FLOAT SELECTED: Table 1: Details of NLP features" ], "highlighted_evidence": [ "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.", "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. 
The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).", "FLOAT SELECTED: Table 1: Details of NLP features" ] } ] } ], "1901.02257": [ { "question": "What baseline models do they compare against?", "answers": [ { "answer": "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)", "type": "abstractive" } ], "q_uid": "3aa7173612995223a904cc0f8eef4ff203cbb860", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Experimental Results of Models" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Experimental Results of Models" ] } ] } ], "2002.02492": [ { "question": "How much improvement is gained from the proposed approaches?", "answers": [ { "answer": "It eliminates non-termination in some models fixing for some models up to 6% of non-termination ratio.", "type": "abstractive" } ], "q_uid": "6f2f304ef292d8bcd521936f93afeec917cbe28a", "evidence": [ { "raw_evidence": [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (\u00a7SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.", "For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.", "FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods." ], "highlighted_evidence": [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (\u00a7SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. 
when the constant is close to 1), though the cycle is guaranteed to eventually terminate.", " This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.", "FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods." ] } ] } ], "1910.08210": [ { "question": "How better is performance of proposed model compared to baselines?", "answers": [ { "answer": "Proposed model achive 66+-22 win rate, baseline CNN 13+-1 and baseline FiLM 32+-3 .", "type": "abstractive" } ], "q_uid": "37e8f5851133a748c4e3e0beeef0d83883117a98", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). \u201cTrain\u201d and \u201cEval\u201d show final win rates on training and eval environments." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). \u201cTrain\u201d and \u201cEval\u201d show final win rates on training and eval environments." ] } ] } ], "1911.02711": [ { "question": "What is the performance difference of using a generated summary vs. a user-written one?", "answers": [ { "answer": "2.7 accuracy points", "type": "abstractive" } ], "q_uid": "68e3f3908687505cb63b538e521756390c321a1c", "evidence": [ { "raw_evidence": [ "Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary.", "FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference.", "FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Noted that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table.", "Experiments ::: Datasets", "We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. 
The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set." ], "highlighted_evidence": [ "Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets.", "FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference.", "FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Noted that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table.", "Experiments ::: Datasets\nWe empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies." ] } ] } ], "1912.06670": [ { "question": "Is audio data per language balanced in dataset?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "5fa464a158dc8abf7cef8ca7d42a7080670c1edd", "evidence": [ { "raw_evidence": [ "The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. The directory contains six files with Tab-Separated Values (i.e. TSV files), and a single clips subdirectory which contains all of the audio data. Each of the six TSV files represents a different segment of the voice data, with all six having the following column headers: [client_id, path, sentence, up_votes, down_votes, age, gender, accent]. The first three columns refer to an anonymized ID for the speaker, the location of the audio file, and the text that was read. The next two columns contain information on how listeners judged the $<$audio,transcript$>$ pair. The last three columns represent demographic data which was optionally self-reported by the speaker of the audio.", "FLOAT SELECTED: Table 1: Current data statistics for Common Voice. Data in italics is as of yet unreleased. Other numbers refer to the data published in the June 12, 2019 release.", "We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. This allows us to make a fair evaluation of speaker generalization, but as a result some training sets have very few speakers, making this an even more challenging scenario. The splits per language were made as close as possible to 80% train, 10% development, and 10% test.", "FLOAT SELECTED: Table 2: Data used in the experiments, from an earlier multilingual version of Common Voice. 
Number of audio clips and unique speakers." ], "highlighted_evidence": [ "The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. ", "FLOAT SELECTED: Table 1: Current data statistics for Common Voice. Data in italics is as of yet unreleased. Other numbers refer to the data published in the June 12, 2019 release.", "We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. ", "FLOAT SELECTED: Table 2: Data used in the experiments, from an earlier multilingual version of Common Voice. Number of audio clips and unique speakers." ] } ] } ], "1906.03538": [ { "question": "What is the average length of the claims?", "answers": [ { "answer": "Average claim length is 8.9 tokens.", "type": "abstractive" } ], "q_uid": "281cd4e78b27a62713ec43249df5000812522a89", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: A summary of PERSPECTRUM statistics", "We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: A summary of PERSPECTRUM statistics", "The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 )." ] } ] } ], "1803.09230": [ { "question": "By how much, the proposed method improves BiDAF and DCN on SQuAD dataset?", "answers": [ { "answer": "In terms of F1 score, the Hybrid approach improved by 23.47% and 1.39% on BiDAF and DCN respectively. The DCA approach improved by 23.2% and 1.12% on BiDAF and DCN respectively.", "type": "abstractive" } ], "q_uid": "9776156fc93daa36f4613df591e2b49827d25ad2", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Effect of Character Embedding" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Effect of Character Embedding" ] } ] } ], "2003.05377": [ { "question": "what genres do they songs fall under?", "answers": [ { "answer": "Gospel, Sertanejo, MPB, Forr\u00f3, Pagode, Rock, Samba, Pop, Ax\u00e9, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda", "type": "abstractive" } ], "q_uid": "6b91fe29175be8cd8f22abf27fb3460e43b9889a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: The number of songs and artists by genre", "From the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song \u201cComo \u00e9 grande o meu amor por voc\u00ea\u201d, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. 
From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the \u201cOriginal\u201d word." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The number of songs and artists by genre", "We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8." ] } ] } ], "2001.05467": [ { "question": "To what other competitive baselines is this approach compared?", "answers": [ { "answer": "LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL", "type": "abstractive" } ], "q_uid": "4b8a0e99bf3f2f6c80c57c0e474c47a5ee842b2c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means \u201cwith attention\u201d). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means \u201cwith attention\u201d). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI." ] } ] }, { "question": "How much better were results of the proposed models than base LSTM-RNN model?", "answers": [ { "answer": "on diversity 6.87 and on relevance 4.6 points higher", "type": "abstractive" } ], "q_uid": "5e9732ff8595b31f81740082333b241d0a5f7c9a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means \u201cwith attention\u201d). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means \u201cwith attention\u201d). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI." ] } ] } ], "1909.09484": [ { "question": "How much is proposed model better than baselines in performed experiments?", "answers": [ { "answer": "most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729)\nGDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896)\nGDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)", "type": "abstractive" } ], "q_uid": "c165ea43256d7ee1b1fb6f5c0c8af5f7b585e60d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: The performance of baselines and proposed model on DSTC2 and Maluuba dataset. 
T imefull is the time spent on training the whole model, T imeDP is the time spent on training the dialogue policy maker.", "BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on DSTC2 dataset. On Maluuba dataset, the BPRA decreases because of the complex domains. We can notice that BPRA of CDM is slightly worse than other models on Maluuba dataset, the reason is that the CDM's dialogue policy maker contains lots of classifications and has the bigger loss than other models because of complex domains, which affects the training of the dialogue belief tracker.", "APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets. It can be noted that we do not compare with the E2ECM baseline in APRA. E2ECM only uses a simple classifier to recognize the label of the acts and ignores the parameters information. In our experiment, APRA of E2ECM is slightly better than our method. Considering the lack of parameters of the acts, it's unfair for our GDP method. Furthermore, the CDM baseline considers the parameters of the act. But GDP is far better than CDM in supervised learning and reinforcement learning.", "BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA. But in fact, we can find that the language quality of the response generated by GDP is still better than E2ECM, which proves that lack of enough parameters information makes it difficult to find the appropriate sentence template in NLG. It can be found that the BLEU of all models is very poor on Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset, the utterances are very flexible, the natural language generator for all methods is difficult to generate an accurate utterance based on the context. And DSTC2 is a human-machine dialog dataset. The response is very regular so the effectiveness of NLG will be better than that of Maluuba. But from the results, the GDP is still better than the baselines on Maluuba dataset, which also verifies that our proposed method is more accurate in modeling dialogue policy on complex domains than the classification-based methods." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The performance of baselines and proposed model on DSTC2 and Maluuba dataset. T imefull is the time spent on training the whole model, T imeDP is the time spent on training the dialogue policy maker.", "BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker.", "APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets.", "Results: GDP significantly outperforms the baselines on BLEU." ] } ] } ], "1807.07961": [ { "question": "Do they evaluate only on English datasets?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "4a8bceb3b6d45f14c4749115d6aa83912f0b0a6e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Tweet examples with emojis. The sentiment ground truth is given in the second column. 
The examples show that inconsistent sentiments exist between emojis and texts.", "We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails.", "For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a \u201cneutral\u201d class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than \u201ctrue neutrals\u201d. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Tweet examples with emojis. The sentiment ground truth is given in the second column. The examples show that inconsistent sentiments exist between emojis and texts.", "We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails.", "For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. 
" ] } ] } ], "1709.05413": [ { "question": "Do they evaluate only on English datasets?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "b8cee4782e05afaeb9647efdb8858554490feba5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Example Twitter Customer Service Conversation" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Example Twitter Customer Service Conversation" ] } ] } ], "1804.00079": [ { "question": "Which data sources do they use?", "answers": [ { "answer": "- En-Fr (WMT14)\n- En-De (WMT15)\n- Skipthought (BookCorpus)\n- AllNLI (SNLI + MultiNLI)\n- Parsing (PTB + 1-billion word)", "type": "abstractive" } ], "q_uid": "e2f269997f5a01949733c2ec8169f126dabd7571", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: An approximate number of sentence pairs for each task." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: An approximate number of sentence pairs for each task." ] } ] } ], "2003.12738": [ { "question": "What approach performs better in experiments global latent or sequence of fine-grained latent variables?", "answers": [ { "answer": "PPL: SVT\nDiversity: GVT\nEmbeddings Similarity: SVT\nHuman Evaluation: SVT", "type": "abstractive" } ], "q_uid": "c69f4df4943a2ca4c10933683a02b179a5e76f64", "evidence": [ { "raw_evidence": [ "Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.", "FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations." ], "highlighted_evidence": [ "Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.", "FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations." ] } ] } ], "1909.03544": [ { "question": "What previous approaches did this method outperform?", "answers": [ { "answer": "Table TABREF44, Table TABREF44, Table TABREF47, Table TABREF47", "type": "extractive" } ], "q_uid": "7772cb23b7609f1d4cfd6511ac3fcdc20f8481ba", "evidence": [ { "raw_evidence": [ "The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.", "Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). 
To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS.", "Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.", "Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional input to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.", "FLOAT SELECTED: Table 2. POS tagging and lemmatization results (accuracy) on PDT 3.5. Bold indicates the best result, italics related work. \u2020Reported on PDT 2.0, which has the same underlying corpus, with minor changes in morphological annotation (our model results differ at 0.1% on PDT 2.0).", "FLOAT SELECTED: Table 4. Dependency tree parsing results on PDT 2.0 a-layer. Bold indicates the best result, italics related work. \u2020Possibly using gold POS tags. \u2021Results as of 23 Mar 2019.", "FLOAT SELECTED: Table 5. Czech PDT UD 2.3 results for POS tagging (UPOS: universal POS, XPOS: languagespecific POS, UFeats: universal morphological features), lemmatization and dependency parsing (UAS, LAS, MLAS, and BLEX scores). Bold indicates the best result, italics related work.", "FLOAT SELECTED: Table 6. Named entity recognition results (F1) on the Czech Named Entity Corpus. Bold indicates the best result, italics related work." ], "highlighted_evidence": [ "The POS tagging and lemmatization results are presented in Table TABREF44.", "Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation).", "Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank.", "Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0.", "FLOAT SELECTED: Table 2. POS tagging and lemmatization results (accuracy) on PDT 3.5. Bold indicates the best result, italics related work. \u2020Reported on PDT 2.0, which has the same underlying corpus, with minor changes in morphological annotation (our model results differ at 0.1% on PDT 2.0).", "FLOAT SELECTED: Table 4. Dependency tree parsing results on PDT 2.0 a-layer. Bold indicates the best result, italics related work. \u2020Possibly using gold POS tags. \u2021Results as of 23 Mar 2019.", "FLOAT SELECTED: Table 5. Czech PDT UD 2.3 results for POS tagging (UPOS: universal POS, XPOS: languagespecific POS, UFeats: universal morphological features), lemmatization and dependency parsing (UAS, LAS, MLAS, and BLEX scores). Bold indicates the best result, italics related work.", "FLOAT SELECTED: Table 6. Named entity recognition results (F1) on the Czech Named Entity Corpus. 
Bold indicates the best result, italics related work." ] } ] } ], "1811.01088": [ { "question": "Is the new model evaluated on the tasks that BERT and ELMo are evaluated on?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "6992f8e5a33f0af0f2206769484c72fecc14700b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: GLUE results with and without STILTs, fine-tuning on full training data of each target task. Bold marks the best within each section. Strikethrough indicates cases where the intermediate task is the same as the target task\u2014we substitute the baseline result for that cell. A.Ex is the average excluding MNLI and QQP because of the overlap with intermediate tasks. See text for discussion of WNLI results. Test results on STILTs uses the supplementary training regime for each task based on the performance on the development set, corresponding to the numbers shown in Best of Each. The aggregated GLUE scores differ from the public leaderboard because we report performance on QNLIv1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: GLUE results with and without STILTs, fine-tuning on full training data of each target task. Bold marks the best within each section. Strikethrough indicates cases where the intermediate task is the same as the target task\u2014we substitute the baseline result for that cell. A.Ex is the average excluding MNLI and QQP because of the overlap with intermediate tasks. See text for discussion of WNLI results. Test results on STILTs uses the supplementary training regime for each task based on the performance on the development set, corresponding to the numbers shown in Best of Each. The aggregated GLUE scores differ from the public leaderboard because we report performance on QNLIv1." ] } ] } ], "1902.10525": [ { "question": "Which language has the lowest error rate reduction?", "answers": [ { "answer": "thai", "type": "abstractive" } ], "q_uid": "097ab15f58cb1fce5b5ffb5082b8d7bbee720659", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 9 Character error rates on the validation data using successively more of the system components described above for English (en), Spanish (es), German (de), Arabic (ar), Korean (ko), Thai (th), Hindi (hi), and Chinese (zh) along with the respective number of items and characters in the test sets. Average latencies for all languages and models were computed on an Intel Xeon E5-2690 CPU running at 2.6GHz." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 9 Character error rates on the validation data using successively more of the system components described above for English (en), Spanish (es), German (de), Arabic (ar), Korean (ko), Thai (th), Hindi (hi), and Chinese (zh) along with the respective number of items and characters in the test sets. Average latencies for all languages and models were computed on an Intel Xeon E5-2690 CPU running at 2.6GHz." ] } ] } ], "2004.01878": [ { "question": "How big is dataset used?", "answers": [ { "answer": "553,451 documents", "type": "abstractive" } ], "q_uid": "5a23f436a7e0c33e4842425cf86d5fd8ba78ac92", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Statistics of the datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistics of the datasets." 
] } ] } ], "1603.00968": [ { "question": "What are the baseline models?", "answers": [ { "answer": "MC-CNN\nMVCNN\nCNN", "type": "abstractive" } ], "q_uid": "085147cd32153d46dd9901ab0f9195bfdbff6a85", "evidence": [ { "raw_evidence": [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .", "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these." ], "highlighted_evidence": [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .", "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these." ] } ] }, { "question": "By how much of MGNC-CNN out perform the baselines?", "answers": [ { "answer": "In terms of Subj the Average MGNC-CNN is better than the average score of baselines by 0.5. Similarly, Scores of SST-1, SST-2, and TREC where MGNC-CNN has similar improvements. \nIn case of Irony the difference is about 2.0. \n", "type": "abstractive" } ], "q_uid": "c0035fb1c2b3de15146a7ce186ccd2e366fb4da2", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.", "We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . 
We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.", "We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . " ] } ] }, { "question": "What are the comparable alternative architectures?", "answers": [ { "answer": "standard CNN, C-CNN, MVCNN ", "type": "extractive" } ], "q_uid": "34dd0ee1374a3afd16cf8b0c803f4ef4c6fec8ac", "evidence": [ { "raw_evidence": [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .", "More similar to our work, Yin and Sch\u00fctze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).", "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these." ], "highlighted_evidence": [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. 
For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. ", "More similar to our work, Yin and Sch\u00fctze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. ", "FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these." ] } ] } ], "2004.01980": [ { "question": "Which state-of-the-art model is surpassed by 9.68% attraction score?", "answers": [ { "answer": "pure summarization model NHG", "type": "extractive" } ], "q_uid": "53377f1c5eda961e438424d71d16150e669f7072", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset.", "In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the \u201cClickbait\u201d style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. To be noted, although we learned the \u201cClickbait\u201d style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset.", "In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1." ] } ] }, { "question": "What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?", "answers": [ { "answer": "Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: 1,98% (9.29 vs 9.11)", "type": "abstractive" } ], "q_uid": "f37ed011e7eb259360170de027c1e8557371f002", "evidence": [ { "raw_evidence": [ "The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. 
Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters.", "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset.", "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset." ], "highlighted_evidence": [ "We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57.", "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset.", "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. \u201cNone\u201d represents the original headlines in the dataset." ] } ] } ], "1804.08139": [ { "question": "What evaluation metrics are used?", "answers": [ { "answer": "Accuracy on each dataset and the average accuracy on all datasets.", "type": "abstractive" } ], "q_uid": "0fd678d24c86122b9ab27b73ef20216bbd9847d1", "evidence": [ { "raw_evidence": [ "Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models.", "FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of \u201cSingle Task\u201d includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of \u201cMultiple Tasks\u201d shows several multi-task models. * is from [Liu et al., 2017] ." ], "highlighted_evidence": [ "Table TABREF34 shows the performances of the different methods.", "FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of \u201cSingle Task\u201d includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of \u201cMultiple Tasks\u201d shows several multi-task models. * is from [Liu et al., 2017] ." ] } ] } ], "1911.03597": [ { "question": "How much better are results of proposed model compared to pivoting method?", "answers": [ { "answer": "our method outperforms the baseline in both relevance and fluency significantly.", "type": "extractive" } ], "q_uid": "b9c0049a7a5639c33efdb6178c2594b8efdefabb", "evidence": [ { "raw_evidence": [ "First we compare our models with the conventional pivoting method, i.e., round-trip translation. 
As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence.", "As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators.", "Both round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies.", "FLOAT SELECTED: Table 3: Human evaluation results." ], "highlighted_evidence": [ "First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence.", "As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators.\n\nBoth round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies.", "FLOAT SELECTED: Table 3: Human evaluation results." ] } ] } ], "1909.07734": [ { "question": "Who was the top-scoring team?", "answers": [ { "answer": "IDEA", "type": "abstractive" } ], "q_uid": "d2fbf34cf4b5b1fd82394124728b03003884409c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: F-scores for Friends (%)", "FLOAT SELECTED: Table 7: F-scores for EmotionPush (%)", "The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: F-scores for Friends (%)", "FLOAT SELECTED: Table 7: F-scores for EmotionPush (%)", " For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively." ] } ] } ], "2001.05970": [ { "question": "How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?", "answers": [ { "answer": "0.9098 correlation", "type": "abstractive" } ], "q_uid": "dd5c9a370652f6550b4fd13e2ac317eaf90973a8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Linear regression results.", "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. 
News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Linear regression results.", "We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ] } ] } ], "1710.06700": [ { "question": "What were their accuracy results on the task?", "answers": [ { "answer": "97.32%", "type": "abstractive" } ], "q_uid": "2fa0b9d0cb26e1be8eae7e782ada6820bc2c037f", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Lemmatization accuracy using WikiNews testset" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Lemmatization accuracy using WikiNews testset" ] } ] } ], "1912.10435": [ { "question": "How much F1 was improved after adding skip connections?", "answers": [ { "answer": "Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improes F1 from 74.34 to 74.95 ", "type": "abstractive" } ], "q_uid": "707db46938d16647bf4b6407b2da84b5c7ab4a81", "evidence": [ { "raw_evidence": [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8).", "FLOAT SELECTED: Table 2: Performance results for experiments relative to BERT base" ], "highlighted_evidence": [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip).", "FLOAT SELECTED: Table 2: Performance results for experiments relative to BERT base" ] } ] } ], "1603.04513": [ { "question": "How much gain does the model achieve with pretraining MVCNN?", "answers": [ { "answer": "0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj", "type": "abstractive" } ], "q_uid": "d8de12f5eff64d0e9c9e88f6ebdabc4cdf042c22", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. 
RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." 
] } ] }, { "question": "What are the effects of extracting features of multigranular phrases?", "answers": [ { "answer": "The system benefits from filters of each size., features of multigranular phrases are extracted with variable-size convolution filters.", "type": "extractive" } ], "q_uid": "9cba2ee1f8e1560e48b3099d0d8cf6c854ddea2e", "evidence": [ { "raw_evidence": [ "The block \u201cfilters\u201d indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization \u2013 diverse versions of pretrained word embeddings are used \u2013 and variable-size filters \u2013 features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "highlighted_evidence": [ "The block \u201cfilters\u201d indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization \u2013 diverse versions of pretrained word embeddings are used \u2013 and variable-size filters \u2013 features of multigranular phrases are extracted with variable-size convolution filters. ", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). 
MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ] } ] }, { "question": "What are the effects of diverse versions of pertained word embeddings? ", "answers": [ { "answer": "each embedding version is crucial for good performance", "type": "extractive" } ], "q_uid": "7975c3e1f61344e3da3b38bb12e1ac6dcb153a18", "evidence": [ { "raw_evidence": [ "In the block \u201cversions\u201d, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare fairly different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decision on which embeddings to use for their own tasks.", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). 
G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "highlighted_evidence": [ "In the block \u201cversions\u201d, we see that each embedding version is crucial for good performance: performance drops in every single case. ", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a \u201cchannel\u201d) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign \u201c-\u201d in MVCNN (-Huang) etc. means \u201cHuang\u201d is not used. \u201cversions / filters / tricks / layers\u201d denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ] } ] } ], "1607.06025": [ { "question": "What is the highest accuracy score achieved?", "answers": [ { "answer": "82.0%", "type": "abstractive" } ], "q_uid": "ea6764a362bac95fb99969e9f8c773a61afd8f39", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6." 
] } ] } ], "1909.00252": [ { "question": "What is improvement in accuracy for short Jokes in relation other types of jokes?", "answers": [ { "answer": "It had the highest accuracy comparing to all datasets 0.986% and It had the highest improvement comparing to previous methods on the same dataset by 8%", "type": "abstractive" } ], "q_uid": "2815bac42db32d8f988b380fed997af31601f129", "evidence": [ { "raw_evidence": [ "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. We also note that the general human classification found 66.3% of the jokes to be humorous.", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.", "FLOAT SELECTED: Table 2: Results of Accuracy on Reddit Jokes dataset", "FLOAT SELECTED: Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.", "FLOAT SELECTED: Table 4: Results on Short Jokes Identification" ], "highlighted_evidence": [ "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. ", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.", "FLOAT SELECTED: Table 2: Results of Accuracy on Reddit Jokes dataset", "FLOAT SELECTED: Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. 
See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.", "FLOAT SELECTED: Table 4: Results on Short Jokes Identification" ] } ] } ], "1808.09920": [ { "question": "What is the metric used with WIKIHOP?", "answers": [ { "answer": "Accuracy", "type": "extractive" } ], "q_uid": "63403ffc0232ff041f3da8fa6c30827cfd6404b7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo \u2013 without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo \u2013 without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ] } ] }, { "question": "What performance does the Entity-GCN get on WIKIHOP?", "answers": [ { "answer": "During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models", "type": "abstractive" } ], "q_uid": "a25c1883f0a99d2b6471fed48c5121baccbbae82", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo \u2013 without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo \u2013 without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ] } ] } ], "2002.08899": [ { "question": "How do they damage different neural modules?", "answers": [ { "answer": "Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information.", "type": "abstractive" } ], "q_uid": "79ed71a3505cf6f5e8bf121fd7ec1518cab55cae", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results for artificial Wernicke\u2019s and Broca\u2019s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results for artificial Wernicke\u2019s and Broca\u2019s aphasia induced in the LLA-LSTM model. 
Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times." ] } ] } ], "1705.00108": [ { "question": "what metrics are used in evaluation?", "answers": [ { "answer": "micro-averaged F1", "type": "abstractive" } ], "q_uid": "a5b67470a1c4779877f0d8b7724879bbb0a3b313", "evidence": [ { "raw_evidence": [ "We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text." ], "highlighted_evidence": [ "We report the official evaluation metric (micro-averaged INLINEFORM0 ). ", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text." ] } ] }, { "question": "what previous systems were compared to?", "answers": [ { "answer": "Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), S\u00f8gaard and Goldberg (2016) ", "type": "abstractive" } ], "q_uid": "4640793d82aa7db30ad7b88c0bf0a1030e636558", "evidence": [ { "raw_evidence": [ "Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.", "FLOAT SELECTED: Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.", "FLOAT SELECTED: Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).", "FLOAT SELECTED: Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data)." ], "highlighted_evidence": [ "Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. 
", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.", "FLOAT SELECTED: Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.", "FLOAT SELECTED: Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).", "FLOAT SELECTED: Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data)." ] } ] } ], "1712.03547": [ { "question": "When they say \"comparable performance\", how much of a performance drop do these new embeddings result in?", "answers": [ { "answer": "Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method.", "type": "abstractive" } ], "q_uid": "a4d8fdcaa8adf99bdd1d7224f1a85c610659a9d3", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3)." ] } ] } ], "1910.02339": [ { "question": "How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?", "answers": [ { "answer": "Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48", "type": "abstractive" } ], "q_uid": "9c4a4dfa7b0b977173e76e2d2f08fa984af86f0e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set." ] } ] }, { "question": "What is the performance proposed model achieved on AlgoList benchmark?", "answers": [ { "answer": "Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48", "type": "abstractive" } ], "q_uid": "4c7ac51a66c15593082e248451e8f6896e476ffb", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set." ] } ] }, { "question": "What is the performance proposed model achieved on MathQA?", "answers": [ { "answer": "Operation accuracy: 71.89\nExecution accuracy: 55.95", "type": "abstractive" } ], "q_uid": "05671d068679be259493df638d27c106e7dd36d0", "evidence": [ { "raw_evidence": [ "Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. 
As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.", "FLOAT SELECTED: Table 1: Results on MathQA dataset testing set" ], "highlighted_evidence": [ "Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results.", "FLOAT SELECTED: Table 1: Results on MathQA dataset testing set" ] } ] } ], "2003.06044": [ { "question": "How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?", "answers": [ { "answer": "Table TABREF20 , Table TABREF22, Table TABREF23", "type": "extractive" } ], "q_uid": "a3a871ca2417b2ada9df1438d282c45e4b4ad668", "evidence": [ { "raw_evidence": [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label \u201c+\u201d, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label \u201c+\u201d. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.", "In this section, we evaluate the proposed approaches on SwDA dataset. 
Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. It is worth noting that BIBREF10 combined GloVeBIBREF28 and pre-trained ELMo representationsBIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. For baseline models, both CNN and LSTM, got similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to do recognition based on single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information. It indicates that context information is crucial in the DA recognition task. BERT can boost performance in a large margin. However, it costs too much time and resources. In this reason, we chose LSTM as our utterance encoder in further experiment.", "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing.", "FLOAT SELECTED: Table 6: Experiment results on DailyDialog dataset.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. It is worth noting that it is actually the same as single sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to discuss how the length of context padding will affect. As seen in the result, the accuracy increased when more context padding was used for both LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With sliding window size increasing, the more context was involved together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when window size was 1 and context padding length was 5. When window size increased, the performances of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently, which achieved the best performance when window size was 10, and the model was more stable and robust to the different setting of window size.", "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models. From table TABREF18 we can see that, the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of the $W$ to 10, which almost covers the whole utterances in the dialogue. Using the same way as SwDA dataset, we, first, set W to 1 and increased the length of context padding. 
As seen, modeling local context information, hierarchical models yielded significant improvement than sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the size of sliding window size W. From the experiments, we can see that LSTM+Attention always got a little better accuracy than LSTM+BLSTM. With window size increasing, the performances of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For the longer sliding window, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performances of these three models dropped without subsequent utterances." ], "highlighted_evidence": [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. ", "In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. ", "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing.", "FLOAT SELECTED: Table 6: Experiment results on DailyDialog dataset.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result", "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models." ] } ] }, { "question": "What previous methods is the proposed method compared against?", "answers": [ { "answer": "BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT", "type": "abstractive" } ], "q_uid": "0fcac64544842dd06d14151df8c72fc6de5d695c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing." 
] } ] } ], "2002.01359": [ { "question": "What domains are present in the data?", "answers": [ { "answer": "Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather", "type": "abstractive" } ], "q_uid": "b43fa27270eeba3e80ff2a03754628b5459875d6", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: The total number of intents (services in parentheses) and dialogues for each domain across train1, dev2 and test3 sets. Superscript indicates the datasets in which dialogues from the domain are present. Multi-domain dialogues contribute to counts of each domain. The domain Services includes salons, dentists, doctors, etc." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The total number of intents (services in parentheses) and dialogues for each domain across train1, dev2 and test3 sets. Superscript indicates the datasets in which dialogues from the domain are present. Multi-domain dialogues contribute to counts of each domain. The domain Services includes salons, dentists, doctors, etc." ] } ] } ], "1612.05270": [ { "question": "How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?", "answers": [ { "answer": "Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428", "type": "abstractive" } ], "q_uid": "458dbf217218fcab9153e33045aac08a2c8a38c6", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work" ] } ] }, { "question": "In which languages did the approach outperform the reported results?", "answers": [ { "answer": "Arabic, German, Portuguese, Russian, Swedish", "type": "abstractive" } ], "q_uid": "cebf3e07057339047326cb2f8863ee633a62f49f", "evidence": [ { "raw_evidence": [ "In BIBREF3 , BIBREF2 , the authors study the effect of translation in sentiment classifiers; they found better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. Both, on the technical side, both papers use fine tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language. Our approach achieves those performance's levels since it optimizes a set of parameters carefully selected to work on a variety of languages and being robust to informal writing. The latter problem is not properly tackled in many cases.", "FLOAT SELECTED: Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters." ], "highlighted_evidence": [ "Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language.", "FLOAT SELECTED: Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters." 
] } ] } ], "1910.08987": [ { "question": "How close do clusters match to ground truth tone categories?", "answers": [ { "answer": "NMI between cluster assignments and ground truth tones for all sylables is:\nMandarin: 0.641\nCantonese: 0.464", "type": "abstractive" } ], "q_uid": "f1831b2e96ff8ef65b8fde8b4c2ee3e04b7ac4bf", "evidence": [ { "raw_evidence": [ "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.", "FLOAT SELECTED: Table 3. Normalized mutual information (NMI) between cluster assignments and ground truth tones, considering only the first syllable of each word, or all syllables." ], "highlighted_evidence": [ "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.", "FLOAT SELECTED: Table 3. Normalized mutual information (NMI) between cluster assignments and ground truth tones, considering only the first syllable of each word, or all syllables." ] } ] } ], "1701.09123": [ { "question": "what are the evaluation metrics?", "answers": [ { "answer": "Precision, Recall, F1", "type": "abstractive" } ], "q_uid": "20ec88c45c1d633adfd7bff7bbf3336d01fb6f37", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: CoNLL 2003 English results." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: CoNLL 2003 English results." ] } ] }, { "question": "which datasets were used in evaluation?", "answers": [ { "answer": "CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0", "type": "abstractive" } ], "q_uid": "a4fe5d182ddee24e5bbf222d6d6996b3925060c8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-domain settings." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-domain settings." ] } ] } ], "1611.00514": [ { "question": "How well does their system perform on the development set of SRE?", "answers": [ { "answer": "EER 16.04, Cmindet 0.6012, Cdet 0.6107", "type": "abstractive" } ], "q_uid": "30803eefd7cdeb721f47c9ca72a5b1d750b8e03b", "evidence": [ { "raw_evidence": [ "In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . 
In order to quantify the contribution of the different system components we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP, however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation in MFCC as well as fusion, however, PLP seems not to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in either LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.", "FLOAT SELECTED: Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016." ], "highlighted_evidence": [ "In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 .", "FLOAT SELECTED: Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016." ] } ] } ], "1704.08960": [ { "question": "What external sources are used?", "answers": [ { "answer": "Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily", "type": "abstractive" } ], "q_uid": "25e4dbc7e211a1ebe02ee8dff675b846fb18fdc5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Statistics of external data.", "Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed sementic information from large raw texts for reducing sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as an unit, learning the MLP parameter together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.", "Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11 . For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. 
We therefore consider a more explicit clue for pretraining our character window network, namely punctuations BIBREF10 .", "Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or deriving statistical features BIBREF12 . We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0 , INLINEFORM1 is derived using the MLP in Figure FIGREF13 , and then used to classify the segmentation of INLINEFORM2 into B(begining)/M(middle)/E(end)/S(single character word) labels. DISPLAYFORM0", "Heterogenous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation on leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16 . We try to utilize heterogenous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character windows network. DISPLAYFORM0", "POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14 , BIBREF15 . We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS on each character, according to the character window representation INLINEFORM0 . In particular, given INLINEFORM1 , the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Statistics of external data.", "We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.", "Raw Text.", "Automatically Segmented Text. ", "Heterogenous Training Data.", "POS Data." ] } ] } ], "2002.05058": [ { "question": "How much better peformance is achieved in human evaluation when model is trained considering proposed metric?", "answers": [ { "answer": "Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553", "type": "abstractive" } ], "q_uid": "75b69eef4a38ec16df63d60be9708a3c44a79c56", "evidence": [ { "raw_evidence": [ "The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample.", "Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. 
This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation.", "FLOAT SELECTED: Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.", "FLOAT SELECTED: Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets." ], "highlighted_evidence": [ "The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity.", "Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores.", "FLOAT SELECTED: Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.", "FLOAT SELECTED: Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets." ] } ] } ], "2002.06675": [ { "question": "How much transcribed data is available for for Ainu language?", "answers": [ { "answer": "Transcribed data is available for duration of 38h 54m 38s for 8 speakers.", "type": "abstractive" } ], "q_uid": "8a5254ca726a2914214a4c0b6b42811a007ecfc6", "evidence": [ { "raw_evidence": [ "The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.", "FLOAT SELECTED: Table 1: Speaker-wise details of the corpus" ], "highlighted_evidence": [ "The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker.", "FLOAT SELECTED: Table 1: Speaker-wise details of the corpus" ] } ] } ], "1909.08041": [ { "question": "What baseline approaches do they compare against?", "answers": [ { "answer": "HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie", "type": "abstractive" } ], "q_uid": "13d92cbc2c77134626e26166c64ca5c00aec0bf5", "evidence": [ { "raw_evidence": [ "We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .", "As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting fact which in turn leads to doubling of the joint EM on previous best results. The scores for answer predictions are also higher than all previous best results with $\\sim $8 absolute points increase on EM and $\\sim $9 absolute points on F1. 
All the improvements are consistent between test and dev set evaluation.", "Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\\sim $4 and $\\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater that of the second system, demonstrating its ability on semantic retrieval.", "FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA.", "FLOAT SELECTED: Table 2: Performance of systems on FEVER. \u201cF1\u201d indicates the sentence-level evidence F1 score. \u201cLA\u201d indicates Label Acc. without considering the evidence prediction. \u201cFS\u201d=FEVER Score (Thorne et al., 2018)" ], "highlighted_evidence": [ "We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .", "As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. ", "Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9.", "FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA.", "FLOAT SELECTED: Table 2: Performance of systems on FEVER. \u201cF1\u201d indicates the sentence-level evidence F1 score. \u201cLA\u201d indicates Label Acc. without considering the evidence prediction. \u201cFS\u201d=FEVER Score (Thorne et al., 2018)" ] } ] }, { "question": "Retrieval at what level performs better, sentence level or paragraph level?", "answers": [ { "answer": "This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval.", "type": "extractive" } ], "q_uid": "ac54a9c30c968e5225978a37032158a6ffd4ddb8", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on FEVER. \u201cLA\u201d=Label Accuracy; \u201cFS\u201d=FEVER Score; \u201cOrcl.\u201d is the oracle upperbound of FEVER Score assuming all downstream modules are perfect. \u201cL-F1 (S/R/N)\u201d means the classification f1 scores on the three verification labels: SUPPORT, REFUTE, and NOT ENOUGH INFO.", "FLOAT SELECTED: Table 3: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on HOTPOTQA.", "Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. 
This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system.", "Next, the removal of sentence-level retrieval module induces a $\\sim $2 point drop on EM and F1 score in the QA task, and a $\\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and we observe a significant drop of F1 on Not Enough Info category without retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on Not Enough Info label." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on FEVER. \u201cLA\u201d=Label Accuracy; \u201cFS\u201d=FEVER Score; \u201cOrcl.\u201d is the oracle upperbound of FEVER Score assuming all downstream modules are perfect. \u201cL-F1 (S/R/N)\u201d means the classification f1 scores on the three verification labels: SUPPORT, REFUTE, and NOT ENOUGH INFO.", "FLOAT SELECTED: Table 3: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on HOTPOTQA.", "Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system.\n\nNext, the removal of sentence-level retrieval module induces a $\\sim $2 point drop on EM and F1 score in the QA task, and a $\\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. 
Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval." ] } ] } ], "1909.09270": [ { "question": "What was their F1 score on the Bengali NER corpus?", "answers": [ { "answer": "52.0%", "type": "abstractive" } ], "q_uid": "a7510ec34eaec2c7ac2869962b69cc41031221e5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively small amount of noisy and incomplete annotations from non-speakers." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively small amount of noisy and incomplete annotations from non-speakers." ] } ] } ], "1903.00172": [ { "question": "Where did they get training data?", "answers": [ { "answer": "AmazonQA and ConciergeQA datasets", "type": "abstractive" } ], "q_uid": "1fb73176394ef59adfaa8fc7827395525f9a5af7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA.", "FLOAT SELECTED: Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.", "FLOAT SELECTED: Table 1: Various types of training instances." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA.", "FLOAT SELECTED: Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.", "FLOAT SELECTED: Table 1: Various types of training instances." ] } ] }, { "question": "Which datasets did they experiment on?", "answers": [ { "answer": "ConciergeQA and AmazonQA", "type": "abstractive" } ], "q_uid": "d70ba6053e245ee4179c26a5dabcad37561c6af0", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Various types of training instances." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Various types of training instances." ] } ] } ], "1705.08142": [ { "question": "Do sluice networks outperform non-transfer learning approaches?", "answers": [ { "answer": "Yes", "type": "boolean" } ], "q_uid": "a1c5b95e407127c6bb2f9a19b7d9b1f1bcd4a7a5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Accuracy scores on in-domain and out-of-domain test sets for chunking (main task) with POS tagging as auxiliary task for different target domains for baselines and Sluice networks. Out-of-domain results for each target domain are averages across the 6 remaining source domains. Average error reduction over single-task performance is 12.8% for in-domain; 8.9% for out-of-domain. In-domain error reduction over hard parameter sharing is 14.8%." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Accuracy scores on in-domain and out-of-domain test sets for chunking (main task) with POS tagging as auxiliary task for different target domains for baselines and Sluice networks. Out-of-domain results for each target domain are averages across the 6 remaining source domains. 
Average error reduction over single-task performance is 12.8% for in-domain; 8.9% for out-of-domain. In-domain error reduction over hard parameter sharing is 14.8%." ] } ] } ], "1704.05907": [ { "question": "what state of the accuracy did they obtain?", "answers": [ { "answer": "51.5", "type": "abstractive" } ], "q_uid": "bde6fa2057fa21b38a91eeb2bb6a3ae7fb3a2c62", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015)." ] } ] } ], "2001.08051": [ { "question": "How is the proficiency score calculated?", "answers": [ { "answer": "They used 6 indicators for proficiency (same for written and spoken) each marked by bad, medium or good by one expert.", "type": "abstractive" } ], "q_uid": "9ebb2adf92a0f8db99efddcade02a20a219ca7d9", "evidence": [ { "raw_evidence": [ "Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.", "The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible.", "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences." ], "highlighted_evidence": [ "Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.", "The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table ", "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences." ] } ] }, { "question": "What proficiency indicators are used to the score the utterances?", "answers": [ { "answer": "6 indicators:\n- lexical richness\n- pronunciation and fluency\n- syntactical correctness\n- fulfillment of delivery\n- coherence and cohesion\n- communicative, descriptive, narrative skills", "type": "abstractive" } ], "q_uid": "973f6284664675654cc9881745880a0e88f3280e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.", "Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. 
Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.", " Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively." ] } ] }, { "question": "What accuracy is achieved by the speech recognition system?", "answers": [ { "answer": "Accuracy not available: WER results are reported 42.6 German, 35.9 English", "type": "abstractive" } ], "q_uid": "0a3a8d1b0cbac559f7de845d845ebbfefb91135e", "evidence": [ { "raw_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ], "highlighted_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ] } ] }, { "question": "How is the speech recognition system evaluated?", "answers": [ { "answer": "Speech recognition system is evaluated using WER metric.", "type": "abstractive" } ], "q_uid": "ec2b8c43f14227cf74f9b49573cceb137dd336e7", "evidence": [ { "raw_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ], "highlighted_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." 
] } ] }, { "question": "How many of the utterances are transcribed?", "answers": [ { "answer": "Total number of transcribed utterances including Train and Test for both Eng and Ger language is 5562 (2188 cleaned)", "type": "abstractive" } ], "q_uid": "5e5460ea955d8bce89526647dd7c4f19b173ab34", "evidence": [ { "raw_evidence": [ "Speakers were assigned either to training or evaluation sets, with proportions of $\\frac{2}{3}$ and $\\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded.", "FLOAT SELECTED: Table 7: Statistics from the spoken data sets (2017) used for ASR." ], "highlighted_evidence": [ "Speakers were assigned either to training or evaluation sets, with proportions of $\\frac{2}{3}$ and $\\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded.", "FLOAT SELECTED: Table 7: Statistics from the spoken data sets (2017) used for ASR." ] } ] }, { "question": "How many utterances are in the corpus?", "answers": [ { "answer": "Total number of utterances available is: 70607 (37344 ENG + 33263 GER)", "type": "abstractive" } ], "q_uid": "d7d611f622552142723e064f330d071f985e805c", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column \u201c#Q\u201d indicates the total number of different (written) questions presented to the pupils.", "Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column \u201c#Q\u201d indicates the total number of different (written) questions presented to the pupils.", "Table reports some statistics extracted from the acquired spoken data." 
] } ] } ], "1611.03382": [ { "question": "By how much does their model outperform both the state-of-the-art systems?", "answers": [ { "answer": "w.r.t Rouge-1 their model outperforms by 0.98% and w.r.t Rouge-L their model outperforms by 0.45%", "type": "abstractive" } ], "q_uid": "9555aa8de322396a16a07a5423e6a79dcd76816a", "evidence": [ { "raw_evidence": [ "Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark on summarization task consisting of 500 news articles. Each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to BIBREF2 , we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using beam-size of 10. As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses 15k decoder vocabulary, while previous methods use 69k or 200k.", "FLOAT SELECTED: Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model." ], "highlighted_evidence": [ "As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2.", "FLOAT SELECTED: Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model." ] } ] } ], "1909.06937": [ { "question": "What was the performance on the self-collected corpus?", "answers": [ { "answer": "F1 scores of 86.16 on slot filling and 94.56 on intent detection", "type": "abstractive" } ], "q_uid": "fa3312ae4bbed11a5bebd77caf15d651962e0b26", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Results on our CAIS dataset, where \u201c\u2020\u201d indicates our implementation of the S-LSTM." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Results on our CAIS dataset, where \u201c\u2020\u201d indicates our implementation of the S-LSTM." ] } ] }, { "question": "What is the size of their dataset?", "answers": [ { "answer": "10,001 utterances", "type": "abstractive" } ], "q_uid": "26c290584c97e22b25035f5458625944db181552", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Dataset statistics." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Dataset statistics." ] } ] } ], "1704.00939": [ { "question": "What was their performance?", "answers": [ { "answer": "beneficial impact of word-representations and basic pre-processing", "type": "extractive" } ], "q_uid": "e2e31ab279d3092418159dfd24760f0f0566e9d3", "evidence": [ { "raw_evidence": [ "In this section, we report the results obtained by our model according to challenge official evaluation metric, which is based cosine-similarity and described in BIBREF27 . Results are reported for three diverse configurations: (i) the full system; (ii) the system without using word embeddings (i.e. Glove and DepecheMood); and (iii) the system without using pre-processing. In Table TABREF17 we show model's performances on the challenge training data, in a 5-fold cross-validation setting.", "Further, the final performances obtained with our approach on the challenge test set are reported in Table TABREF18 . 
Consistently with the cross-validation performances shown earlier, we observe the beneficial impact of word-representations and basic pre-processing.", "FLOAT SELECTED: Table 1: Cross-validation results", "FLOAT SELECTED: Table 2: Final results" ], "highlighted_evidence": [ "In Table TABREF17 we show model's performances on the challenge training data, in a 5-fold cross-validation setting.\n\nFurther, the final performances obtained with our approach on the challenge test set are reported in Table TABREF18 . Consistently with the cross-validation performances shown earlier, we observe the beneficial impact of word-representations and basic pre-processing.", "FLOAT SELECTED: Table 1: Cross-validation results", "FLOAT SELECTED: Table 2: Final results" ] } ] } ], "1707.08559": [ { "question": "What were their results?", "answers": [ { "answer": "Best model achieved F-score 74.7 on NALCS and F-score of 70.0 on LMS on test set", "type": "abstractive" } ], "q_uid": "1a8b7d3d126935c09306cacca7ddb4b953ef68ab", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Test Results on the NALCS (English) and LMS (Traditional Chinese) datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test Results on the NALCS (English) and LMS (Traditional Chinese) datasets." ] } ] } ], "1912.10806": [ { "question": "What is the prediction accuracy of the model?", "answers": [ { "answer": "mean prediction accuracy 0.99582651\nS&P 500 Accuracy 0.99582651", "type": "abstractive" } ], "q_uid": "2e1ededb7c8460169cf3c38e6cde6de402c1e720", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Predicted Mean MPA results.", "FLOAT SELECTED: Table 2: S&P 500 predicted results." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Predicted Mean MPA results.", "FLOAT SELECTED: Table 2: S&P 500 predicted results." ] } ] } ], "1902.09393": [ { "question": "How much does this system outperform prior work?", "answers": [ { "answer": "The system outperforms by 27.7% the LSTM model, 38.5% the RL-SPINN model and 41.6% the Gumbel Tree-LSTM", "type": "abstractive" } ], "q_uid": "af75ad21dda25ec72311c2be4589efed9df2f482", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ] } ] }, { "question": "What are the baseline systems that are compared against?", "answers": [ { "answer": "The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM", "type": "abstractive" } ], "q_uid": "de12e059088e4800d7d89e4214a3997994dbc0d9", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ] } ] } ], "1909.13695": [ { "question": "What systems are tested?", "answers": [ { "answer": "BULATS i-vector/PLDA\nBULATS x-vector/PLDA\nVoxCeleb x-vector/PLDA\nPLDA adaptation (X1)\n Extractor fine-tuning (X2) ", "type": "abstractive" } ], "q_uid": "52e8f79814736fea96fd9b642881b476243e1698", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2. 
% EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.", "FLOAT SELECTED: Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.", "Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets.", "In addition to the models trained on the BULATS data, it is also interesting to investigate the application of \u201cout-of-the-box\" models for standard speaker verification tasks to this non-native speaker verification task as there is limited amounts of non-native learner English data that is publicly available. In this paper, the Kaldi-released BIBREF19 VoxCeleb x-vector/PLDA system was used as imported models, which was trained on augmented VoxCeleb 1 BIBREF17 and VoxCeleb 2 BIBREF18. There are more than 7,000 speakers in the VoxCeleb dataset with more than 2,000 hours of audio data, making it the largest publicly available speaker recognition dataset. 30 dimensional mel-frequency cepstral coefficients (MFCCs) were used as input features and system configurations were the same as the BULATS x-vector/PLDA one. It can be seen from Table TABREF10 that these out-of-domain models gave worse performance than baseline systems trained on a far smaller amount of BULATS data due to domain mismatch. Thus, two kinds of in-domain adaptation strategies were explored to make use of the BULATS training set: PLDA adaptation and x-vector extractor fine-tuning. For PLDA adaptation, x-vectors of the BULATS training set were first extracted using the VoxCeleb-trained x-vector extractor, and then employed to adapt the VoxCeleb-trained PLDA model with their mean and variance. For x-vector extractor fine-tuning, with all other layers of the VoxCeleb-trained model kept still, the output layer was re-initialised using the BULATS training set with the number of targets adjusted accordingly, and then all layers were fine-tuned on the BULATS training set. Here the PLDA adaptation system is referred to as X1 and the extractor fine-tuning system is referred to as X2. Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively \u201cin-domain\" extractor prior to the PLDA back-end." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. % EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.", "FLOAT SELECTED: Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.", "Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets.", "Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively \u201cin-domain\" extractor prior to the PLDA back-end." 
] } ] } ], "1909.11467": [ { "question": "What are the 12 categories devised?", "answers": [ { "answer": "Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study", "type": "abstractive" } ], "q_uid": "3d6015d722de6e6297ba7bfe7cb0f8a67f660636", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 ." ] } ] } ], "1909.00361": [ { "question": "How big are the datasets used?", "answers": [ { "answer": "Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified", "type": "abstractive" } ], "q_uid": "3fb4334e5a4702acd44bd24eb1831bb7e9b98d31", "evidence": [ { "raw_evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail.", "Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance.", "FLOAT SELECTED: Table 1: Statistics of CMRC 2018 and DRCD." ], "highlighted_evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "The resource in source language was chosen as SQuAD BIBREF4 training data.", "We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance.", "FLOAT SELECTED: Table 1: Statistics of CMRC 2018 and DRCD." ] } ] } ], "1908.11546": [ { "question": "How better is gCAS approach compared to other approaches?", "answers": [ { "answer": "For entity F1 in the movie, taxi and restaurant domain it results in scores of 50.86, 64, and 60.35. For success, it results it outperforms in the movie and restaurant domain with scores of 77.95 and 71.52", "type": "abstractive" } ], "q_uid": "8a0a51382d186e8d92bf7e78277a1d48958758da", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 5: Entity F1 and Success F1 at dialogue level." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Entity F1 and Success F1 at dialogue level." 
] } ] } ], "1905.07464": [ { "question": "What were the sizes of the test sets?", "answers": [ { "answer": "Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences", "type": "abstractive" } ], "q_uid": "4a4616e1a9807f32cca9b92ab05e65b05c2a1bf5", "evidence": [ { "raw_evidence": [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively.", "FLOAT SELECTED: Table 1: Characteristics of datasets" ], "highlighted_evidence": [ "Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems.", "We provide summary statistics about these datasets in Table TABREF3 . ", "FLOAT SELECTED: Table 1: Characteristics of datasets" ] } ] } ], "1901.09755": [ { "question": "Which datasets are used?", "answers": [ { "answer": "ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps", "type": "abstractive" } ], "q_uid": "93b299acfb6fad104b9ebf4d0585d42de4047051", "evidence": [ { "raw_evidence": [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "FLOAT SELECTED: Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. 
B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.", "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.", "In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 ." ], "highlighted_evidence": [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "FLOAT SELECTED: Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.", "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.\n\nIn order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 ." ] } ] } ], "2002.05829": [ { "question": "How much does it minimally cost to fine-tune some model according to benchmarking framework?", "answers": [ { "answer": "$1,728", "type": "abstractive" } ], "q_uid": "02417455c05f09d89c2658f39705ac1df1daa0cd", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. 
Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ] } ] }, { "question": "What models are included in baseline benchmarking results?", "answers": [ { "answer": "BERT, XLNET RoBERTa, ALBERT, DistilBERT", "type": "abstractive" } ], "q_uid": "6ce057d3b88addf97a30cb188795806239491154", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ] } ] } ], "1912.00864": [ { "question": "How much more accurate is the model than the baseline?", "answers": [ { "answer": "For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%. ", "type": "abstractive" } ], "q_uid": "572458399a45fd392c3a4e07ce26dcff2ad5a07d", "evidence": [ { "raw_evidence": [ "NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combinations closely match the questions. Thus, conclusions and supplements in the answers are consistent with each other and avoid confusion made by several different conclusion-supplement answers assigned to a single non-factoid questions. 
Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations.", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean \u201cgrammar\u201d to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the \u201ccontent matched the questions\u201d to refer to \u201cfocus\u201d and \u201cstructure and coherence\u201d in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently.", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement.", "FLOAT SELECTED: Table 4: ROUGE-L/BLEU-4 for nfL6.", "FLOAT SELECTED: Table 6: Human evaluation (nfL6)." ], "highlighted_evidence": [ "Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. ", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. ", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM.", "FLOAT SELECTED: Table 4: ROUGE-L/BLEU-4 for nfL6.", "FLOAT SELECTED: Table 6: Human evaluation (nfL6)." ] } ] } ], "1910.11204": [ { "question": "What is new state-of-the-art performance on CoNLL-2009 dataset?", "answers": [ { "answer": "In closed setting 84.22 F1 and in open 87.35 F1.", "type": "abstractive" } ], "q_uid": "33d864153822bd378a98a732ace720e2c06a6bc6", "evidence": [ { "raw_evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. 
Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task.", "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "highlighted_evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings.", "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ] } ] }, { "question": "What are two strong baseline methods authors refer to?", "answers": [ { "answer": "Marcheggiani and Titov (2017) and Cai et al. (2018)", "type": "abstractive" } ], "q_uid": "bab8c69e183bae6e30fc362009db9b46e720225e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ] } ] } ], "1810.03459": [ { "question": "What languages do they use?", "answers": [ { "answer": "Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages.", "type": "abstractive" } ], "q_uid": "1adbdb5f08d67d8b05328ccc86d297ac01bf076c", "evidence": [ { "raw_evidence": [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "FLOAT SELECTED: Table 1: Details of the BABEL data used for performing the multilingual experiments" ], "highlighted_evidence": [ "Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "FLOAT SELECTED: Table 1: Details of the BABEL data used for performing the multilingual experiments" ] } ] } ], "2002.10361": [ { "question": "How large is the corpus?", "answers": [ { "answer": "It contains 106,350 documents", "type": "abstractive" } ], "q_uid": "38c74ab8292a94fc5a82999400ee9c06be19f791", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." 
] } ] }, { "question": "How large is the dataset?", "answers": [ { "answer": "over 104k documents", "type": "abstractive" } ], "q_uid": "16af38f7c4774637cf8e04d4b239d6d72f0b0a3a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ] } ] } ], "1810.10254": [ { "question": "What was their perplexity score?", "answers": [ { "answer": "Perplexity score 142.84 on dev and 138.91 on test", "type": "abstractive" } ], "q_uid": "657edbf39c500b2446edb9cca18de2912c628b7d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. Language Modeling Results (in perplexity).", "UTF8gbsn The pointer-generator significantly outperforms the Seq2Seq with attention model by 3.58 BLEU points on the test set as shown in Table TABREF8 . Our language modeling result is given in Table TABREF9 . Based on the empirical result, adding generated samples consistently improve the performance of all models with a moderate margin around 10% in perplexity. After all, our proposed method still slightly outperforms the heuristic from linguistic constraint. In addition, we get a crucial gain on performance by adding syntax representation of the sequences." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Language Modeling Results (in perplexity).", "Our language modeling result is given in Table TABREF9 ." ] } ] } ], "1703.06492": [ { "question": "In which setting they achieve the state of the art?", "answers": [ { "answer": "in open-ended task esp. for counting-type questions ", "type": "abstractive" } ], "q_uid": "0c7823b27326b3f5dff51f32f45fc69c91a4e06d", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4. Evaluation results on VQA dataset [1]. \u201d-\u201d indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) are the results by using different thresholds. Note that our VGGNet is same as CoAtt+VGG." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4. Evaluation results on VQA dataset [1]. \u201d-\u201d indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) are the results by using different thresholds. Note that our VGGNet is same as CoAtt+VGG." 
] } ] } ], "1909.01958": [ { "question": "On what dataset is Aristo system trained?", "answers": [ { "answer": "Aristo Corpus\nRegents 4th\nRegents 8th\nRegents `12th\nARC-Easy\nARC-challenge ", "type": "abstractive" } ], "q_uid": "384d571e4017628ebb72f3debb2846efaf0cb0cb", "evidence": [ { "raw_evidence": [ "Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \\times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25).", "The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.", "FLOAT SELECTED: Table 3: Dataset partition sizes (number of questions)." ], "highlighted_evidence": [ "Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \\times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25).", "The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.", "FLOAT SELECTED: Table 3: Dataset partition sizes (number of questions)." ] } ] } ], "1806.07711": [ { "question": "How many roles are proposed?", "answers": [ { "answer": "12", "type": "abstractive" } ], "q_uid": "0c09a0e8f9c5bdb678563be49f912ab6e3f97619", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Most common syntactic patterns for each semantic role." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Most common syntactic patterns for each semantic role." 
] } ] } ], "1912.03457": [ { "question": "What language technologies have been introduced in the past?", "answers": [ { "answer": "- Font & Keyboard\n- Speech-to-Text\n- Text-to-Speech\n- Text Prediction\n- Spell Checker\n- Grammar Checker\n- Text Search\n- Machine Translation\n- Voice to Text Search\n- Voice to Speech Search", "type": "abstractive" } ], "q_uid": "50716cc7f589b9b9f3aca806214228b063e9695b", "evidence": [ { "raw_evidence": [ "Often, many state-of-the-art tools cannot be applied to low-resource languages due to the lack of data. Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools. Machine Translation can potentially be used as a fix to bridge the gap. Translation engines can help in translating documents from minority languages to majority languages. This allows the pool of data to be used in a number of NLP tasks like sentiment analysis and summarization. Doing so allows us to leverage the existing body of work in NLP done on resource-rich languages and subsequently apply it to the resource-poor languages, thereby foregoing any attempt to reinvent the wheel for these languages. This ensures a quicker and wider impact.BIBREF16 performs sentiment analysis on Chinese customer reviews by translating them to English. They observe that the quality of machine translation systems are sufficient for sentiment analysis to be performed on the automatically translated texts without a substantial trade-off in accuracy.", "FLOAT SELECTED: Table 1: Enabling language technologies, their availability and quality ( ? ? ? - excellent quality technology, ?? - moderately good but usable, ? - rudimentary and not practically useful) for differently resourced languages, and their data/knowledge requirements (? ? ? - very high data/expertise, ?? - moderate, ? - nominal and easily procurable). This information is based on authors\u2019 analysis and personal experience." ], "highlighted_evidence": [ "Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools.", "FLOAT SELECTED: Table 1: Enabling language technologies, their availability and quality ( ? ? ? - excellent quality technology, ?? - moderately good but usable, ? - rudimentary and not practically useful) for differently resourced languages, and their data/knowledge requirements (? ? ? - very high data/expertise, ?? - moderate, ? - nominal and easily procurable). This information is based on authors\u2019 analysis and personal experience." ] } ] } ], "1901.05280": [ { "question": "what were the baselines?", "answers": [ { "answer": "2008 Punyakanok et al. \n2009 Zhao et al. + ME \n2008 Toutanova et al. \n2010 Bjorkelund et al. \n2015 FitzGerald et al. \n2015 Zhou and Xu \n2016 Roth and Lapata \n2017 He et al. \n2017 Marcheggiani et al.\n2017 Marcheggiani and Titov \n2018 Tan et al. \n2018 He et al. \n2018 Strubell et al. \n2018 Cai et al. \n2018 He et al. \n2018 Li et al. \n", "type": "abstractive" } ], "q_uid": "73bbe0b6457423f08d9297a0951381098bd89a2b", "evidence": [ { "raw_evidence": [ "Generally, the above work is summarized in Table TABREF2 . 
Considering motivation, our work is most closely related to the work of BIBREF14 Fitzgerald2015, which also tackles span and dependency SRL in a uniform fashion. The essential difference is that their model employs the syntactic features and takes pre-identified predicates as inputs, while our model puts syntax aside and jointly learns and predicts predicates and arguments.", "FLOAT SELECTED: Table 1: A chronicle of related work for span and dependency SRL. SA represents syntax-aware system (no + indicates syntaxagnostic system) and ST indicates sequence tagging model. F1 is the result of single model on official test set." ], "highlighted_evidence": [ "Generally, the above work is summarized in Table TABREF2 . Considering motivation, our work is most closely related to the work of BIBREF14 Fitzgerald2015, which also tackles span and dependency SRL in a uniform fashion. The essential difference is that their model employs the syntactic features and takes pre-identified predicates as inputs, while our model puts syntax aside and jointly learns and predicts predicates and arguments.", "FLOAT SELECTED: Table 1: A chronicle of related work for span and dependency SRL. SA represents syntax-aware system (no + indicates syntaxagnostic system) and ST indicates sequence tagging model. F1 is the result of single model on official test set." ] } ] } ], "1909.11297": [ { "question": "Which soft-selection approaches are evaluated?", "answers": [ { "answer": "LSTM and BERT ", "type": "abstractive" } ], "q_uid": "e292676c8c75dd3711efd0e008423c11077938b1", "evidence": [ { "raw_evidence": [ "Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. This usually results in attention distraction BIBREF7, i.e., attending on noisy or misleading words, or opinion words from other aspects. Take Figure FIGREF1 as an example, for the aspect place in the sentence \u201cthe food is usually good but it certainly is not a relaxing place to go\u201d, we visualize the attention weights from the model ATAE-LSTM BIBREF2. As we can see, the words \u201cgood\u201d and \u201cbut\u201d are dominant in attention weights. However, \u201cgood\u201d is used to describe the aspect food rather than place, \u201cbut\u201d is not so related to place either. The true opinion snippet \u201ccertainly is not a relaxing place\u201d receives low attention weights, leading to the wrong prediction towards the aspect place.", "FLOAT SELECTED: Table 2: Experimental results (accuracy %) on all the datasets. Models in the first part are baseline methods. The results in the first part (except BERT-Original) are obtained from the prior work (Tay et al., 2018). Avg column presents macro-averaged results across all the datasets." ], "highlighted_evidence": [ "Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. ", "FLOAT SELECTED: Table 2: Experimental results (accuracy %) on all the datasets. Models in the first part are baseline methods. The results in the first part (except BERT-Original) are obtained from the prior work (Tay et al., 2018). Avg column presents macro-averaged results across all the datasets." 
] } ] } ], "1911.01680": [ { "question": "How big is slot filing dataset?", "answers": [ { "answer": "Dataset has 1737 train, 497 dev and 559 test sentences.", "type": "abstractive" } ], "q_uid": "2d47cdf2c1e0c64c73518aead1b94e0ee594b7a5", "evidence": [ { "raw_evidence": [ "In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset. We use the following hyper parameters in our model: We set the word embedding and POS embedding to 768 and 30 respectively; The pre-trained BERT BIBREF17 embedding are used to initialize word embeddings; The hidden dimension of the Bi-LSTM, GCN and feed forward networks are 200; the hyper parameters $\\alpha $, $\\beta $ and $\\gamma $ are all set to 0.1; We use Adam optimizer with learning rate 0.003 to train the model. We use micro-averaged F1 score on all labels as the evaluation metric.", "FLOAT SELECTED: Table 1: Label Statistics" ], "highlighted_evidence": [ "In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset.", "FLOAT SELECTED: Table 1: Label Statistics" ] } ] } ], "1809.08298": [ { "question": "How large is the dataset they generate?", "answers": [ { "answer": "4.756 million sentences", "type": "abstractive" } ], "q_uid": "dafa760e1466e9eaa73ad8cb39b229abd5babbda", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Number of run-on (RO) and non-run-on (Non-RO) sentences in our datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Number of run-on (RO) and non-run-on (Non-RO) sentences in our datasets." ] } ] } ], "1805.04033": [ { "question": "Which existing models does this approach outperform?", "answers": [ { "answer": "RNN-context, SRB, CopyNet, RNN-distract, DRGD", "type": "abstractive" } ], "q_uid": "bd99aba3309da96e96eab3e0f4c4c8c70b51980a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics" ] } ] } ], "1910.06748": [ { "question": "What languages are represented in the dataset?", "answers": [ { "answer": "EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO", "type": "abstractive" } ], "q_uid": "8ad815b29cc32c1861b77de938c7269c9259a064", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2. Twitter corpus distribution by language label.", "We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. 
Twitter corpus distribution by language label.", "We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus." ] } ] } ], "1911.08673": [ { "question": "How faster is training and decoding compared to former models?", "answers": [ { "answer": "Proposed vs best baseline:\nDecoding: 8541 vs 8532 tokens/sec\nTraining: 8h vs 8h", "type": "abstractive" } ], "q_uid": "9aa52b898d029af615b95b18b79078e9bed3d766", "evidence": [ { "raw_evidence": [ "In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime.", "FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU." ], "highlighted_evidence": [ "The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest.", "FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU." ] } ] } ], "1611.04642": [ { "question": "What datasets are used to evaluate the model?", "answers": [ { "answer": "WN18 and FB15k", "type": "abstractive" } ], "q_uid": "b13d0e463d5eb6028cdaa0c36ac7de3b76b5e933", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k." ] } ] } ], "1909.03242": [ { "question": "What metadata is included?", "answers": [ { "answer": "besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish data and claim date", "type": "abstractive" } ], "q_uid": "e9ccc74b1f1b172224cf9f01e66b1fa9e34d2593", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown." 
] } ] } ], "1905.12260": [ { "question": "How much important is the visual grounding in the learning of the multilingual representations?", "answers": [ { "answer": "performance is significantly degraded without pixel data", "type": "abstractive" } ], "q_uid": "e8029ec69b0b273954b4249873a5070c2a0edb8a", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Crosslingual semantic similarity scores (Spearman\u2019s \u03c1) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Crosslingual semantic similarity scores (Spearman\u2019s \u03c1) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded." ] } ] } ], "1804.08050": [ { "question": "By how much does their method outperform the multi-head attention model?", "answers": [ { "answer": "Their average improvement in Character Error Rate over the best MHA model was 0.33 percent points.", "type": "abstractive" } ], "q_uid": "5a9f94ae296dda06c8aec0fb389ce2f68940ea88", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Experimental results." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Experimental results." ] } ] }, { "question": "How large is the corpus they use?", "answers": [ { "answer": "449050", "type": "abstractive" } ], "q_uid": "85912b87b16b45cde79039447a70bd1f6f1f8361", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Experimental conditions." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Experimental conditions." ] } ] } ], "2002.06424": [ { "question": "How many shared layers are in the system?", "answers": [ { "answer": "1", "type": "abstractive" } ], "q_uid": "58f50397a075f128b45c6b824edb7a955ee8cba1", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets." ] } ] }, { "question": "How many additional task-specific layers are introduced?", "answers": [ { "answer": "2 for the ADE dataset and 3 for the CoNLL04 dataset", "type": "abstractive" } ], "q_uid": "9adcc8c4a10fa0d58f235b740d8d495ee622d596", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets." 
] } ] } ], "1909.05246": [ { "question": "How many layers of self-attention does the model have?", "answers": [ { "answer": "1, 4, 8, 16, 32, 64", "type": "abstractive" } ], "q_uid": "8568c82078495ab421ecbae38ddd692c867eac09", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 6: Evaluation of effect of self-attention mechanism using DSTC2 dataset (Att: Attetnion mechanism; UT: Universal Transformers; ACT: Adaptive Computation Time; NH: Number of attention heads)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Evaluation of effect of self-attention mechanism using DSTC2 dataset (Att: Attetnion mechanism; UT: Universal Transformers; ACT: Adaptive Computation Time; NH: Number of attention heads)" ] } ] } ], "1606.04631": [ { "question": "what are the state of the art methods?", "answers": [ { "answer": "S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al.", "type": "abstractive" } ], "q_uid": "b3fcab006a9e51a0178a1f64d1d084a895bd8d5c", "evidence": [ { "raw_evidence": [ "We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods. The result of \u201cLSTM\u201d in first row refer from BIBREF15 and the last row but one denotes the best model combining local temporal structure using C3D with global temporal structure utilizing temporal attention in BIBREF17 . From the first two rows, our unidirectional joint LSTM shows rapid improvement, and comparing with S2VT-VGG model in line 3, it also demonstrates some superiority. Even LSTM-E jointly models video and descriptions representation by minimizing the distance between video and corresponding sentence, our Joint-BiLSTM reinforced obtains better performance from bidirectional encoding and separated visual and language models.", "FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better)." ], "highlighted_evidence": [ "We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods.", "FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better)." ] } ] } ], "2003.07996": [ { "question": "Which four languages do they experiment with?", "answers": [ { "answer": "German, English, Italian, Chinese", "type": "abstractive" } ], "q_uid": "6baf5d7739758bdd79326ce8f50731c785029802", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Datasets used for various SER experiments." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Datasets used for various SER experiments." ] } ] } ], "1910.10288": [ { "question": "Does DCA or GMM-based attention perform better in experiments?", "answers": [ { "answer": "About the same performance", "type": "abstractive" } ], "q_uid": "5c4c8e91d28935e1655a582568cc9d94149da2b2", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. MOS naturalness results along with 95% confidence intervals for the Lessac and LJ datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. MOS naturalness results along with 95% confidence intervals for the Lessac and LJ datasets." 
] } ] } ], "1908.06083": [ { "question": "What evaluation metric is used?", "answers": [ { "answer": "F1 and Weighted-F1", "type": "abstractive" } ], "q_uid": "3f326c003be29c8eac76b24d6bba9608c75aa7ea", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (\u201cwith context\u201d) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (\u201cwith context\u201d) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results." ] } ] } ], "1910.12129": [ { "question": "Is any data-to-text generation model trained on this new corpus, what are the results?", "answers": [ { "answer": "Yes, Transformer based seq2seq is evaluated with average BLEU 0.519, METEOR 0.388, ROUGE 0.631 CIDEr 2.531 and SER 2.55%.", "type": "abstractive" } ], "q_uid": "14e259a312e653f8fc0d52ca5325b43c3bdfb968", "evidence": [ { "raw_evidence": [ "The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model. For decoding we employ beam search of width 10 ($\\alpha = 1.0$). The generated candidates are then reranked according to the heuristically determined slot coverage score. Before training the model on the ViGGO dataset, we confirmed on the E2E dataset that it performed on par with, or even slightly better than, the strong baseline models from the E2E NLG Challenge, namely, TGen BIBREF20 and Slug2Slug BIBREF21.", "We evaluate our model's performance on the ViGGO dataset using the following standard NLG metrics: BLEU BIBREF22, METEOR BIBREF23, ROUGE-L BIBREF24, and CIDEr BIBREF25. Additionally, with our heuristic slot error rate (SER) metric we approximate the percentage of failed slot realizations (i.e., missed, incorrect, or hallucinated) across the test set. The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 4: Baseline system performance on the ViGGO test set. Despite individual models (Bo3 \u2013 best of 3 experiments) often having better overall scores, we consider the Ao3 (average of 3) results the most objective." ], "highlighted_evidence": [ "The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model.", "The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 4: Baseline system performance on the ViGGO test set. Despite individual models (Bo3 \u2013 best of 3 experiments) often having better overall scores, we consider the Ao3 (average of 3) results the most objective." ] } ] } ], "1911.02821": [ { "question": "What dataset did they use?", "answers": [ { "answer": "weibo-100k, Ontonotes, LCQMC and XNLI", "type": "abstractive" } ], "q_uid": "34fab25d9ceb9c5942daf4ebdab6c5dd4ff9d3db", "evidence": [ { "raw_evidence": [ "Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. 
Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon\u2019s test shows that significant difference ($p< 0.01$) exits between our model with baseline models.", "FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019)." ], "highlighted_evidence": [ "Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets.", "FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019)." ] } ] } ], "1906.10551": [ { "question": "What are the 12 AV approaches which are examined?", "answers": [ { "answer": "MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD", "type": "abstractive" } ], "q_uid": "863d5c6305e5bb4b14882b85b6216fa11bcbf053", "evidence": [ { "raw_evidence": [ "As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 .", "FLOAT SELECTED: Table 2: All 12 AVmethods, classified according to their properties." ], "highlighted_evidence": [ "The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 .", "FLOAT SELECTED: Table 2: All 12 AVmethods, classified according to their properties." ] } ] } ], "1808.03430": [ { "question": "What are the results achieved from the introduced method?", "answers": [ { "answer": "Their model resulted in values of 0.476, 0.672 and 0.893 for recall at position 1,2 and 5 respectively in 10 candidates.", "type": "abstractive" } ], "q_uid": "01edeca7b902ae3fd66264366bf548acea1db364", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison of different models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison of different models." ] } ] } ], "1911.05153": [ { "question": "How big is the performance improvement when the proposed methods are used?", "answers": [ { "answer": "Data augmentation (es) improved Adv es by 20% compared to the baseline \nData augmentation (cs) improved Adv cs by 16.5% compared to the baseline\nData augmentation (cs+es) improved both Adv cs and Adv es by at least 10% compared to the baseline \nAll models show improvements over adversarial sets \n", "type": "abstractive" } ], "q_uid": "234ccc1afcae4890e618ff2a7b06fc1e513ea640", "evidence": [ { "raw_evidence": [ "The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). 
We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little.", "FLOAT SELECTED: Table 3: Accuracy over clean and adversarial test sets. Note that data augmentation and logit pairing loss decrease accuracy on clean test sets and increase accuracy on the adversarial test sets." ], "highlighted_evidence": [ "The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg).", "FLOAT SELECTED: Table 3: Accuracy over clean and adversarial test sets. Note that data augmentation and logit pairing loss decrease accuracy on clean test sets and increase accuracy on the adversarial test sets." ] } ] } ], "1811.02906": [ { "question": "By how much does transfer learning improve performance on this task?", "answers": [ { "answer": "In task 1 best transfer learning strategy improves F1 score by 4.4% and accuracy score by 3.3%, in task 2 best transfer learning strategy improves F1 score by 2.9% and accuracy score by 1.7%", "type": "abstractive" } ], "q_uid": "4704cbb35762d0172f5ac6c26b67550921567a65", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Transfer learning performance (Task 1)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Transfer learning performance (Task 1)" ] } ] } ], "1903.09588": [ { "question": "How much do they outperform previous state-of-the-art?", "answers": [ { "answer": "On subtask 3 best proposed model has F1 score of 92.18 compared to best previous F1 score of 88.58.\nOn subtask 4 best proposed model has 85.9, 89.9 and 95.6 compared to best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity.", "type": "abstractive" } ], "q_uid": "e9d9bb87a5c4faa965ceddd98d8b80d4b99e339e", "evidence": [ { "raw_evidence": [ "Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings.", "FLOAT SELECTED: Table 4: Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014).", "FLOAT SELECTED: Table 5: Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014), NRCCanada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016). \u201c-\u201d means not reported." ], "highlighted_evidence": [ "Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. 
For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings.", "FLOAT SELECTED: Table 4: Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014).", "FLOAT SELECTED: Table 5: Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014), NRCCanada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016). \u201c-\u201d means not reported." ] } ] } ], "1904.01608": [ { "question": "What are the citation intent labels in the datasets?", "answers": [ { "answer": "Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset.", "type": "abstractive" } ], "q_uid": "9349acbfce95cb5d6b4d09ac626b55a9cb90e55e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)" ] } ] } ], "1911.13066": [ { "question": "What accuracy score do they obtain?", "answers": [ { "answer": "the best performing model obtained an accuracy of 0.86", "type": "abstractive" } ], "q_uid": "160e6d2fc6e04bb0b4ee8d59c06715355dec4a17", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. Performance evaluation of variations of the proposed model and baseline. Showing highest scores in boldface." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Performance evaluation of variations of the proposed model and baseline. Showing highest scores in boldface." ] } ] }, { "question": "What is the 12 class bilingual text?", "answers": [ { "answer": "Appreciation, Satisfied, Peripheral complaint, Demanded inquiry, Corruption, Lagged response, Unresponsive, Medicine payment, Adverse behavior, Grievance ascribed and Obnoxious/irrelevant", "type": "abstractive" } ], "q_uid": "30dad5d9b4a03e56fa31f932c879aa56e11ed15b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1. Description of class label along with distribution of each class (in %) in the acquired dataset" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1. Description of class label along with distribution of each class (in %) in the acquired dataset" ] } ] } ], "1910.13215": [ { "question": "What evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?", "answers": [ { "answer": "BLEU scores", "type": "abstractive" } ], "q_uid": "98eb245c727c0bd050d7686d133fa7cd9d25a0fb", "evidence": [ { "raw_evidence": [ "We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. 
The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7.", "FLOAT SELECTED: Table 1: BLEU scores for the test set: bold highlights our best results. \u2020 indicates a system is significantly different from its text-only counterpart (p-value \u2264 0.05)." ], "highlighted_evidence": [ "After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set.", "FLOAT SELECTED: Table 1: BLEU scores for the test set: bold highlights our best results. \u2020 indicates a system is significantly different from its text-only counterpart (p-value \u2264 0.05)." ] } ] } ], "1909.05855": [ { "question": "What are the domains covered in the dataset?", "answers": [ { "answer": "Alarm\nBank\nBus\nCalendar\nEvent\nFlight\nHome\nHotel\nMedia\nMovie\nMusic\nRentalCar\nRestaurant\nRideShare\nService\nTravel\nWeather", "type": "abstractive" } ], "q_uid": "6dcbe941a3b0d5193f950acbdc574f1cfb007845", "evidence": [ { "raw_evidence": [ "The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset.", "FLOAT SELECTED: Table 2: The number of intents (services in parentheses) and dialogues for each domain in the train and dev sets. Multidomain dialogues contribute to counts of each domain. The domain Service includes salons, dentists, doctors etc." ], "highlighted_evidence": [ "The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. ", "FLOAT SELECTED: Table 2: The number of intents (services in parentheses) and dialogues for each domain in the train and dev sets. Multidomain dialogues contribute to counts of each domain. The domain Service includes salons, dentists, doctors etc." ] } ] } ], "1703.02507": [ { "question": "Which other unsupervised models are used for comparison?", "answers": [ { "answer": "Sequential (Denoising) Autoencoder, TF-IDF BOW, SkipThought, FastSent, Siamese C-BOW, C-BOW, C-PHRASE, ParagraphVector", "type": "extractive" } ], "q_uid": "37eba8c3cfe23778498d95a7dfddf8dfb725f8e2", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Comparison of the performance of different models on different supervised evaluation tasks. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of accuracy for each category (For MSRP, we take the accuracy). )", "We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . 
The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.", "Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.", "We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.", "The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.", "BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.", "The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. 
Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .", "FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.", "In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.", "Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.", "FLOAT SELECTED: Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of entries for each correlation measure.", "In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. Table TABREF21 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.", "Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison of the performance of different models on different supervised evaluation tasks. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of accuracy for each category (For MSRP, we take the accuracy). )", "We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16", "Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 .", "We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. 
", "The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. ", "They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.", "BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE.", "The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. ", "FastSent BIBREF16 is a sentence-level log-linear bag-of-words model.", "In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.", "Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. ", "FLOAT SELECTED: Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of entries for each correlation measure.", "In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models.", "Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies", "In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective." ] } ] } ], "1710.09340": [ { "question": "By how much does the new parser outperform the current state-of-the-art?", "answers": [ { "answer": "Proposed method achieves 94.5 UAS and 92.4 LAS compared to 94.3 and 92.2 of best state-of-the -art greedy based parser. Best state-of-the art parser overall achieves 95.8 UAS and 94.6 LAS.", "type": "abstractive" } ], "q_uid": "11dde2be9a69a025f2fc29ce647201fb5a4df580", "evidence": [ { "raw_evidence": [ "Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).", "We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. 
This may be because our system takes into consideration any permissible attachment between the focus word INLINEFORM0 and any word in INLINEFORM1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, ( INLINEFORM2 ), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.", "FLOAT SELECTED: Table 2: Accuracy comparison of state-of-theart transition-based dependency parsers on PT-SD. The \u201cType\u201d column shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies." ], "highlighted_evidence": [ "Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).\n\nWe even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead.", "FLOAT SELECTED: Table 2: Accuracy comparison of state-of-theart transition-based dependency parsers on PT-SD. The \u201cType\u201d column shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies." ] } ] } ], "1906.05474": [ { "question": "Could you tell me more about the metrics used for performance evaluation?", "answers": [ { "answer": "BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy", "type": "abstractive" } ], "q_uid": "b540cd4fe9dc4394f64d5b76b0eaa4d9e30fb728", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: BLUE tasks", "The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.", "The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The relations with types were compared to annotated data. We use the standard micro-average precision, recall, and F1-score metrics.", "The aim of the named entity recognition task is to predict mention spans given in the text BIBREF20 . 
The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans also must be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.", "The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.", "HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 ." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: BLUE tasks", "We use the standard overall accuracy to evaluate the performance", "We use the standard micro-average precision, recall, and F1-score metrics", "We evaluate the results by using the strict version of precision, recall, and F1-score.", "Following common practice, we evaluate similarity by using Pearson correlation coefficients.", "we followed the common practice and reported the example-based F1-score on the abstract level", "we followed the common practice and reported the example-based F1-score on the abstract level" ] } ] }, { "question": "which tasks are used in BLUE benchmark?", "answers": [ { "answer": "Inference task\nThe aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence, Document multilabel classification\nThe multilabel classification task predicts multiple labels from the texts., Relation extraction\nThe aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences., Named entity recognition\nThe aim of the named entity recognition task is to predict mention spans given in the text , Sentence similarity\nThe sentence similarity task is to predict similarity scores based on sentence pairs", "type": "extractive" } ], "q_uid": "41173179efa6186eef17c96f7cbd8acb29105b0e", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: BLUE tasks" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: BLUE tasks" ] } ] } ], "1906.11180": [ { "question": "What's the precision of the system?", "answers": [ { "answer": "0.8320 on semantic typing, 0.7194 on entity matching", "type": "abstractive" } ], "q_uid": "a996b6aee9be88a3db3f4127f9f77a18ed10caba", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.", "FLOAT SELECTED: Table 4. Overall performance of entity matching on R-Lite with and without type constraint." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.", "FLOAT SELECTED: Table 4. Overall performance of entity matching on R-Lite with and without type constraint." ] } ] } ], "1908.06379": [ { "question": "What are the performances obtained for PTB and CTB?", "answers": [ { "answer": ". 
On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing., On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing.", "type": "extractive" } ], "q_uid": "a6665074b067abb2676d5464f36b2cb07f6919d3", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Dependency parsing on PTB and CTB.", "FLOAT SELECTED: Table 4: Comparison of constituent parsing on PTB.", "FLOAT SELECTED: Table 5: Comparison of constituent parsing on CTB.", "Tables TABREF17, TABREF18 and TABREF19 compare our model to existing state-of-the-art, in which indicator Separate with our model shows the results of our model learning constituent or dependency parsing separately, (Sum) and (Concat) respectively represent the results with the indicated input token representation setting. On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-word, we only take the last sub-word vector of the word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single model of BERT achieves competitive performance with other ensemble models." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Dependency parsing on PTB and CTB.", "FLOAT SELECTED: Table 4: Comparison of constituent parsing on PTB.", "FLOAT SELECTED: Table 5: Comparison of constituent parsing on CTB.", "On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. ", "On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately." ] } ] } ], "1908.11365": [ { "question": "Is the proposed layer smaller in parameters than a Transformer?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "3288a50701a80303fd71c8c5ede81cbee14fa2c7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Tokenized case-sensitive BLEU (in parentheses: sacreBLEU) on WMT14 En-De translation task. #Param: number of model parameters. 4Dec: decoding time (seconds)/speedup on newstest2014 dataset with a batch size of 32. 4Train: training time (seconds)/speedup per training step evaluated on 0.5K steps with a batch size of 1K target tokens. Time is averaged over 3 runs using Tensorflow on a single TITAN X (Pascal). \u201c-\u201d: optimization failed and no result. \u201c?\u201d: the same as model 1\u00a9. \u2020 and \u2021: comparison against 11\u00a9 and 14\u00a9 respectively rather than 1\u00a9. Base: the baseline Transformer with base setting. Bold indicates best BLEU score. dpa and dpr: dropout rate on attention weights and residual connection. bs: batch size in tokens.", "FLOAT SELECTED: Table 5: Translation results on different tasks. Settings for BLEU score is given in Section 7.1. Numbers in bracket denote chrF score. Our model outperforms the vanilla base Transformer on all tasks. \u201cOurs\u201d: DS-Init+MAtt." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Tokenized case-sensitive BLEU (in parentheses: sacreBLEU) on WMT14 En-De translation task. #Param: number of model parameters. 4Dec: decoding time (seconds)/speedup on newstest2014 dataset with a batch size of 32. 4Train: training time (seconds)/speedup per training step evaluated on 0.5K steps with a batch size of 1K target tokens. Time is averaged over 3 runs using Tensorflow on a single TITAN X (Pascal). \u201c-\u201d: optimization failed and no result. \u201c?\u201d: the same as model 1\u00a9. \u2020 and \u2021: comparison against 11\u00a9 and 14\u00a9 respectively rather than 1\u00a9. Base: the baseline Transformer with base setting. Bold indicates best BLEU score. dpa and dpr: dropout rate on attention weights and residual connection. bs: batch size in tokens.", "FLOAT SELECTED: Table 5: Translation results on different tasks. Settings for BLEU score is given in Section 7.1. Numbers in bracket denote chrF score. Our model outperforms the vanilla base Transformer on all tasks. \u201cOurs\u201d: DS-Init+MAtt." ] } ] } ], "1708.09609": [ { "question": "What are the four forums the data comes from?", "answers": [ { "answer": "Darkode, Hack Forums, Blackhat and Nulled.", "type": "abstractive" } ], "q_uid": "ce807a42370bfca10fa322d6fa772e4a58a8dca1", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 3: Test set results at the NP level in within-forum and cross-forum settings for a variety of different systems. Using either Brown clusters or gazetteers gives mixed results on cross-forum performance: only one of the improvements (\u2020) is statistically significant with p < 0.05 according to a bootstrap resampling test. Gazetteers are unavailable for Blackhat and Nulled since we have no training data for those forums." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test set results at the NP level in within-forum and cross-forum settings for a variety of different systems. Using either Brown clusters or gazetteers gives mixed results on cross-forum performance: only one of the improvements (\u2020) is statistically significant with p < 0.05 according to a bootstrap resampling test. Gazetteers are unavailable for Blackhat and Nulled since we have no training data for those forums." ] } ] } ], "1911.11951": [ { "question": "What are the state-of-the-art models for the task?", "answers": [ { "answer": "To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset", "type": "extractive" } ], "q_uid": "79620a2b4b121b6d3edd0f7b1d4a8cc7ada0b516", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Performance of various methods on the FNC-I benchmark. The first and second groups are methods introduced during and after the challenge period, respectively. Best results are in bold.", "Results of our proposed method, the top three methods in the original Fake News Challenge, and the best-performing methods since the challenge's conclusion on the FNC-I test set are displayed in Table TABREF12. A confusion matrix for our method is presented in the Appendix. To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset. 
Notably, since the conclusion of the Fake News Challenge in 2017, the weighted-accuracy error-rate has decreased by 8%, signifying improved performance of NLP models and innovations in the domain of stance detection, as well as a continued interest in combating the spread of disinformation." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Performance of various methods on the FNC-I benchmark. The first and second groups are methods introduced during and after the challenge period, respectively. Best results are in bold.", "Results of our proposed method, the top three methods in the original Fake News Challenge, and the best-performing methods since the challenge's conclusion on the FNC-I test set are displayed in Table TABREF12. A confusion matrix for our method is presented in the Appendix. To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset." ] } ] } ], "2004.03788": [ { "question": "How much improvement do they get?", "answers": [ { "answer": "Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak.", "type": "abstractive" } ], "q_uid": "1cbca15405632a2e9d0a7061855642d661e3b3a7", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 7. Experimental results" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 7. Experimental results" ] } ] } ], "1910.10869": [ { "question": "Is this approach compared to some baseline?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "c54de73b36ab86534d18a295f3711591ce9e1784", "evidence": [ { "raw_evidence": [ "Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech-activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.", "FLOAT SELECTED: Table 2. Hot spot classification results with individual feature subsets, all features, and with individual feature sets left out." ], "highlighted_evidence": [ "Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-time results suggest that prosody, speech activity and words are of increasing importance in that order. ", "FLOAT SELECTED: Table 2. Hot spot classification results with individual feature subsets, all features, and with individual feature sets left out." ] } ] } ], "1911.08962": [ { "question": "What are the baselines?", "answers": [ { "answer": "CNN, LSTM, BERT", "type": "abstractive" } ], "q_uid": "a379c380ac9f67f824506951444c873713405eed", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results of baselines and scores of top 3 participants on valid and test datasets." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of baselines and scores of top 3 participants on valid and test datasets." 
] } ] } ], "1810.12885": [ { "question": "Which models do they try out?", "answers": [ { "answer": "DocQA, SAN, QANet, ASReader, LM, Random Guess", "type": "abstractive" } ], "q_uid": "a516b37ad9d977cb9d4da3897f942c1c494405fe", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 4: Performance of various methods and human." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Performance of various methods and human." ] } ] } ], "1911.02086": [ { "question": "Do they compare executionttime of their model against other models?", "answers": [ { "answer": "No", "type": "boolean" } ], "q_uid": "7f5ab9a53aef7ea1a1c2221967057ee71abb27cb", "evidence": [ { "raw_evidence": [ "The base model composed of DSConv layers without grouping achieves the state-of-the-art accuracy of 96.6% on the Speech Commands test set. The low-parameter model with GDSConv achieves almost the same accuracy of 96.4% with only about half the parameters. This validates the effectiveness of GDSConv for model size reduction. Table TABREF15 lists these results in comparison with related work. Compared to the DSConv network in BIBREF1, our network is more efficient in terms of accuracy for a given parameter count. Their biggest model has a 1.2% lower accuracy than our base model while having about 4 times the parameters. Choi et al. BIBREF3 has the most competitive results while we are still able to improve upon their accuracy for a given number of parameters. They are using 1D convolution along the time dimension as well which may be evidence that this yields better performance for audio processing or at least KWS.", "FLOAT SELECTED: Table 1. Comparison of results on the Speech Commands dataset [19].", "FLOAT SELECTED: Table 2. Results on Speech Commands version 2 [19]." ], "highlighted_evidence": [ "The base model composed of DSConv layers without grouping achieves the state-of-the-art accuracy of 96.6% on the Speech Commands test set. The low-parameter model with GDSConv achieves almost the same accuracy of 96.4% with only about half the parameters. ", "FLOAT SELECTED: Table 1. Comparison of results on the Speech Commands dataset [19].", "FLOAT SELECTED: Table 2. Results on Speech Commands version 2 [19]." ] } ] } ], "1810.02100": [ { "question": "Which English domains do they evaluate on?", "answers": [ { "answer": "Conll, Weblogs, Newsgroups, Reviews, Answers", "type": "extractive" } ], "q_uid": "c38a48d65bb21c314194090d0cc3f1a45c549dd6", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2.3: Labelled attachment scores achieved by the MST, Malt, and Mate parsers trained on the Conll training set and tested on different domains.", "We further evaluate our approach on our main evaluation corpus. The method is tested on both in-domain and out-of-domain parsing. Our DLM-based approach achieved large improvement on all five domains evaluated (Conll, Weblogs, Newsgroups, Reviews, Answers). We achieved the labelled and unlabelled improvements of up to 0.91% and 0.82% on Newsgroups domain. On average we achieved 0.6% gains for both labelled and unlabelled scores on four out-of-domain test sets. We also improved the in-domain accuracy by 0.36% (LAS) and 0.4% (UAS)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2.3: Labelled attachment scores achieved by the MST, Malt, and Mate parsers trained on the Conll training set and tested on different domains.", "We further evaluate our approach on our main evaluation corpus. The method is tested on both in-domain and out-of-domain parsing. 
Our DLM-based approach achieved large improvement on all five domains evaluated (Conll, Weblogs, Newsgroups, Reviews, Answers). " ] } ] } ], "1910.11235": [ { "question": "What are the competing models?", "answers": [ { "answer": "TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN.", "type": "abstractive" } ], "q_uid": "12ac76b77f22ed3bcb6430bcd0b909441d79751b", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Results on EMNLP2017 WMT News dataset. The 95 % confidence intervals from multiple trials are reported." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results on EMNLP2017 WMT News dataset. The 95 % confidence intervals from multiple trials are reported." ] } ] } ], "1909.01247": [ { "question": "What writing styles are present in the corpus?", "answers": [ { "answer": "current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials.", "type": "abstractive" } ], "q_uid": "0d7de323fd191a793858386d7eb8692cc924b432", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Stylistic domains and examples (bold marks annotated entities)" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Stylistic domains and examples (bold marks annotated entities)" ] } ] } ], "1908.06151": [ { "question": "What experiment result led to conclussion that reducing the number of layers of the decoder does not matter much?", "answers": [ { "answer": "Exp. 5.1", "type": "extractive" } ], "q_uid": "f9c5799091e7e35a8133eee4d95004e1b35aea00", "evidence": [ { "raw_evidence": [ "Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k.", "The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 
5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.", "FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (\u00b1X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder." ], "highlighted_evidence": [ "Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. ", "The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.", "FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (\u00b1X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder." ] } ] }, { "question": "How much is performance hurt when using too small amount of layers in encoder?", "answers": [ { "answer": "comparing to the results from reducing the number of layers in the decoder, the BLEU score was 69.93 which is less than 1% in case of test2016 and in case of test2017 it was less by 0.2 %. In terms of TER it had higher score by 0.7 in case of test2016 and 0.1 in case of test2017. ", "type": "abstractive" } ], "q_uid": "04012650a45d56c0013cf45fd9792f43916eaf83", "evidence": [ { "raw_evidence": [ "The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.", "FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (\u00b1X) value is the improvement over wmt18smtbest (x4). 
The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder." ], "highlighted_evidence": [ "Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.", "FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (\u00b1X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder." ] } ] } ], "1901.03866": [ { "question": "How much does HAS-QA improve over baselines?", "answers": [ { "answer": "For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. , For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score.", "type": "extractive" } ], "q_uid": "efe49829725cfe54de01405c76149a4fe4d18747", "evidence": [ { "raw_evidence": [ "1) HAS-QA outperforms traditional RC baselines with a large gap, such as GA, BiDAF, AQA listed in the first part. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. As RC task is just a special case of OpenQA task. Some experiments on standard SQuAD dataset(dev-set) BIBREF9 show that HAS-QA yields EM/F1:0.719/0.798, which is comparable with the best released single model Reinforced Mnemonic Reader BIBREF25 in the leaderboard (dev-set) EM/F1:0.721/0.816. Our performance is slightly worse because Reinforced Mnemonic Reader directly use the accurate answer span, while we use multiple distantly supervised answer spans. That may introduce noises in the setting of SQuAD, since only one span is accurate.", "2) HAS-QA outperforms recent OpenQA baselines, such as DrQA, R ${}^3$ and Shared-Norm listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score.", "FLOAT SELECTED: Table 2: Experimental results on OpenQA datasets QuasarT, TriviaQA and SearchQA. EM: Exact Match." ], "highlighted_evidence": [ "HAS-QA outperforms traditional RC baselines with a large gap, such as GA, BiDAF, AQA listed in the first part. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. As RC task is just a special case of OpenQA task. Some experiments on standard SQuAD dataset(dev-set) BIBREF9 show that HAS-QA yields EM/F1:0.719/0.798, which is comparable with the best released single model Reinforced Mnemonic Reader BIBREF25 in the leaderboard (dev-set) EM/F1:0.721/0.816. ", "HAS-QA outperforms recent OpenQA baselines, such as DrQA, R ${}^3$ and Shared-Norm listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score.", "FLOAT SELECTED: Table 2: Experimental results on OpenQA datasets QuasarT, TriviaQA and SearchQA. EM: Exact Match." ] } ] } ], "1606.00189": [ { "question": "Which dataset do they evaluate grammatical error correction on?", "answers": [ { "answer": "CoNLL 2014", "type": "extractive" } ], "q_uid": "a49832c89a2d7f95c1fe6132902d74e4e7a3f2d0", "evidence": [ { "raw_evidence": [ "We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system. The results of our experiments are described in Section SECREF23 . 
The evaluation is performed similar to the CoNLL 2014 shared task setting using the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19 , for evaluation. We perform statistical significance test using one-tailed sign test with bootstrap resampling on 100 samples.", "FLOAT SELECTED: Table 2: Results of our experiments with NNGLM and NNJM on the CoNLL 2014 test set (* indicates statistical significance with p < 0.01)" ], "highlighted_evidence": [ "The evaluation is performed similar to the CoNLL 2014 shared task setting using the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19 , for evaluation.", "FLOAT SELECTED: Table 2: Results of our experiments with NNGLM and NNJM on the CoNLL 2014 test set (* indicates statistical significance with p < 0.01)" ] } ] } ], "1605.07683": [ { "question": "How large is the Dialog State Tracking Dataset?", "answers": [ { "answer": "1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs", "type": "abstractive" } ], "q_uid": "a02696d4ab728ddd591f84a352df9375faf7d1b4", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (\u2217) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (\u2217) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words." ] } ] } ], "1711.00106": [ { "question": "How much is the gap between using the proposed objective and using only cross-entropy objective?", "answers": [ { "answer": "The mixed objective improves EM by 2.5% and F1 by 2.2%", "type": "abstractive" } ], "q_uid": "1f63ccc379f01ecdccaa02ed0912970610c84b72", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.", "The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.", "The contributions of each part of our model are shown in Table 2 ." 
] } ] } ], "1911.08976": [ { "question": "what are the three methods presented in the paper?", "answers": [ { "answer": "Optimized TF-IDF, iterated TF-IDF, BERT re-ranking.", "type": "abstractive" } ], "q_uid": "dac2591f19f5bbac3d4a7fa038ff7aa09f6f0d96", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: MAP scoring of new methods. The timings are in seconds for the whole dev-set, and the BERT Re-ranking figure includes the initial Iterated TF-IDF step." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: MAP scoring of new methods. The timings are in seconds for the whole dev-set, and the BERT Re-ranking figure includes the initial Iterated TF-IDF step." ] } ] } ], "1812.01704": [ { "question": "what datasets did the authors use?", "answers": [ { "answer": "Kaggle\nSubversive Kaggle\nWikipedia\nSubversive Wikipedia\nReddit\nSubversive Reddit ", "type": "abstractive" } ], "q_uid": "f62c78be58983ef1d77049738785ec7ab9f2a3ee", "evidence": [ { "raw_evidence": [ "We trained and tested our neural network with and without sentiment information, with and without subversion, and with each corpus three times to mitigate the randomness in training. In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 . It can be seen that sentiment information helps improve toxicity detection in all cases. The improvement is smaller when the text is clean. However, the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone, and the inclusion of sentiment information gives an important improvement in that case. Comparing the different corpora, it can be seen that the improvement is smallest in the Reddit dataset experiment, which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 .", "FLOAT SELECTED: Table 7: Accuracy of toxicity detection with and without sentiment" ], "highlighted_evidence": [ "In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 .", "FLOAT SELECTED: Table 7: Accuracy of toxicity detection with and without sentiment" ] } ] } ], "1712.03556": [ { "question": "How much performance improvement do they achieve on SQuAD?", "answers": [ { "answer": "Compared to baselines (Table 1), SAN shows an improvement of 1.096% on EM and 0.689% on F1. Compared to other published SQuAD results (Table 2), SAN is ranked second.", "type": "abstractive" } ], "q_uid": "39a450ac15688199575798e72a2cc016ef4316b5", "evidence": [ { "raw_evidence": [ "FLOAT SELECTED: Table 2: Test performance on SQuAD. Results are sorted by Test F1.", "Finally, we compare our results with other top models in Table 2 . Note that all the results in Table 2 are taken from the published papers. We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model BIBREF14 used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). 
We expect significant improvements if we add this to SAN in future work.", "The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module:", "FLOAT SELECTED: Table 1: Main results\u2014Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Test performance on SQuAD. Results are sorted by Test F1.", "We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity.", "The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module", "FLOAT SELECTED: Table 1: Main results\u2014Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics." ] } ] } ] }
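All records above follow one schema: a top-level JSON object keyed by arXiv paper ID, where each ID maps to a list of question records containing "answers" (each with an "answer" string and a "type" of abstractive or extractive), a "q_uid", and "evidence" entries whose "raw_evidence" and "highlighted_evidence" lists hold supporting spans, with the "FLOAT SELECTED:" prefix marking table captions used as evidence. The sketch below is a minimal, non-authoritative way to load and summarize such a file; it assumes the data is stored as well-formed JSON under the hypothetical filename qa_evidence.json (i.e., without the hard line wraps inside string values seen in this excerpt), and the printed summary is purely illustrative.

```python
import json

# Minimal sketch (assumed filename): load the paper-keyed QA records and print
# one summary line per question, using the field names shown in this document.
with open("qa_evidence.json", encoding="utf-8") as f:
    data = json.load(f)  # dict: paper_id -> list of question records

for paper_id, records in data.items():
    for rec in records:
        answer_types = sorted({a["type"] for a in rec.get("answers", [])})
        highlights = [span
                      for ev in rec.get("evidence", [])
                      for span in ev.get("highlighted_evidence", [])]
        # "FLOAT SELECTED:" prefixes mark table captions pulled in as evidence.
        n_tables = sum(span.startswith("FLOAT SELECTED:") for span in highlights)
        print(f"{paper_id}\t{rec['question']}\t"
              f"answer types: {answer_types}\t"
              f"highlighted spans: {len(highlights)} ({n_tables} table captions)")
```

Under this sketch, the 1911.08976 record above would report one abstractive answer and a single highlighted span, which is itself a table caption.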