question_id (string, 40 chars) | question (string, 4–171 chars) | answer (sequence) | evidence (sequence) |
---|---|---|---|
77f04cd553df691e8f4ecbe19da89bc32c7ac734 | Is there any example where geometric property is visible for context similarity between words? | [
"Yes"
] | [
[
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)."
]
] |
728a55c0f628f2133306b6bd88af00eb54017b12 | What geometric properties do embeddings display? | [
"Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters."
] | [
[
"The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)."
]
] |
d5498d16e8350c9785782b57b1e5a82212dbdaad | How accurate is model trained on text exclusively? | [
"Relative error is less than 5%"
] | [
[
"The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series."
]
] |
3e839783d8a4f2fe50ece4a9b476546f0842b193 | What was their result on Stance Sentiment Emotion Corpus? | [
"F1 score of 66.66%"
] | [
[
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.",
"We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis."
]
] |
2869d19e54fb554fcf1d6888e526135803bb7d75 | What performance did they obtain on the SemEval dataset? | [
"F1 score of 82.10%"
] | [
[
"We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis.",
"We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18."
]
] |
894c086a2cbfe64aa094c1edabbb1932a3d7c38a | What are the state-of-the-art systems? | [
"For sentiment analysis UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment and for emotion analysis MaxEnt, SVM, LSTM, BiLSTM and CNN"
] | [
[
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise."
]
] |
722e9b6f55971b4c48a60f7a9fe37372f5bf3742 | How is multi-tasking performed? | [
"The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks., Each of the shared representations is then fed to the primary attention mechanism"
] | [
[
"We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections."
]
] |
9c2f306044b3d1b3b7fdd05d1c046e887796dd7a | What are the datasets used for training? | [
"SemEval 2016 Task 6 BIBREF7, Stance Sentiment Emotion Corpus (SSEC) BIBREF15"
] | [
[
"We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively."
]
] |
3d99bc8ab2f36d4742e408f211bec154bc6696f7 | How many parameters does the model have? | [
"Unanswerable"
] | [
[]
] |
9219eef636ddb020b9d394868959325562410f83 | What is the previous state-of-the-art model? | [
"BIBREF7, BIBREF39, BIBREF37, LitisMind, Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN"
] | [
[
"Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.",
"We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise."
]
] |
ff83eea2df9976c1a01482818340871b17ad4f8c | What is the previous state-of-the-art performance? | [
"Unanswerable"
] | [
[]
] |
0ee20a3a343e1e251b74a804e9aa1393d17b46d6 | How can the classifier facilitate the annotation task for human annotators? | [
"quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets"
] | [
[
"In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class)."
]
] |
f0e8f045e2e33a2129e67fb32f356242db1dc280 | What recommendations are made to improve the performance in future? | [
"applying reasoning BIBREF36 or irony detection methods BIBREF37"
] | [
[
"From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37."
]
] |
b6c235d5986914b380c084d9535a7b01310c0278 | What type of errors do the classifiers use? | [
"correct class can be directly inferred from the text content easily, even without background knowledge, correct class can be inferred from the text content, given that event-specific knowledge is provided, orrect class can be inferred from the text content if the text is interpreted correctly"
] | [
[
"In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). We can identify three main cases for which the model produces an error:",
"the correct class can be directly inferred from the text content easily, even without background knowledge",
"the correct class can be inferred from the text content, given that event-specific knowledge is provided",
"the correct class can be inferred from the text content if the text is interpreted correctly"
]
] |
e9b1e8e575809f7b80b1125305cfa76ae4f5bdfb | What neural classifiers are used? | [
" convolutional neural network (CNN) BIBREF29"
] | [
[
"As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a relu activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4."
]
] |
1e4450e23ec81fdd59821055f998fd9db0398b16 | What is the hashtags does the hashtag-based baseline use? | [
"Unanswerable"
] | [
[]
] |
02ce4c288df14a90a210cb39973c6ac0fb4cec59 | What languages are included in the dataset? | [
"English"
] | [
[
"For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016.",
"BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility to the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence the it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates."
]
] |
60726d9792d301d5ff8e37fbb31d5104a520dea3 | What dataset is used for this study? | [
"MH17 Twitter dataset"
] | [
[
"For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016."
]
] |
e39d90b8d959697d9780eddce3a343e60543be65 | What proxies for data annotation were used in previous datasets? | [
"widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet, Natural Language Processing (NLP) models can be used to automatically label text content"
] | [
[
"Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11.",
"In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter."
]
] |
c6e63e3b807474e29bfe32542321d015009e7148 | What are the supported natural commands? | [
"Set/Change Destination, Set/Change Route, Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, Other "
] | [
[
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
]
] |
4ef2fd79d598accc54c084f0cca8ad7c1b3f892a | What is the size of their collected dataset? | [
"3347 unique utterances "
] | [
[
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
]
] |
40e3639b79e2051bf6bce300d06548e7793daee0 | Did they compare against other systems? | [
"Yes"
] | [
[
"The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer."
]
] |
8383e52b2adbbfb533fbe8179bc8dae11b3ed6da | What intents does the paper explore? | [
"Set/Change Destination, Set/Change Route, Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, Other "
] | [
[
"Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators."
]
] |
5f7850254b723adf891930c6faced1058b99bd57 | What kind of features are used by the HMM models, and how interpretable are those? | [
"A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features. \nThe interpretability of the model is shown in Figure 2. "
] | [
[
"We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data."
]
] |
4d05a264b2353cff310edb480a917d686353b007 | What kind of information do the HMMs learn that the LSTMs don't? | [
"The HMM can identify punctuation or pick up on vowels."
] | [
[
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data."
]
] |
7cdce4222cea6955b656c1a3df1129bb8119e2d0 | Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information? | [
"decision trees to predict individual hidden state dimensions, apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters"
] | [
[
"Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).",
"We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data."
]
] |
6ea63327ffbab2fc734dd5c2414e59d3acc56ea5 | How large is the gap in performance between the HMMs and the LSTMs? | [
"With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower."
] | [
[]
] |
50690b72dc61748e0159739a9a0243814d37f360 | Do they report results only on English data? | [
"Yes"
] | [
[
"In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 .",
"Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.",
"Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.",
"Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:",
"@LoveAndLonging ...how is that example \"sexism\"?",
"@amberhasalamb ...in what way?"
]
] |
8266642303fbc6a1138b4e23ee1d859a6f584fbb | Which publicly available datasets are used? | [
"BIBREF3, BIBREF4, BIBREF9"
] | [
[
"In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.",
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
]
] |
3685bf2409b23c47bfd681989fb4a763bcab6be2 | What embedding algorithm and dimension size are used? | [
"300 Dimensional Glove"
] | [
[
"We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search."
]
] |
19225e460fff2ac3aebc7fe31fcb4648eda813fb | What data are the embeddings trained on? | [
"Common Crawl "
] | [
[
"We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search."
]
] |
f37026f518ab56c859f6b80b646d7f19a7b684fa | how much was the parameter difference between their model and previous methods? | [
"our model requires 100k parameters , while BIBREF8 requires 250k parameters"
] | [
[
"On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
]
] |
1231934db6adda87c1b15e571468b8e9d225d6fe | how many parameters did their model use? | [
"Excluding the embedding weights, our model requires 100k parameters"
] | [
[
"On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters."
]
] |
81303f605da57ddd836b7c121490b0ebb47c60e7 | which datasets were used? | [
"Sexist/Racist (SR) data set, HATE dataset, HAR"
] | [
[
"Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ."
]
] |
a3f108f60143d13fe38d911b1cc3b17bdffde3bd | what was their system's f1 performance? | [
"Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively."
] | [
[
"The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1."
]
] |
118ff1d7000ea0d12289d46430154cc15601fd8e | what was the baseline? | [
"logistic regression"
] | [
[
"All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets."
]
] |
102a0439739428aac80ac11795e73ce751b93ea1 | What datasets were used? | [
"KFTT BIBREF12 and BTEC BIBREF13"
] | [
[
"Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table TABREF19 ."
]
] |
d9c26c1bfb3830c9f3dbcccf4c8ecbcd3cb54404 | What language pairs did they experiment with? | [
"English-Japanese"
] | [
[
"We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training."
]
] |
04f72eddb1fc73dd11135a80ca1cf31e9db75578 | How much more coverage is in the new dataset? | [
"278 more annotations"
] | [
[
"The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset."
]
] |
f74eaee72cbd727a6dffa1600cdf1208672d713e | How was coverage measured? | [
"QA pairs per predicate"
] | [
[
"The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others."
]
] |
068dbcc117c93fa84c002d3424bafb071575f431 | How was quality measured? | [
"Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations."
] | [
[
"Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)",
"To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency.",
"Dataset Quality Analysis ::: Dataset Assessment and Comparison",
"We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.",
"Dataset Quality Analysis ::: Agreement with PropBank Data",
"It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol."
]
] |
96526a14820b7debfd6f7c5beeade0a854b93d1a | How was the corpus obtained? | [
" trained annotators BIBREF4, crowdsourcing BIBREF5 "
] | [
[
"Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s."
]
] |
32ba4d2d15194e889cbc9aa1d21ff1aa6fa27679 | How are workers trained? | [
"extensive personal feedback"
] | [
[
"Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations."
]
] |
78c010db6413202b4063dc3fb6e3cc59ec16e7e3 | What is different in the improved annotation protocol? | [
"a trained worker consolidates existing annotations "
] | [
[
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix."
]
] |
a69af5937cab861977989efd72ad1677484b5c8c | How was the previous dataset annotated? | [
"the annotation machinery of BIBREF5"
] | [
[
"We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix."
]
] |
8847f2c676193189a0f9c0fe3b86b05b5657b76a | How big is the dataset? | [
"1593 annotations"
] | [
[
"The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset."
]
] |
05196588320dfb0b9d9be7d64864c43968d329bc | Do the other multilingual baselines make use of the same amount of training data? | [
"Unanswerable"
] | [
[]
] |
e930f153c89dfe9cff75b7b15e45cd3d700f4c72 | How big is the impact of training data size on the performance of the multilingual encoder? | [
"Unanswerable"
] | [
[]
] |
545ff2f76913866304bfacdb4cc10d31dbbd2f37 | What data were they used to train the multilingual encoder? | [
"WMT 2014 En-Fr parallel corpus"
] | [
[
"For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below."
]
] |
cf93a209c8001ffb4ef505d306b6ced5936c6b63 | From when are many VQA datasets collected? | [
"late 2014"
] | [
[
"VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used."
]
] |
fb5ce11bfd74e9d7c322444b006a27f2ff32a0cf | What is task success rate achieved? | [
"96-97.6% using the objects color or shape and 79% using shape alone"
] | [
[
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified."
]
] |
1e2ffa065b640e912d6ed299ff713a12195e12c4 | What simulations are performed by the authors to validate their approach? | [
"a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command"
] | [
[
"We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity."
]
] |
28b2a20779a78a34fb228333dc4b93fd572fda15 | Does proposed end-to-end approach learn in reinforcement or supervised learning manner? | [
"supervised learning"
] | [
[
"To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.",
"To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified."
]
] |
b367b823c5db4543ac421d0057b02f62ea16bf9f | Are synonymous relation taken into account in the Japanese-Vietnamese task? | [
"Yes"
] | [
[
"Due to the fact that Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words of Japanese texts in our Japanese$\\rightarrow $Vietnamese translation system. After using Kytea, Japanese texts are applied LSW algorithm to replace OOV words by their synonyms. We choose 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 between (1) and (3). We observed an improvement of +0.7 BLEU points between (3) and (5) without using data augmentation described in BIBREF21."
]
] |
84737d871bde8058d8033e496179f7daec31c2d3 | Is the supervised morphological learner tested on Japanese? | [
"No"
] | [
[
"We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15."
]
] |
7b3d207ed47ae58286029b62fd0c160a0145e73d | What is the dataset that is used in the paper? | [
"Unanswerable"
] | [
[]
] |
d58c264068d8ca04bb98038b4894560b571bab3e | What is the performance of the models discussed in the paper? | [
"Unanswerable"
] | [
[]
] |
f80d89fb905b3e7e17af1fe179b6f441405ad79b | Does the paper consider the use of perplexity in order to identify text anomalies? | [
"No"
] | [
[]
] |
5f6fac08c97c85d5f4f4d56d8b0691292696f8e6 | Does the paper report a baseline for the task? | [
"No"
] | [
[]
] |
6adec34d86095643e6b89cda5c7cd94f64381acc | What non-contextual properties do they refer to? | [
"These features are derived directly from the word and capture the general tendency of a word being echoed in explanations."
] | [
[
"Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations."
]
] |
62ba1fefc1eb826fe0cbac092d37a3e2098967e9 | What is the baseline? | [
"random method , LSTM "
] | [
[
"Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).",
"To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details in the experiment."
]
] |
93ac147765ee2573923f68aa47741d4bcbf88fa8 | What are their proposed features? | [
"Non-contextual properties of a word, Word usage in an OP or PC (two groups), How a word connects an OP and PC., General OP/PC properties"
] | [
[
"Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):",
"[itemsep=0pt,leftmargin=*,topsep=0pt]",
"Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.",
"Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.",
"How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.",
"General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.",
"Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:"
]
] |
14c0328e8ec6360a913b8ecb3e50cb27650ff768 | What are the overall baseline results on this new task? | [
"all of our models outperform the random baseline by a wide margin, he best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116)"
] | [
[
"Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).",
"Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem."
]
] |
6073fa9050da76eeecd8aa3ccc7ecb16a238d83f | What metrics are used in evaluation of this task? | [
"F1 score"
] | [
[
"Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances)."
]
] |
eacd7e540cc34cb45770fcba463f4bf968681d59 | Do the authors provide any explanation for the intriguing patterns of words being echoed? | [
"No"
] | [
[
"Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations."
]
] |
1124804c3702499b78cf0678bab5867e81284b6c | What features are proposed? | [
"Non-contextual properties of a word, Word usage in an OP or PC (two groups), How a word connects an OP and PC, General OP/PC properties"
] | [
[
"Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):",
"[itemsep=0pt,leftmargin=*,topsep=0pt]",
"Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.",
"Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.",
"How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.",
"General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.",
"Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:",
"Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords."
]
] |
2b78052314cb730824836ea69bc968df7964b4e4 | Which datasets are used to train this model? | [
"SQUAD"
] | [
[
"We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\\mathcal {S}\\ $ are used for training ( ${\\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\\mathcal {S}}^{te}$ )."
]
] |
11d2f0d913d6e5f5695f8febe2b03c6c125b667c | How is the performance of this system measured? | [
"using the BLEU score as a quantitative metric and human evaluation for quality"
] | [
[
"We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.",
"Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation."
]
] |
1c85a25ec9d0c4f6622539f48346e23ff666cd5f | How many questions per image, on average, are available in the dataset? | [
"5 questions per image"
] | [
[
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual."
]
] |
37d829cd42db9ae3d56ab30953a7cf9eda050841 | Is the underlying machine learning system similar to image captioning ML systems? | [
"Yes"
] | [
[
"Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems."
]
] |
4b41f399b193d259fd6e24f3c6e95dc5cae926dd | How big a dataset is used for training this system? | [
"For the question generation model 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues."
] | [
[
"We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.",
"We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters."
]
] |
76377e5bb7d0a374b0aefc54697ac9cd89d2eba8 | How do they obtain word lattices from words? | [
"By considering words as vertices and generating directed edges between neighboring words within a sentence"
] | [
[
"Word Lattice",
"As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 ."
]
] |
85aa125b3a15bbb6f99f91656ca2763e8fbdb0ff | Which metrics do they use to evaluate matching? | [
"Precision@1, Mean Average Precision, Mean Reciprocal Rank"
] | [
[
"For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used."
]
] |
4b128f9e94d242a8e926bdcb240ece279d725729 | Which dataset(s) do they evaluate on? | [
"DBQA, KBRE"
] | [
[
"Datasets",
"We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .",
"DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. In average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.",
"KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve."
]
] |
f8f13576115992b0abb897ced185a4f9d35c5de9 | What languages do they look at? | [
"Unanswerable"
] | [
[]
] |
1fdcc650c65c11908f6bde67d5052087245f3dde | Do they report results only on English data? | [
"Unanswerable"
] | [
[]
] |
abad9beb7295d809d7e5e1407cbf673c9ffffd19 | Do they propose any further additions that could be made to improve generalisation to unseen speakers? | [
"Yes"
] | [
[
"There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to more a fine-grained place of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling the therapist that the data being collected is sub-optimal."
]
] |
265c9b733e4dfffb76acfbade4c0c9b14d3ccde1 | What are the characteristics of the dataset? | [
"synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male), data was aligned at the phone-level, 121fps with a 135 field of view, single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames)"
] | [
[
"We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsaggital view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances."
]
] |
0f928732f226185c76ad5960402e9342c0619310 | What type of models are used for classification? | [
"feedforward neural networks (DNNs), convolutional neural networks (CNNs)"
] | [
[
"The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept."
]
] |
11c5b12e675cfd8d1113724f019d8476275bd700 | Do they compare to previous work? | [
"No"
] | [
[]
] |
d24acc567ebaec1efee52826b7eaadddc0a89e8b | How many instances does their dataset have? | [
"10700"
] | [
[
"For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, to at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of INLINEFORM0 10700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples."
]
] |
2d62a75af409835e4c123a615b06235a352a67fe | What model do they use to classify phonetic segments? | [
"feedforward neural networks, convolutional neural networks"
] | [
[
"The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept."
]
] |
fffbd6cafef96eeeee2f9fa5d8ab2b325ec528e6 | How many speakers do they have in the dataset? | [
"58"
] | [
[
"We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsaggital view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances."
]
] |
c034f38a570d40360c3551a6469486044585c63c | How much better is the proposed method than the baselines perplexity-wise? | [
"Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set."
] | [
[
"Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$)."
]
] |
9cbea686732b5b85f77868ca47d2f93cf34516ed | How does the multi-turn dialog system learn? | [
"we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution"
] | [
[
"Usually the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be modeled by an RNN language model conditioned on $\\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\\mathbf {c}_t$ and $\\mathbf {e}$, and how they are combined in the decoding part."
]
] |
6aee16c4f319a190c2a451c1c099b66162299a28 | How is human evaluation performed? | [
"(1) grammatical correctness, (2) contextual coherence, (3) emotional appropriateness"
] | [
[
"For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral."
]
] |
4d4b9ff2da51b9e0255e5fab0b41dfe49a0d9012 | Are any metrics other than perplexity measured? | [
"No"
] | [
[
"The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work."
]
] |
180047e1ccfc7c98f093b8d1e1d0479a4cca99cc | What two baseline models are used? | [
" sequence-to-sequence model (denoted as S2S), HRAN"
] | [
[
"We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one."
]
] |
fb3687ea05d38b5e65fdbbbd1572eacd82f56c0b | Do they evaluate on relation extraction? | [
"No"
] | [
[]
] |
b5d6357d3a9e3d5fdf9b344ae96cddd11a407875 | What is the baseline model for the agreement-based mode? | [
"PCFGLA-based parser, viz. Berkeley parser BIBREF5, minimal span-based neural parser BIBREF6"
] | [
[
"Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts."
]
] |
f33a21c6a9c75f0479ffdbb006c40e0739134716 | Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages? | [
"syntax-based system may generate correct syntactic analyses for partial grammatical fragments"
] | [
[
"While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent."
]
] |
8a1d4ed00d31c1f1cb05bc9d5e4f05fe87b0e5a4 | Who manually annotated the semantic roles for the set of learner texts? | [
"Authors"
] | [
[
"In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors."
]
] |
17f5f4a5d943c91d46552fb75940b67a72144697 | By how much do they outperform existing state-of-the-art VQA models? | [
"the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X"
] | [
[
"Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding."
]
] |
83f22814aaed9b5f882168e22a3eac8f5fda3882 | How do they measure the correlation between manual groundings and model generated ones? | [
"rank-correlation BIBREF25"
] | [
[
"We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0"
]
] |
ed11b4ff7ca72dd80a792a6028e16ba20fccff66 | How do they obtain region descriptions and object annotations? | [
"they are available in the Visual Genome dataset"
] | [
[
"In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training."
]
] |
a48c6d968707bd79469527493a72bfb4ef217007 | Which training dataset allowed for the best generalization to benchmark sets? | [
"MultiNLI"
] | [
[]
] |
b69897deb5fb80bf2adb44f9cbf6280d747271b3 | Which model generalized the best? | [
"BERT"
] | [
[
"Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 ."
]
] |
ad1f230f10235413d1fe501e414358245b415476 | Which models were compared? | [
"BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT"
] | [
[
"For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks make ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments."
]
] |
0a521541b9e2b5c6d64fb08eb318778eba8ac9f7 | Which datasets were used? | [
"SNLI, MultiNLI and SICK"
] | [
[
"We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.",
"The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .",
"The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.",
"SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 ."
]
] |
11e376f98df42f487298ec747c32d485c845b5cd | What was the baseline? | [
"Unanswerable"
] | [
[]
] |