Dataset schema: question_id (string, length 40), question (string, length 4 to 171), answer (sequence of strings), evidence (sequence of string lists).
284ea817fd79bc10b7a82c88d353e8f8a9d7e93c
Is the data all in Vietnamese?
[ "Yes" ]
[ [ "In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.", "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16." ] ]
c0122190119027dc3eb51f0d4b4483d2dbedc696
What classifier do they use?
[ "Stacking method, LSTMCNN, SARNN, simple LSTM bidirectional model, TextCNN" ]
[ [ "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.", "The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.", "The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.", "The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.", "The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.", "The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.", "Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class." ] ]
1ed6acb88954f31b78d2821bb230b722374792ed
What is private dashboard?
[ "Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set)." ]
[ [ "For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." ] ]
5a33ec23b4341584a8079db459d89a4e23420494
What is public dashboard?
[ "Public dashboard where competitors can see their results during competition, on part of the test set (public test set)." ]
[ [ "For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." ] ]
1b9119813ea637974d21862a8ace83bc1acbab8e
What dataset do they use?
[ "They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper)." ]
[ [ "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.", "The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes." ] ]
8abb96b2450ebccfcc5c98772cec3d86cd0f53e0
Do the authors report results only on English data?
[ "Yes" ]
[ [] ]
f52ec4d68de91dba66668f0affc198706949ff90
What other interesting correlations are observed?
[ "Women-Yoga" ]
[ [ "We observe interesting hidden correlation in data. Fig. FIGREF24 has Topic 2 as selected topic. Topic 2 contains top-4 co-occurring keywords \"vegan\", \"yoga\", \"job\", \"every_woman\" having the highest term frequency. We can infer different things from the topic that \"women usually practice yoga more than men\", \"women teach yoga and take it as a job\", \"Yogi follow vegan diet\". We would say there are noticeable correlation in data i.e. `Yoga-Veganism', `Women-Yoga'." ] ]
225a567eeb2698a9d3f1024a8b270313a6d15f82
what were the baselines?
[ "RNN model, CNN model , RNN-CNN model, attn1511 model, Deep Averaging Network model, avg mean of word embeddings in the sentence with projection matrix" ]
[ [ "We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 ." ] ]
35b10e0dc2cb4a1a31d5692032dc3fbda933bf7d
what is the state of the art for ranking mc test answers?
[ "ensemble of hand-crafted syntactic and frame-semantic features BIBREF16" ]
[ [ "For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized)." ] ]
f5eac66c08ebec507c582a2445e99317a83e9ebe
what is the size of the introduced dataset?
[ "Unanswerable" ]
[ [] ]
62613aca3d7c7d534c9f6d8cb91ff55626bb8695
what datasets did they use?
[ "Argus Dataset, AI2-8grade/CK12 Dataset, MCTest Dataset" ]
[ [ "Argus Dataset", "AI2-8grade/CK12 Dataset", "We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive.", "MCTest Dataset", "The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers." ] ]
6e4505609a280acc45b0a821755afb1b3b518ffd
What evaluation metric is used?
[ "The BLEU metric " ]
[ [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation." ] ]
9bd938859a8b063903314a79f09409af8801c973
What datasets are used?
[ "WMT14 En-Fr and En-De datasets, IWSLT De-En and En-Vi datasets" ]
[ [ "WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.", "IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$." ] ]
68ba5bf18f351e8c83fae7b444cc50bef7437f13
What are three main machine translation tasks?
[ "De-En, En-Fr and En-Vi translation tasks" ]
[ [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation." ] ]
f6a1125c5621a2f32c9bcdd188dff14efa096083
How big is improvement in performance over Transformers?
[ "2.2 BLEU gains" ]
[ [ "As shown in Table TABREF24, MUSE outperforms all previously models on En-De and En-Fr translation, including both state-of-the-art models of stand alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that either self-attention or convolution alone is not enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over them both on En-De and En-Fr.", "Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation." ] ]
282aa4e160abfa7569de7d99b8d45cabee486ba4
How do they determine the opinion summary?
[ "the weighted sum of the new opinion representations, according to their associations with the current aspect representation" ]
[ [ "As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step." ] ]
ecfb2e75eb9a8eba8f640a039484874fa0d2fceb
Do they explore how useful is the detection history and opinion summary?
[ "Yes" ]
[ [ "Ablation Study", "To further investigate the efficacy of the key components in our framework, namely, THA and STN, we perform ablation study as shown in the second block of Table TABREF39 . The results show that each of THA and STN is helpful for improving the performance, and the contribution of STN is slightly larger than THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs not bad, it is still less competitive compared with the strongest baseline (i.e., CMLA), suggesting that only using attention mechanism to distill opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we get about 1% absolute gains on each dataset, and then the performance is comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed." ] ]
a6950c22c7919f86b16384facc97f2cf66e5941d
Which dataset(s) do they use to train the model?
[ "INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain." ]
[ [ "To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer." ] ]
54be3541cfff6574dba067f1e581444537a417db
By how much do they outperform state-of-the-art methods?
[ "Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively." ]
[ [ "As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.", "Our framework can outperform RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% inferior than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, thus RNCRF's performance degradation is probably due to the errors from the dependency parser when processing such informal texts." ] ]
221e9189a9d2431902d8ea833f486a38a76cbd8e
What is the average number of turns per dialog?
[ "The average number of utterances per dialog is about 23 " ]
[ [ "Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors." ] ]
a276d5931b989e0a33f2a0bc581456cca25658d9
What baseline models are offered?
[ "3-gram and 4-gram conditional language model, Convolution, LSTM models BIBREF27 with and without attention BIBREF28, Transformer, GPT-2" ]
[ [ "n-gram: We consider 3-gram and 4-gram conditional language model baseline with interpolation. We use random grid search for the best coefficients for the interpolated model.", "Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.", "LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.", "Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\\beta _{1} = 0.85$, $\\beta _{2} = 0.997$) and dropout probability set to 0.2.", "GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters." ] ]
c21d26130b521c9596a1edd7b9ef3fe80a499f1e
Which six domains are covered in the dataset?
[ "ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations" ]
[ [ "To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user\" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs\". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs\". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers." ] ]
ec8043290356fcb871c2f5d752a9fe93a94c2f71
What other natural processing tasks authors think could be studied by using word embeddings?
[ "general classification tasks, use of the methodology in other networked systems, a network could be enriched with embeddings obtained from graph embeddings techniques" ]
[ [ "Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links." ] ]
728c2fb445173fe117154a2a5482079caa42fe24
What is the reason that traditional co-occurrence networks fail in establishing links between similar words whenever they appear distant in the text?
[ "long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach" ]
[ [ "In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.", "While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges." ] ]
23d32666dfc29ed124f3aa4109e2527efa225fbc
Do the use word embeddings alone or they replace some previous features of the model with word embeddings?
[ "They use it as addition to previous model - they add new edge between words if word embeddings are similar." ]
[ [ "While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges." ] ]
076928bebde4dffcb404be216846d9d680310622
On what model architectures are previous co-occurence networks based?
[ "in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window, connects only adjacent words in the so called word adjacency networks" ]
[ [ "In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks." ] ]
f33236ebd6f5a9ccb9b9dbf05ac17c3724f93f91
Is model explanation output evaluated, what metric was used?
[ "balanced accuracy, i.e., the average of the three accuracies on each class" ]
[ [ "Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class." ] ]
66bf0d61ffc321f15e7347aaed191223f4ce4b4a
How many annotators are used to write natural language explanations to SNLI-VE-2.0?
[ "2,060 workers" ]
[ [ "We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location." ] ]
5dfa59c116e0ceb428efd99bab19731aa3df4bbd
How many natural language explanations are human-written?
[ "Totally 6980 validation and test image-sentence pairs have been corrected." ]
[ [ "e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40." ] ]
0c557b408183630d1c6c325b5fb9ff1573661290
How much is performance difference of existing model between original and corrected corpus?
[ "73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set" ]
[ [ "The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant." ] ]
a08b5018943d4428f067c08077bfff1af3de9703
What is the class with highest error rate in SNLI-VE?
[ "neutral class" ]
[ [ "Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes." ] ]
9447ec36e397853c04dcb8f67492ca9f944dbd4b
What is the dataset used as input to the Word2Vec algorithm?
[ "Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words" ]
[ [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.", "The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation\" linked to certain words, it was also decided to replace every number present in the text with the particular $\\langle NUM \\rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words." ] ]
12c6ca435f4fcd4ad5ea5c0d76d6ebb9d0be9177
Are the word embeddings tested on a NLP task?
[ "Yes" ]
[ [ "To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\")." ] ]
32c149574edf07b1a96d7f6bc49b95081de1abd2
Are the word embeddings evaluated?
[ "Yes" ]
[ [ "Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).", "As it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more subject to the metric used, perhaps due to a stabilization not yet reached for the few training epochs." ] ]
3de27c81af3030eb2d9de1df5ec9bfacdef281a9
How big is dataset used to train Word2Vec for the Italian Language?
[ "$421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences" ]
[ [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences." ] ]
cc680cb8f45aeece10823a3f8778cf215ccc8af0
How does different parameter settings impact the performance and semantic capacity of resulting model?
[ "number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others" ]
[ [ "In this work we have analysed the Word2Vec model for Italian Language obtaining a substantial increase in performance respect to other two models in the literature (and despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different phase of data pre-processing, very carefully excuted in performing a complete cleaning of the text and above all in substituting the numerical values with a single particular token. We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others." ] ]
fab4ec639a0ea1e07c547cdef1837c774ee1adb8
Are the semantic analysis findings for Italian language similar to English language version?
[ "Unanswerable" ]
[ [] ]
9190c56006ba84bf41246a32a3981d38adaf422c
What dataset is used for training Word2Vec in Italian language?
[ "extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila)" ]
[ [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences." ] ]
7aab78e90ba1336950a2b0534cc0cb214b96b4fd
How are the auxiliary signals from the morphology table incorporated in the decoder?
[ "an additional morphology table including target-side affixes., We inject the decoder with morphological properties of the target language." ]
[ [ "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section \"Proposed Architecture\" provides more details on our models." ] ]
b7fe91e71da8f4dc11e799b3bd408d253230e8c6
What type of morphological information is contained in the "morphology table"?
[ "target-side affixes" ]
[ [ "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section \"Proposed Architecture\" provides more details on our models." ] ]
16fa6896cf4597154363a6c9a98deb49fffef15f
Do they report results only on English data?
[ "Yes" ]
[ [ "We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets." ] ]
0f60864503ecfd5b048258e21d548ab5e5e81772
Do the authors mention any confounds to their study?
[ "No" ]
[ [ "There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters." ] ]
fe578842021ccfc295209a28cf2275ca18f8d155
Which machine learning models are used?
[ "RNNs, CNNs, Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel" ]
[ [ "Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and td-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 ." ] ]
00ef9cc1d1d60f875969094bb246be529373cb1d
What methodology is used to compensate for limited labelled data?
[ "Influential tweeters ( who they define as individuals certain to have a classifiable sentiment regarding the topic at hand) is used to label tweets in bulk in the absence of manually-labeled tweets." ]
[ [ "The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets." ] ]
279b633b90fa2fd69e84726090fadb42ebdf4c02
Which five natural disasters were examined?
[ "the East Coast Bomb Cyclone, the Mendocino, California wildfires, Hurricane Florence, Hurricane Michael, the California Camp Fires" ]
[ [ "The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets." ] ]
0106bd9d54e2f343cc5f30bb09a5dbdd171e964b
Which social media platform is explored?
[ "twitter " ]
[ [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets." ] ]
e015d033d4ee1c83fe6f192d3310fb820354a553
What datasets did they use?
[ "BIBREF8 a refined collection of tweets gathered from twitter" ]
[ [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets." ] ]
8a871b136ccef78391922377f89491c923a77730
What are the baseline state of the art models?
[ "Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention" ]
[ [] ]
acd05f31e25856b9986daa1651843b8dc92c2d99
What is the size of the dataset?
[ " 9,892 stories of sexual harassment incidents" ]
[ [ "We obtained 9,892 stories of sexual harassment incidents that was reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser\", “time\", “location\", “trigger\"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below." ] ]
8c78b21ec966a5e8405e8b9d3d6e7099e95ea5fb
What model did they use?
[ "joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM)" ]
[ [ "2. We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6." ] ]
af60462881b2d723adeb4acb5fbc07ea27b6bde2
What patterns were discovered from the stories?
[ "we demonstrate that harassment occurred more frequently during the night time than the day time, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) , We also found that the majority of young perpetrators engaged in harassment behaviors on the streets, we found that adult perpetrators of sexual harassment are more likely to act alone, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location , commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers." ]
[ [ "We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives.", "Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of the correlation is tested by chi-square independence with p value less than 0.05. Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, the young harassers often engage in harassment activities as groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations, mean that interventions should be responsive to these factors. For example, increasing the security measures on transit at key times and locations.", "In addition, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in behaviors of verbal harassment, rather than physical harassment as compared to adults. It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators. In contrast, commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective on the open space of the street." ] ]
879bec20c0fdfda952444018e9435f91e34d8788
Did they use a crowdsourcing platform?
[ "Unanswerable" ]
[ [] ]
3c378074111a6cc7319c0db0aced5752c30bfffb
Does the performance increase using their method?
[ "The multi-task model outperforms the single-task model at all data sizes, but none have an overall benefit from the open vocabulary system" ]
[ [ "In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data.", "Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of." ] ]
b464bc48f176a5945e54051e3ffaea9a6ad886d7
What tasks are they experimenting with in this paper?
[ "Slot filling, we consider the actions that a user might perform via apps on their phone, The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant" ]
[ [ "Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task.", "As candidate tasks, we consider the actions that a user might perform via apps on their phone. Typically, a separate slot-filling model would be trained for each app. For example, one model understands queries about classified ads for cars BIBREF1 and another model handles queries about the weather BIBREF2 . As the number of apps increases, this approach becomes impractical due to the burden of collecting and labeling the training data for each model. In addition, using independent models for each task has high storage costs for mobile devices.", "Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were to not include any other potential slots in the sentence but this instruction was not always followed by the workers." ] ]
3b40799f25dbd98bba5b526e0a1d0d0bb51173e0
What is the size of the open vocabulary?
[ "Unanswerable" ]
[ [] ]
3c16d4cf5dc23223980d9c0f924cb9e4e6943f13
How do they select answer candidates for their QA task?
[ "AMS method." ]
[ [ "Each training sample is generated by three steps: align, mask and select, which we call as AMS method. Each sample in the dataset consists of a question and several candidate answers, which has the same form as the CommonsenseQA dataset. An example of constructing one training sample by masking concept $_2$ is shown in Table 2 ." ] ]
4c822bbb06141433d04bbc472f08c48bc8378865
How do they extract causality from text?
[ "They identify documents that contain the unigrams 'caused', 'causing', or 'causes'" ]
[ [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ] ]
1baf87437b70cc0375b8b7dc2cfc2830279bc8b5
What is the source of the "control" corpus?
[ "Randomly selected from a Twitter dump, temporally matched to causal documents" ]
[ [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ] ]
0b31eb5bb111770a3aaf8a3931d8613e578e07a8
What are the selection criteria for "causal statements"?
[ "Presence of only the exact unigrams 'caused', 'causing', or 'causes'" ]
[ [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ] ]
7348e781b2c3755b33df33f4f0cab4b94fcbeb9b
Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora?
[ "Only automatic methods" ]
[ [ "The rest of this paper is organized as follows: In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. \"Results\" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. \"Discussion\" ." ] ]
f68bd65b5251f86e1ed89f0c858a8bb2a02b233a
how do they collect the comparable corpus?
[ "Randomly from a Twitter dump" ]
[ [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ] ]
e111925a82bad50f8e83da274988b9bea8b90005
How do they collect the control corpus?
[ "Randomly from Twitter" ]
[ [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ] ]
ba48c095c496d01c7717eaa271470c3406bf2d7c
What languages do they experiment with?
[ "Chinese" ]
[ [ "In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset." ] ]
42a61773aa494f7b12838f71a949034c12084de1
What are the baselines?
[ "MemN2N BIBREF12, Attentive and Impatient Readers BIBREF6" ]
[ [ "We compare our model with two sets of baselines:", "MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question.", "Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word." ] ]
48c3e61b2ed7b3f97706e2a522172bf9b51ec467
What was the inter-annotator agreement?
[ "correctness of all the question answer pairs are verified by at least two annotators" ]
[ [ "The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators." ] ]
61fba3ab10f7b6906e27b028fb1d42ec601c3fb8
Did they use a crowdsourcing platform?
[ "Unanswerable" ]
[ [] ]
80de3baf97a55ea33e0fe0cafa6f6221ba347d0a
Are resolution mode variables hand crafted?
[ "No" ]
[ [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:", "$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .", "$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.", "$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." ] ]
f5707610dc8ae2a3dc23aec63d4afa4b40b7ec1e
What are resolution mode variables?
[ "Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved." ]
[ [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:", "$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .", "$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.", "$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." ] ]
e76139c63da0f861c097466983fbe0c94d1d9810
Is the model presented in the paper state of the art?
[ "No, supervised models perform better for this task." ]
[ [ "To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ." ] ]
b8b588ca1e876b3094ae561a875dd949c8965b2e
What problems are found with the evaluation scheme?
[ "no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue" ]
[ [ "From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue." ] ]
2ec640e6b4f1ebc158d13ee6589778b4c08a04c8
How is the data annotated?
[ "Unanswerable" ]
[ [] ]
ab0bb4d0a9796416d3d7ceba0ba9ab50c964e9d6
What collection steps do they mention?
[ "Unanswerable" ]
[ [] ]
0460019eb2186aef835f7852fc445b037bd43bb7
How many intents were classified?
[ "two" ]
[ [ "In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance." ] ]
96c09ece36a992762860cde4c110f1653c110d96
What was the result of the highest performing system?
[ "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2" ]
[ [ "There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper.", "Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2." ] ]
a9cc4b17063711c8606b8fc1c5eaf057b317a0c9
What metrics are used in the evaluation?
[ "For task 1, we use F1-score, Task completion ratio, User satisfaction degree, Response fluency, Number of dialogue turns, Guidance ability for out of scope input" ]
[ [ "It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric.", "We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following.", "Task completion ratio: The number of completed tasks divided by the number of total tasks.", "User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.", "Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.", "Number of dialogue turns: The number of utterances in a task-completed dialogue.", "Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide." ] ]
6ead576ee5813164684a8cdda36e6a8c180455d9
How do they measure the quality of summaries?
[ "Rouge-L, Bleu-1" ]
[ [ "Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 ." ] ]
0117aa1266a37b0d2ef429f1b0653b9dde3677fe
Does their model also take the expected answer style as input?
[ "Yes" ]
[ [ "Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style." ] ]
5455b3cdcf426f4d5fc40bc11644a432fa7a5c8f
What do they mean by answer styles?
[ "well-formed sentences vs concise answers" ]
[ [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ] ]
6c80bc3ed6df228c8ca6e02c0a8a1c2889498688
Is there exactly one "answer style" per dataset?
[ "Yes" ]
[ [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ] ]
2d274c93901c193cf7ad227ab28b1436c5f410af
What are the baselines that Masque is compared against?
[ "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D" ]
[ [] ]
e63bde5c7b154fbe990c3185e2626d13a1bad171
What is the performance achieved on NarrativeQA?
[ "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87" ]
[ [] ]
cb8a6f5c29715619a137e21b54b29e9dd48dad7d
What is an "answer style"?
[ "well-formed sentences vs concise answers" ]
[ [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ] ]
8a7bd9579d2783bfa81e055a7a6ebc3935da9d20
What was the previous state of the art model for this task?
[ "WAS, LipCH-Net-seq, CSSMCM-w/o video" ]
[ [ "WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation.", "LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin.", "CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character." ] ]
27b01883ed947b457d3bab0c66de26c0736e4f90
What syntactic structure is used to model tones?
[ "syllables" ]
[ [ "Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance." ] ]
9714cb7203c18a0c53805f6c889f2e20b4cab5dd
What visual information characterizes tones?
[ "video sequence is first fed into the VGG model BIBREF9 to extract visual feature" ]
[ [ "As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 .", "In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step.", "The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0", "The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0" ] ]
a22b900fcd76c3d36b5679691982dc6e9a3d34bf
Do they report results only on English data?
[ "Unanswerable" ]
[ [] ]
fb2593de1f5cc632724e39d92e4dd82477f06ea1
How do they demonstrate the robustness of their results?
[ "performances of a purely content-based model naturally stays stable" ]
[ [ "The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model." ] ]
476d0b5579deb9199423bb843e584e606d606bc7
What baseline and classification systems are used in experiments?
[ "BIBREF13, majority baseline" ]
[ [ "The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.", "The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 )." ] ]
eddabb24bc6de6451bcdaa7940f708e925010912
How are the EAU text spans annotated?
[ "Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level." ]
[ [ "Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors.", "Results" ] ]
f0946fb9df9839977f4d16c43476e4c2724ff772
How are elementary argumentative units defined?
[ "Unanswerable" ]
[ [] ]
e51d0c2c336f255e342b5f6c3cf2a13231789fed
Which Twitter corpus was used to train the word vectors?
[ "They collected tweets in Russian language using a heuristic query specific to Russian" ]
[ [ "Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.", "In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.", "However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:", "russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да." ] ]
5b6aec1b88c9832075cd343f59158078a91f3597
How does proposed word embeddings compare to Sindhi fastText word representations?
[ "Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391" ]
[ [ "Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings.", "Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able." ] ]
a6717e334c53ebbb87e5ef878a77ef46866e3aed
Are trained word embeddings used for any other NLP task?
[ "No" ]
[ [ "In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet." ] ]
a1064307a19cd7add32163a70b6623278a557946
How many unique words are in the dataset?
[ "908456 unique words are available in collected corpus." ]
[ [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens." ] ]
8cb9006bcbd2f390aadc6b70d54ee98c674e45cc
How is the data collected, which web resources were used?
[ "daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary website, novels, history and religious books from Sindhi Adabi Board, tweets regarding news and sports are collected from twitter" ]
[ [ "The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter." ] ]
75043c17a2cddfce6578c3c0e18d4b7cf2f18933
What trends are found in musical preferences?
[ "audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious" ]
[ [ "How the music taste of the audience of popular music changed in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increases." ] ]
95bb3ea4ebc3f2174846e8d422abc076e1407d6a
Which decades did they look at?
[ "between 1900s and 2010s" ]
[ [ "time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became meainstream" ] ]
3ebdc15480250f130cf8f5ab82b0595e4d870e2f
How many genres did they collect from?
[ "77 genres" ]
[ [ "From the description of music genres provided above emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.", "From a computational perspective, genres are classes and, although can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked to two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know already) and then annotate the following dimensions:" ] ]
bbc58b193c08ccb2a1e8235a36273785a3b375fb
Does the paper mention other works proposing methods to detect anglicisms in Spanish?
[ "Yes" ]
[ [ "In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism cadidates from a corpus of tweets in US Spanish." ] ]
3c34187a248d179856b766e9534075da1aa5d1cf
What is the performance of the CRF model on the task described?
[ "the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49)" ]
[ [ "Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the the supplemental test set being from a different time period to the training set) can probably explain these differences." ] ]