Dataset fields:
  question_id : string (40 characters)
  question    : string (4-171 characters)
  answer      : sequence (list of answer strings)
  evidence    : sequence (list of lists of supporting passages)
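Each record below follows this four-field layout. A minimal sketch of how the string-length statistics above could be reproduced from parsed records; the inline record is copied (with truncated evidence) from the first entry below, and the JSON-lines file name in the comment is a placeholder, since the distribution format is not stated in this excerpt.

```python
# One record from the listing below, in the field layout described above.
record = {
    "question_id": "60fd7ef7986a5752b31d3bd12bbc7da6843547a4",
    "question": "What methods is RelNet compared to?",
    "answer": ["We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17"],
    "evidence": [["We evaluate the model's performance on the bAbI tasks BIBREF18 ..."]],
}

def field_length_range(records, field):
    """Min/max character length of a string field, as in the header above."""
    lengths = [len(r[field]) for r in records]
    return min(lengths), max(lengths)

records = [record]                       # in practice: the full list of parsed records
for field in ("question_id", "question"):
    print(field, field_length_range(records, field))

# If the data ships as JSON lines (an assumption), the full list could be loaded with:
#   import json
#   records = [json.loads(line) for line in open("qa_records.jsonl", encoding="utf-8")]
```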
60fd7ef7986a5752b31d3bd12bbc7da6843547a4
What methods is RelNet compared to?
[ "We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17" ]
[ [ "We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.", "The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ] ]
7d59374d9301a0c09ea5d023a22ceb6ce07fb490
How do they measure the diversity of inferences?
[ "by number of distinct n-grams" ]
[ [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens." ] ]
8e2b125426d1220691cceaeaf1875f76a6049cbd
By how much do they improve the accuracy of inferences over state-of-the-art methods?
[ "ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively." ]
[ [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens." ] ]
42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1
Which models do they use as baselines on the Atomic dataset?
[ "RNN-based Seq2Seq, Variational Seq2Seq, VRNMT , CWVAE-Unpretrained" ]
[ [ "We compared our proposed model with the following four baseline methods:", "RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.", "Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.", "VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.", "CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.", "Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively." ] ]
fb76e994e2e3fa129f1e94f1b043b274af8fb84c
How does the context-aware variational autoencoder learn event background information?
[ " CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in finetute stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target." ]
[ [ "In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)." ] ]
99ef97336c0112d9f60df108f58c8b04b519a854
What is the size of the Atomic dataset?
[ "Unanswerable" ]
[ [] ]
95d8368b1055d97250df38d1e8c4a2b283d2b57e
what standard speech transcription pipeline was used?
[ "pipeline that is used at Microsoft for production data" ]
[ [ "The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 ." ] ]
a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4
How much improvement does their method get over the fine tuning baseline?
[ "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE." ]
[ [] ]
46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650
What kinds of neural networks did they use in this paper?
[ "LSTMs" ]
[ [ "For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100." ] ]
4f12b41bd3bb2610abf7d7835291496aa69fb78c
How did they use the domain tags?
[ "Appending the domain tag “<2domain>\" to the source sentences of the respective corpora" ]
[ [ "The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain." ] ]
65e6a1cc2590b139729e7e44dce6d9af5dd2c3b5
Why mixed initiative multi-turn dialogs are the greatest challenge in building open-domain conversational agents?
[ "do not follow a particular plan or pursue a particular fixed information need, integrating content found via search with content from structured data, at each system turn, there are a large number of conversational moves that are possible, most other domains do not have such high quality structured data available, live search may not be able to achieve the required speed and efficiency" ]
[ [ "The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.", "More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence." ] ]
b54fc86dc2cc6994e10c1819b6405de08c496c7b
How is speed measured?
[ "time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred" ]
[ [ "The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.", "The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.", "To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility." ] ]
b43a8a0f4b8496b23c89730f0070172cd5dca06a
What is the architecture of their model?
[ "we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ]
[ [ "Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.", "We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent." ] ]
b161febf86cdd58bd247a934120410068b24b7d1
What are the nine types?
[ "agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, question, other" ]
[ [ "In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models." ] ]
d40662236eed26f17dd2a3a9052a4cee1482d7d6
How do they represent input features of their model to train embeddings?
[ "a vector of frame-level acoustic features" ]
[ [ "An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 ." ] ]
1d791713d1aa77358f11501f05c108045f53c8aa
Which dimensionality do they use for their embeddings?
[ "1061" ]
[ [ "Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set." ] ]
6b6360fab2edc836901195c0aba973eae4891975
Which dataset do they use?
[ "Switchboard conversational English corpus" ]
[ [ "The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments." ] ]
b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8
By how much do they outpeform previous results on the word discrimination task?
[ "Their best average precision tops previous best result by 0.202" ]
[ [] ]
86a93a2d1c19cd0cd21ad1608f2a336240725700
How does Frege's holistic and functional approach to meaning relates to general distributional hypothesis?
[ "interpretation of Frege's work are examples of holistic approaches to meaning" ]
[ [ "Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representations models." ] ]
6090d3187c41829613abe785f0f3665d9ecd90d9
What does Frege's holistic and functional approach to meaning states?
[ "Only in the context of a sentence does a word have a meaning." ]
[ [ "Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character." ] ]
117aa7811ed60e84d40cd8f9cb3ca78781935a98
Do they evaluate the quality of the paraphrasing model?
[ "No" ]
[ [] ]
c359ab8ebef6f60c5a38f5244e8c18d85e92761d
How many paraphrases are generated per question?
[ "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans" ]
[ [ "For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem." ] ]
ad362365656b0b218ba324ae60701eb25fe664c1
What latent variables are modeled in the PCFG?
[ "syntactic information, semantic and topical information" ]
[ [ "In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions." ] ]
423bb905e404e88a168e7e807950e24ca166306c
What are the baselines?
[ "GraphParser without paraphrases, monolingual machine translation based model for paraphrase generation" ]
[ [ "We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.", "We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions." ] ]
e5ae8ac51946db7475bb20b96e0a22083b366a6d
Do they evaluate only on English data?
[ "Yes" ]
[ [ "This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research." ] ]
18288c7b0f8bd7839ae92f9c293e7fb85c7e146a
How strong was the correlation between exercise and diabetes?
[ "weak correlation with p-value of 0.08" ]
[ [ "The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 )." ] ]
b5e883b15e63029eb07d6ff42df703a64613a18a
How were topics of interest about DDEO identified?
[ "using topic modeling model Latent Dirichlet Allocation (LDA)" ]
[ [ "To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .", "Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .", "We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics." ] ]
c45a160d31ca8eddbfea79907ec8e59f543aab86
What datasets are used for evaluation?
[ "Swissmetro dataset" ]
[ [ "The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments." ] ]
7358a1ce2eae380af423d4feeaa67d2bd23ae9dd
How do their train their embeddings?
[ "The embeddings are learned several times using the training set, then the average is taken." ]
[ [ "For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).", "Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set." ] ]
1165fb0b400ec1c521c1aef7a4e590f76fee1279
How do they model travel behavior?
[ "The data from collected travel surveys is used to model travel behavior." ]
[ [ "Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair." ] ]
f2c5da398e601e53f9f545947f61de5f40ede1ee
How do their interpret the coefficients?
[ "The coefficients are projected back to the dummy variable space." ]
[ [ "We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space." ] ]
2d4d0735c50749aa8087d1502ab7499faa2f0dd8
By how much do they outperform previous state-of-the-art models?
[ "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)" ]
[ [ "All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.", "We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss." ] ]
43761478c26ad65bec4f0fd511ec3181a100681c
Do they use pretrained word embeddings?
[ "Yes" ]
[ [ "Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon." ] ]
01866fe392d9196dda1d0b472290edbd48a99f66
How is the lexicon of trafficking flags expanded?
[ "re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones" ]
[ [ "If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos." ] ]
394cf73c0aac8ccb45ce1b133f4e765e8e175403
Do they experiment with the dataset?
[ "Yes" ]
[ [ "The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section." ] ]
2c4003f25e8d95a3768204f52a7a5f5e17cb2102
Do they use a crowdsourcing platform for annotation?
[ "No" ]
[ [ "Reddit is popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting stories and comments, and comments in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversation in Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.", "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”." ] ]
65e32f73357bb26a29a58596e1ac314f7e9c6c91
What is an example of a difficult-to-classify case?
[ "The lack of background, Non-cursing aggressions and insults, the presence of controversial topic words , shallow meaning representation, directly ask the suspected troll if he/she is trolling or not, a blurry line between “Frustrate” and “Neutralize”, distinction between the classes “Troll” and “Engage”" ]
[ [ "In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.", "Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kind of errors reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from anthologies along with a sentiment or polarity. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.", "Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.", "Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too confident to classify a comment as trolling in the presence of these words, but in many cases they do not. In order to ameliorate this problem, one could create ad-hoc word embeddings by training glove or other type of distributed representation on a large corpus for the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words so they might help to reduce these errors.", "Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader need to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.", "Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. 
One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of older response interpretation as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.", "Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions, but gives no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” are examples of this subtle difference. The first is a case of “neutralize” while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.", "Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up with the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases are the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve in very agitated remarks. One may then use this information to disambiguate between the two classes." ] ]
46f175e1322d648ab2c0258a9609fe6f43d3b44e
What potential solutions are suggested?
[ " inclusion of longer parts of the conversation" ]
[ [ "Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up with the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases are the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve in very agitated remarks. One may then use this information to disambiguate between the two classes." ] ]
7cc22fd8c9d0e1ce5e86d0cbe90bf3a177f22a68
What is the size of the dataset?
[ "1000 conversations composed of 6833 sentences and 88047 tokens" ]
[ [ "We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”." ] ]
3fa638e6167e1c7a931c8ee5c0e2e397ec1b6cda
What Reddit communities do they look at?
[ "Unanswerable" ]
[ [] ]
d2b3f2178a177183b1aeb88784e48ff7e3e5070c
How strong is negative correlation between compound divergence and accuracy in performed experiment?
[ " between 0.81 and 0.88" ]
[ [ "Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios." ] ]
d5ff8fc4d3996db2c96cb8af5a6d215484991e62
What are results of comparison between novel method to other approaches for creating compositional generalization benchmarks?
[ "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments" ]
[ [ "The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly." ] ]
d9c6493e1c3d8d429d4ca608f5acf29e4e7c4c9b
How authors justify that question answering dataset presented is realistic?
[ "CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets" ]
[ [ "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution." ] ]
0427ca83d6bf4ec113bc6fec484b2578714ae8ec
What three machine architectures are analyzed?
[ "LSTM+attention, Transformer , Universal Transformer" ]
[ [ "We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22." ] ]
f1c70baee0fd02b8ecb0af4b2daa5a56f3e9ccc3
How big is new question answering dataset?
[ "239,357 English question-answer pairs" ]
[ [ "CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution." ] ]
8db45a8217f6be30c31f9b9a3146bf267de68389
What are other approaches into creating compositional generalization benchmarks?
[ "random , Output length, Input length, Output pattern, Input pattern" ]
[ [ "For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). Table TABREF18 compares the compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:", "Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\\le \\hspace{-2.5pt} N$, while the test set consists of examples with output length $> \\hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.", "Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.", "Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.", "Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names ; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice." ] ]
4e379d6d5f87554fabf6f7f7b6ed92d2025e7280
What problem do they apply transfer learning to?
[ "CSKS task" ]
[ [ "We took inspiration from these works to design our experiments to solve the CSKS task." ] ]
518d0847b02b4f23a8f441faa38b935c9b892e1e
What are the baselines?
[ "Honk, DeepSpeech-finetune" ]
[ [ "Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision. We noticed an improvement when fine tuning DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, it was observed that the model gets confused between the keywords and it also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric( UID14 )). This model gave us the best results." ] ]
8112d18681e266426cf7432ac4928b87f5ce8311
What languages are considered?
[ "English, Hindi" ]
[ [ "Our learning data, which was created in-house, has 20 keywords to be spotted about television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. Train/Validation split was done on speaker level, so as to make sure that all occurrences of a particular speaker is present only on either of two sets. For testing, we used 10 different 5 minutes long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and have different languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in learning data is supposed to detect keywords embedded in conversations of test set." ] ]
b14f13f2a3a316e5a5de9e707e1e6ed55e235f6f
Does this model train faster than state of the art models?
[ "Unanswerable" ]
[ [] ]
ba6422e22297c7eb0baa381225a2f146b9621791
What is the performance difference between proposed method and state-of-the-arts on these datasets?
[ "Difference is around 1 BLEU score lower on average than state of the art methods." ]
[ [ "Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work." ] ]
65e72ad72a9cbfc379f126b10b0ce80cfe44579b
What non autoregressive NMT models are used for comparison?
[ "NAT w/ Fertility, NAT-IR, NAT-REG, LV NAR, CTC Loss, CMLM" ]
[ [ "We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8." ] ]
cf8edc6e8c4d578e2bd9965579f0ee81f4bf35a9
What are three neural machine translation (NMT) benchmark datasets used for evaluation?
[ "WMT2014, WMT2016 and IWSLT-2014" ]
[ [ "FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear." ] ]
04aff4add28e6343634d342db92b3ac36aa8c255
What is result of their attention distribution analysis?
[ "visual attention is very sparse, visual component of the attention hasn't learnt any variation over the source encodings" ]
[ [ "In this section, we analyze the visual and text based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figure FIGREF18 & FIGREF19 shows the comparison of visual and text based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativess during prediction. We find this to be consistent across sentences of different lengths." ] ]
a8e4522ce2ce7336e731286654d6ad0931927a4e
What is result of their Principal Component Analysis?
[ "existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT" ]
[ [ "Figure FIGREF14 (Top) shows the variance explained by the Top 100 principal components, obtained by applying PCA on the How2 and Multi30k training set visual features. The original feature dimensions are 2048 in both the cases. It is clear from the Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These \"common\" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by Top 10, 20, 50 and 100 principal components respectively. It is clear that the visual features in the case of How2 dataset are much more dominated by the \"common\" dimensions, when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under subword vocabulary such as BPE BIBREF15 or Sentence-Piece BIBREF16, the utility of such visual embeddings will only aggravate." ] ]
f6202100cfb83286dc51f57c68cffdbf5cf50a3f
What are the three novel fusion techniques that are proposed?
[ "Step-Wise Decoder Fusion, Multimodal Attention Modulation, Visual-Semantic (VS) Regularizer" ]
[ [ "Proposed Fusion Techniques ::: Step-Wise Decoder Fusion", "Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.", "Proposed Fusion Techniques ::: Multimodal Attention Modulation", "Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual.", "Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer", "In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions." ] ]
bd7039f81a5417474efa36f703ebddcf51835254
What are the architectures of the two models in the proposed solution?
[ "Reasoner model, also implemented with the MatchLSTM architecture, Ranker model" ]
[ [ "Method ::: Passage Ranking Model", "The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.", "Method ::: Cooperative Reasoner", "To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:" ] ]
022e5c996a72aeab890401a7fdb925ecd0570529
How do the two models cooperate to select the most confident chains?
[ "Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards" ]
[ [ "To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:" ] ]
2a950ede24b26a45613169348d5db9176fda4f82
How many hand-labeled reasoning chains have been created?
[ "Unanswerable" ]
[ [] ]
34af2c512ec38483754e94e1ea814aa76552d60a
What benchmarks are created?
[ "Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples" ]
[ [ "In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).", "In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.", "The accuracy is defined as the ratio:" ] ]
c1429f7fed5a4dda11ac7d9643f97af87a83508b
What empirical investigations do they reference?
[ "empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation" ]
[ [ "Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis.", "In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.", "We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6)." ] ]
a93d4aa89ac3abbd08d725f3765c4f1bed35c889
What languages do they investigate for machine translation?
[ "English , Chinese " ]
[ [ "We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:" ] ]
bc473c5bd0e1a8be9b2037aa7006fd68217c3f47
What recommendations do they offer?
[ " Choose professional translators as raters, Evaluate documents, not sentences, Evaluate fluency in addition to adequacy, Do not heavily edit reference translations for fluency, Use original source texts" ]
[ [ "Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.", "Recommendations ::: (R1) Choose professional translators as raters.", "In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.", "Recommendations ::: (R2) Evaluate documents, not sentences.", "When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).", "Recommendations ::: (R3) Evaluate fluency in addition to adequacy.", "Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.", "Recommendations ::: (R4) Do not heavily edit reference translations for fluency.", "In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).", "Recommendations ::: (R5) Use original source texts.", "Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT." ] ]
cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9
What percentage fewer errors did professional translations make?
[ "36%" ]
[ [ "To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32." ] ]
9299fe72f19c1974564ea60278e03a423eb335dc
What was the weakness in Hassan et al.'s evaluation design?
[ "MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, used not originally written Chinese test set\n" ]
[ [ "The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.", "MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.", "The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality." ] ]
2ed02be0c183fca7031ccb8be3fd7bc109f3694b
By how much do they improve over the previous state-of-the-art?
[ "1.08 points in ROUGE-L over our base pointer-generator model , 0.6 points in ROUGE-1" ]
[ [ "Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries." ] ]
be73a88d5b695200e2ead4c2c24e2a977692970e
Is there any evidence that encoders with latent structures work well on other tasks?
[ "Yes" ]
[ [ "Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5)." ] ]
0e45aae0e97a6895543e88705e153f084ce9c136
Do they report results only on English?
[ "Unanswerable" ]
[ [] ]
c515269b37cc186f6f82ab9ada5d9ca176335ded
What evidence do they present that the model attends to shallow context clues?
[ "Using model gradients with respect to input features they presented that the most important model inputs are verbs associated with entities which shows that the model attends to shallow context clues" ]
[ [ "One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics." ] ]
43f86cd8aafe930ebb35ca919ada33b74b36c7dd
In what way is the input restructured?
[ "In four entity-centric ways - entity-first, entity-last, document-level and sentence-level" ]
[ [ "Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for.", "As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \\times m$ input sequences for fine tuning our classification task." ] ]
aa60b0a6c1601e09209626fd8c8bdc463624b0b3
What are their results on the entity recognition task?
[ "With both test sets performances decrease, varying between 94-97%" ]
[ [ "The performances of the NER experiments are reported separately for three different parts of the system proposed.", "Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot." ] ]
3837ae1e91a4feb27f11ac3b14963e9a12f0c05e
What task-specific features are used?
[ "6)Contributor first names, 7)Contributor last names, 8)Contributor types (\"soprano\", \"violinist\", etc.), 9)Classical work types (\"symphony\", \"overture\", etc.), 10)Musical instruments, 11)Opus forms (\"op\", \"opus\"), 12)Work number forms (\"no\", \"number\"), 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"), 14)Work Modes (\"major\", \"minor\", \"m\")" ]
[ [ "In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features)." ] ]
ef4d6c9416e45301ea1a4d550b7c381f377cacd9
What kind of corpus-based features are taken into account?
[ "standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, series of features representing tokens' left and right context" ]
[ [ "We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 ." ] ]
689d1d0c4653a8fa87fd0e01fa7e12f75405cd38
Which machine learning algorithms did they explore?
[ "biLSTM-networks" ]
[ [ "However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported." ] ]
7920f228de6ef4c685f478bac4c7776443f19f39
What language is the Twitter content in?
[ "English" ]
[ [ "In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work." ] ]
41844d1d1ee6d6d38f31b3a17a2398f87566ed92
What is the architecture of the siamese neural network?
[ "two parallel convolutional networks, INLINEFORM0 , that share the same set of weights" ]
[ [ "To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10" ] ]
ae17066634bd2731a07cd60e9ca79fc171692585
How do they explore domain mismatch?
[ "Unanswerable" ]
[ [] ]
4fa2faa08eeabc09d78d89aaf0ea86bb36328172
How do they explore dialect variability?
[ "Unanswerable" ]
[ [] ]
e87f47a293e0b49ab8b15fc6633d9ca6dc9de071
Which are the four Arabic dialects?
[ "Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR)" ]
[ [ "For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets." ] ]
7426a6e800d6c11795941616fc4a243e75716a10
What factors contribute to interpretive biases according to this research?
[ "Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march" ]
[ [ "While the choice of wording helps to convey bias, just as crucial is the way that the reporters portray the march as being related to other events. Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias. Townhall's bias against the March of Science expressed in the argument that it politicizes science cannot be traced back to negative opinion words; it relies on a comparison between the March for Science and the Women's March, which is portrayed as a political, anti-Trump event. Newsbusters takes a different track: the opening paragraph conveys an overall negative perspective on the March for Science, despite its neutral language, but it achieves this by contrasting general interest in the march with a claimed negative view of the march by many “actual scientists.” On the other hand, the New York Times points to an important and presumably positive outcome of the march, despite its controversiality: a renewed look into the role of science in public life and politics. Like Newsbusters, it lacks any explicit evaluative language and relies on the structural relations between events to convey an overall positive perspective; it contrasts the controversy surrounding the march with a claim that the march has triggered an important discussion, which is in turn buttressed by the reporter's mentioning of the responses of the Times' readership." ] ]
da4535b75e360604e3ce4bb3631b0ba96f4dadd3
Which interpretative biases are analyzed in this paper?
[ "in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury" ]
[ [ "An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury." ] ]
4d30c2223939b31216f2e90ef33fe0db97e962ac
How many words are coded in the dictionary?
[ "11'248" ]
[ [ "We pair 11'248 standard German written words with their phonetical representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.)" ] ]
7b47aa6ba247874eaa8ab74d7cb6205251c01eb5
Is the model evaluated on the graphemes-to-phonemes task?
[ "Yes" ]
[ [ "To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: Out-of-vocabulary words can be a problem for ASR system. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is input and output vocabulary. The Sequitur and our model are using the dictionary with the same train (19'898 samples), test (2'412 samples) and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialect, because most of the samples are from this two dialects. In Table TABREF15 we show the result of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first columns we have the result using the whole test set with all the dialects, and in the 2nd and 3rd columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialect. For here we can clearly see that our model performs better than the Sequitur model. The reason why we have less matches in the Visp dialect compared to Zurich is because most of the our data is from the Zurich dialect." ] ]
ce14b87dacfd5206d2a5af7c0ed1cfeb7b181922
How does the QuaSP+Zero model work?
[ "does not just consider the question tokens, but also the relationship between those tokens and the properties" ]
[ [ "We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4" ] ]
709a4993927187514701fe3cc491ac3030da1215
Which off-the-shelf tools do they use on QuaRel?
[ "information retrieval system, word-association method, CCG-style rule-based semantic parser written specifically for friction questions, state-of-the-art neural semantic parser" ]
[ [ "We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn." ] ]
a3c6acf900126bc9bd9c50ce99041ea00761da6a
How do they obtain the logical forms of their questions in their dataset?
[ " workers were given a seed qualitative relation, asked to enter two objects, people, or situations to compare, created a question, guided by a large number of examples, LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions" ]
[ [ "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ).", "Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows:", "From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results." ] ]
31b631a8634f6180b20a72477040046d1e085494
Do all questions in the dataset allow the answers to pick from 2 options?
[ "Yes" ]
[ [ "We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 )." ] ]
ab78f066144936444ecd164dc695bec1cb356762
What is shared in the joint model?
[ "jointly trained with slots" ]
[ [ "Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)", "Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)" ] ]
e659ceb184777015c12db2da5ae396635192f0b0
Are the intent labels imbalanced in the dataset?
[ "Yes" ]
[ [ "For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics." ] ]
b512ab8de26874ee240cffdb3c65d9ac8d6023d9
What kernels are used in the support vector machines?
[ "Unanswerable" ]
[ [] ]
4e4d377b140c149338446ba69737ea191c4328d9
What dataset is used?
[ "ACL Anthology Reference Corpus" ]
[ [ "The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality." ] ]
828ce5faed7783297cf9ce202364f999b8d4a1f6
What metrics are considered?
[ "F-score, micro-F, macro-F, weighted-F " ]
[ [ "One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. Macro-F calculates metrics for each label, and find their unweighted mean. Macro-F does not take label imbalance into account. Weighted-F calculates metrics for each label, and find their average, weighted by support (the number of true instances for each label). Weighted-F alters macro-F to account for label imbalance." ] ]
9d016eb3913b41f7a18c6fa865897c12b5fe0212
Did the authors evaluate their system output for coherence?
[ "Yes" ]
[ [ "We want to further study how the proposed cache-based neural model influence coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means." ] ]
c1c611409b5659a1fd4a870b6cc41f042e2e9889
What evaluations did the authors use on their system?
[ "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence." ]
[ [] ]
79bb1a1b71a1149e33e8b51ffdb83124c18f3e9c
What accuracy does the CNN model achieve?
[ "Combined per-pixel accuracy for character line segments is 74.79" ]
[ [] ]
26faad6f42b6d628f341c8d4ce5a08a591eea8c2
How many documents are in the Indiscapes dataset?
[ "508" ]
[ [] ]
20be7a776dfda0d3c9dc10270699061cb9bc8297
What language(s) are the manuscripts written in?
[ "Unanswerable" ]
[ [] ]
3bfb8c12f151dada259fbd511358914c4b4e1b0e
What metrics are used to evaluation revision detection?
[ "precision, recall, F-measure" ]
[ [ "We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record." ] ]
3f85cc5be84479ba668db6d9f614fedbff6d77f1
How large is the Wikipedia revision dump dataset?
[ "eight GB" ]
[ [ "The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data." ] ]
126e8112e26ebf8c19ca7ff3dd06691732118e90
What simulated datasets are collected?
[ "There are 6 simulated datasets collected, each initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and revising existing documents" ]
[ [ "The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.", "We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5." ] ]