question_id: string (length 40)
question: string (length 4 to 171)
answer: sequence
evidence: sequence
1949d84653562fa9e83413796ae55980ab7318f2
What is MRR?
[ "mean reciprocal rank" ]
[ [ "I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits." ] ]
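The record above reports mean reciprocal rank (MRR) for a held-out context-word ranking task. As a minimal illustration of the metric (not the paper's code; names and toy data are assumptions), MRR over a set of queries can be computed as follows:

```python
def mean_reciprocal_rank(ranked_lists, gold_items):
    """MRR over queries: average of 1 / (rank of the first correct item).

    ranked_lists: list of candidate lists, each sorted from most to least likely
    gold_items:   list of the correct item for each query
    """
    total = 0.0
    for candidates, gold in zip(ranked_lists, gold_items):
        rank = candidates.index(gold) + 1  # 1-based rank of the gold item
        total += 1.0 / rank
    return total / len(gold_items)

# toy usage: gold items ranked 1st, 3rd, and 2nd -> MRR = (1 + 1/3 + 1/2) / 3
print(mean_reciprocal_rank([["a", "b"], ["x", "y", "z"], ["p", "q"]],
                           ["a", "z", "q"]))
```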
7ee660927e2b202376849e489faa7341518adaf9
Which techniques for word embeddings and topic models are used?
[ " skip-gram, LDA" ]
[ [ "To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram.", "As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0" ] ]
f6380c60e2eb32cb3a9d3bca17cf4dc5ae584eca
Why is big data not appropriate for this task?
[ "Training embeddings from small-corpora can increase the performance of some tasks" ]
[ [ "Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important.", "I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues." ] ]
c7d99e66c4ab555fe3d616b15a5048f3fe1f3f0e
What is an example of a computational social science NLP task?
[ "Visualization of State of the union addresses" ]
[ [ "I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”)." ] ]
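The case study above embeds each document by summing its token vectors, normalizing, and projecting to two dimensions with t-SNE. A minimal sketch of that projection step with scikit-learn, assuming `doc_vectors` stands in for the summed, unit-normalized document embeddings (hypothetical data, not the paper's):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# hypothetical stand-in for the summed, unit-normalized document vectors
rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(200, 100))
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

# project to 2-D for visualization, as in the state of the Union case study
coords = TSNE(n_components=2, random_state=0).fit_transform(doc_vectors)
plt.scatter(coords[:, 0], coords[:, 1], s=5)
plt.show()
```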
400efd1bd8517cc51f217b34cbf19c75d94b1874
Do they report results only on English datasets?
[ "Unanswerable" ]
[ [] ]
4698298d506bef02f02c80465867f2cd12d29182
What were the previous state of the art benchmarks?
[ "BIBREF35 for VQA dataset, BIBREF5, BIBREF36" ]
[ [ "The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question. In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics. We improve over the previous state-of-the-art BIBREF35 for VQA dataset by around 6% in BLEU score and 10% in METEOR score. In the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and BIBREF36 by 3.5% in terms of METEOR scores." ] ]
4e2cb1677df949ee3d1d3cd10962b951da907105
How/where are the natural question generated?
[ "Decoder that generates question using an LSTM-based language model" ]
[ [ "Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model." ] ]
9cc0fd3721881bd8e246d20fff5d15bd32365655
What is the input to the differential network?
[ "image" ]
[ [ "Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model." ] ]
82c4863293a179fe5c0d9a1ff17d224bde952f54
How do the authors define a differential network?
[ "The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module." ]
[ [ "The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module.", "We use a triplet network BIBREF41 , BIBREF42 in our representation module. We refereed a similar kind of work done in BIBREF34 for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights for the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights for the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0", "The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion viz., joint, element-wise addition, hadamard and attention method. Each of these variants receives image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0" ] ]
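The representation module above is a triplet network whose target, supporting, and contrasting branches share parameters. A minimal PyTorch sketch of that weight-sharing pattern with a margin-based triplet loss; the encoder, dimensions, and inputs are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One encoder applied to target, supporting, and contrasting inputs,
    so all three branches share the same parameters."""
    def __init__(self, in_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, target, supporting, contrasting):
        return self.net(target), self.net(supporting), self.net(contrasting)

encoder = SharedEncoder()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# hypothetical joint image-caption features for a batch of 8 examples
t, s, c = (torch.randn(8, 512) for _ in range(3))
anchor, positive, negative = encoder(t, s, c)
loss = triplet_loss(anchor, positive, negative)  # pull supporting closer, push contrasting away
loss.backward()
```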
88d9d32fb7a22943e1f4868263246731a1726e6e
How do the authors define exemplars?
[ "Exemplars aim to provide appropriate context., joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption" ]
[ [ "Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions.", "We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. We empirically evaluated whether an explicit approach that uses the differential set of tags as a one-hot encoding improves the question generation, or the implicit embedding obtained based on the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids to improve the generation of target question. These are observed to be better than the explicitly obtained context tags as can be seen in Figure FIGREF2 . We now explain our method in detail." ] ]
af82043e7d046c2fb1ed86ef9b48c35492e6a48c
Is this a task other people have worked on?
[ "No" ]
[ [ "We propose a novel problem of relationship recommendation (RSR). Different from the reciprocal recommendation problem on DSNs, our RSR task operates on regular social networks (RSN), estimating long-term and serious relationship compatibility based on social posts such as tweets." ] ]
1bc8904118eb87fa5949ad7ce5b28ad3b3082bd0
Where did they get the data for this project?
[ "Twitter" ]
[ [ "Since there are no publicly available datasets for training relationship recommendation models, we construct our own. The goal is to construct a list of user pairs in which both users are in relationship. Our dataset is constructed via distant supervision from Twitter. We call this dataset the Love Birds dataset. This not only references the metaphorical meaning of the phrase `love birds' but also deliberately references the fact that the Twitter icon is a bird. This section describes the construction of our dataset. Figure 1 describes the overall process of our distant supervision framework." ] ]
5dc1aca619323ea0d4717d1f825606b2b7c21f01
Which major geographical regions are studied?
[ "Northeast U.S, South U.S., West U.S. and Midwest U.S." ]
[ [ "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ] ]
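The analysis above fits a linear regression from college attributes (enrollment, male/female ratio, private/public, region, normalized reported cases) to the normalized #MeToo user count. A minimal scikit-learn sketch of such a fit; the column names and values are hypothetical placeholders, not the paper's data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# hypothetical feature table for a handful of colleges (not the paper's data)
df = pd.DataFrame({
    "enrollment": [30000, 7000, 15000, 45000],
    "male_female_ratio": [0.9, 1.1, 1.0, 0.95],
    "is_private": [0, 1, 1, 0],
    "region": ["Northeast", "South", "West", "Midwest"],
    "reported_cases_per_student": [0.002, 0.001, 0.003, 0.0015],
    "metoo_users_per_student": [0.004, 0.002, 0.005, 0.003],
})

X = pd.get_dummies(df.drop(columns="metoo_users_per_student"),
                   columns=["region"])          # one-hot encode the region factor
y = df["metoo_users_per_student"]
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, np.round(model.coef_, 6))))
```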
dd5c9a370652f6550b4fd13e2ac317eaf90973a8
How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?
[ "0.9098 correlation" ]
[ [ "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ] ]
39c78924df095c92e058ffa5a779de597e8c43f4
How are the topics embedded in the #MeToo tweets extracted?
[ "Using Latent Dirichlet Allocation on TF-IDF transformed from the corpus" ]
[ [ "In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms." ] ]
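The topic extraction above applies LDA to a TF-IDF-transformed corpus (with the topic number chosen by coherence). A minimal scikit-learn sketch of the transform-then-LDA step; the toy tweets are illustrative and coherence-based model selection is omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["sexual harassment on campus must stop",
          "women share their metoo story",
          "men should listen and support survivors"]   # toy stand-in corpus

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(tweets)                         # TF-IDF discounts common terms

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```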
a95188a0f35d3cb3ca70ae1527d57ac61710afa3
How many tweets are explored in this paper?
[ "60,000 " ]
[ [ "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ] ]
a1557ec0f3deb1e4cd1e68f4880dcecda55656dd
Which geographical regions correlate to the trend?
[ "Northeast U.S., West U.S. and South U.S." ]
[ [ "Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the \"Yes means yes\" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny." ] ]
096f5c59f43f49cab1ef37126341c78f272c0e26
How many followers did they analyze?
[ "51,104" ]
[ [ "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ] ]
c348a8c06e20d5dee07443e962b763073f490079
What two components are included in their proposed framework?
[ "evidence extraction and answer synthesis" ]
[ [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ] ]
0300cf768996849cab7463d929afcb0b09c9cf2a
Which framework they propose in this paper?
[ " extraction-then-synthesis framework" ]
[ [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ] ]
dd8f72cb3c0961b5ca1413697a00529ba60571fe
Why MS-MARCO is different from SQuAD?
[ "there are several related passages for each question in the MS-MARCO dataset., MS-MARCO also annotates which passage is correct" ]
[ [ "We propose a multi-task learning framework for evidence extraction. Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct. To this end, we propose improving text span prediction with passage ranking. Specifically, as shown in Figure 2 , in addition to predicting a text span, we apply another task to rank candidate passages with the passage-level representation." ] ]
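The evidence extractor above is trained with multi-task learning: span prediction plus passage ranking, since MS-MARCO marks which passage is correct. A minimal PyTorch sketch of combining the two objectives into one loss; the encoder, heads, and 0.5 task weight are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class EvidenceExtractor(nn.Module):
    """Shared passage encoder with a span head and a passage-ranking head."""
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(input_size=300, hidden_size=hidden, batch_first=True)
        self.span_head = nn.Linear(hidden, 2)   # start / end logits per token
        self.rank_head = nn.Linear(hidden, 1)   # one relevance score per passage

    def forward(self, passages):                # passages: (num_passages, seq_len, 300)
        states, _ = self.encoder(passages)
        span_logits = self.span_head(states)                              # (P, T, 2)
        rank_logits = self.rank_head(states.mean(dim=1)).squeeze(-1)      # (P,)
        return span_logits, rank_logits

model = EvidenceExtractor()
passages = torch.randn(4, 50, 300)              # toy batch: 4 passages of 50 tokens
span_logits, rank_logits = model(passages)

start_gold, end_gold, gold_passage = torch.tensor([3]), torch.tensor([7]), torch.tensor([1])
ce = nn.CrossEntropyLoss()
span_loss = ce(span_logits[gold_passage, :, 0], start_gold) + \
            ce(span_logits[gold_passage, :, 1], end_gold)
rank_loss = ce(rank_logits.unsqueeze(0), gold_passage)   # which passage is correct
loss = span_loss + 0.5 * rank_loss                        # 0.5 is an assumed task weight
loss.backward()
```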
fbd094918b493122b3bba99cefe5da80cf88959c
Did they experiment with pre-training schemes?
[ "No" ]
[ [] ]
78661bdd4d11148e07bdf17141cf088db4ad60c6
What were their results on the test set?
[ "an official F1-score of 0.2905 on the test set" ]
[ [ "Out of 43 teams our system ranked 421st with an official F1-score of 0.2905 on the test set. Although our model outperforms the baseline in the validation set in terms of F1-score, we observe important drops for all metrics compared to the test set, showing that the architecture seems to be unable to generalize well. We think these results highlight the necessity of an ad-hoc architecture for the task as well as the relevance of additional information. The work of BIBREF21 offers interesting contributions in these two aspects, achieving good results for a range of tasks that include sarcasm detection, using an additional attention layer over a BiLSTM like ours, while also pre-training their model on an emoji-based dataset of 1246 million tweets." ] ]
95d98b2a7fbecd1990ec9a070f9d5624891a4f26
What is the size of the dataset?
[ "a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided" ]
[ [ "For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 . The corpus comprises different types of irony:" ] ]
586566de02abdf20b7bfd0d5a43ba93cb02795c3
What was the baseline model?
[ "a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs" ]
[ [ "To compare our results, we use the provided baseline, which is a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs. For pre-processing, in this case we do not preserve casing and delete English stopwords." ] ]
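The baseline above is a non-optimized linear-kernel SVM over TF-IDF bag-of-words vectors, with lowercasing and English stopword removal. A minimal scikit-learn sketch of such a baseline; the toy texts and labels are assumptions, not the shared-task data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["oh great, another monday #not", "lovely weather today",
         "i just love being ignored", "had a nice walk in the park"]
labels = [1, 0, 1, 0]                        # 1 = ironic, 0 = non-ironic (toy labels)

baseline = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # lowercased, stopwords removed
    LinearSVC(),                                            # linear-kernel SVM, default parameters
)
baseline.fit(texts, labels)
print(baseline.predict(["what a wonderful delay #not"]))
```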
dfd9302615b27abf8cbef1a2f880a73dd5f0c753
What models are evaluated with QAGS?
[ "bert-large-wwm, bert-base, bert-large" ]
[ [ "For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs." ] ]
e09dcb6fc163bba7d704178e7edba2e630b573c2
Do they use crowdsourcing to collect human judgements?
[ "Yes" ]
[ [ "We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details." ] ]
c8f11561fc4da90bcdd72f76414421e1527c0287
Which dataset(s) do they evaluate on?
[ "LJSpeech" ]
[ [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female english speaker collect from across 7 different non-fictional books. The total training data time is around 21 hours of audio.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 ." ] ]
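The record above notes that a generated mel spectrogram can be inverted either with an iterative algorithm such as Griffin-Lim or with a neural vocoder. A minimal librosa sketch of the Griffin-Lim route, assuming `mel` stands in for a model-predicted mel spectrogram; the sample rate, FFT, and hop settings here are assumptions:

```python
import numpy as np
import librosa

sr, n_fft, hop = 22050, 1024, 256

# hypothetical stand-in for a model-predicted (non-negative) mel spectrogram
mel = np.abs(np.random.randn(80, 400)).astype(np.float32)

# invert mel -> linear spectrogram -> waveform via iterative Griffin-Lim
audio = librosa.feature.inverse.mel_to_audio(
    mel, sr=sr, n_fft=n_fft, hop_length=hop, n_iter=60)
print(audio.shape)
```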
51de39c8bad62d3cbfbec1deb74bd8a3ac5e69a8
Which modifications do they make to well-established Seq2seq architectures?
[ "Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible" ]
[ [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 .", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined the original additive Seq2Seq BIBREF7 Bahdanau attention.", "We propose to replace this attention with the simpler query-key attention from transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than say machine translation, we employ query-key attention as it's simple to implement and requires less parameters than the original Bahdanau attention.", "Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotoic as early as possible.", "As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0" ] ]
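The modification above adds a guided attention loss that pushes the soft alignment toward the diagonal. A minimal PyTorch sketch of such a diagonal-penalty mask, following the commonly used Gaussian formulation; the width parameter `g` and the toy alignments are assumptions, not the paper's exact values:

```python
import torch

def guided_attention_loss(attn, g=0.2):
    """attn: (T_decoder, N_encoder) soft alignment; penalize mass far from the diagonal."""
    T, N = attn.shape
    t = torch.arange(T, dtype=torch.float32).unsqueeze(1) / T   # decoder position ratio
    n = torch.arange(N, dtype=torch.float32).unsqueeze(0) / N   # encoder position ratio
    mask = 1.0 - torch.exp(-((n - t) ** 2) / (2.0 * g * g))     # ~0 on the diagonal, ~1 far away
    return (attn * mask).mean()

# toy check: near-diagonal attention incurs a smaller loss than uniform attention
diag_attn = torch.softmax(-100 * (torch.arange(50).unsqueeze(1) / 50
                                  - torch.arange(40).unsqueeze(0) / 40) ** 2, dim=1)
uniform_attn = torch.full((50, 40), 1.0 / 40)
print(guided_attention_loss(diag_attn) < guided_attention_loss(uniform_attn))  # True
```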
d9cbcaf8f0457b4be59178446f1a280d17a923fa
How do they measure the size of models?
[ "Direct comparison of model parameters" ]
[ [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 .", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." ] ]
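Model size above is measured by directly comparing parameter counts (about 4.5M vs. roughly 13M). A minimal PyTorch sketch of that count on an arbitrary model; the toy module is only a placeholder, the TTS models themselves are not reproduced here:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

toy = nn.LSTM(input_size=64, hidden_size=128, num_layers=2)  # stand-in model
print(f"{count_parameters(toy):,} trainable parameters")
```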
fc69f5d9464cdba6db43a525cecde2bf6ddaaa57
Do they reduce the number of parameters in their architecture compared to other direct text-to-speech models?
[ "Yes" ]
[ [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." ] ]
e1f5531ed04d0aae1dfcb0559f1512a43134c43a
Do they use pretrained models?
[ "Yes" ]
[ [ "In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”.", "For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data." ] ]
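The setup above casts zero-shot text classification as binary textual entailment with a BERT-style model trained on MNLI/RTE/FEVER. A minimal sketch of the same idea using the Hugging Face zero-shot pipeline, which wraps an off-the-shelf NLI checkpoint; the specific model and example text are assumptions, not the paper's:

```python
from transformers import pipeline

# an off-the-shelf NLI model stands in for the paper's BERT entailment model
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The team released a new open-source library for speech recognition."
candidate_labels = ["technology", "sports", "politics"]

# each label is inserted into a hypothesis template and scored by the NLI model;
# the entailment probability is used to rank the candidate labels
result = classifier(text, candidate_labels)
print(result["labels"][0], result["scores"][0])
```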
4a4b7c0d3e7365440b49e9e6b67908ea5cea687d
What are their baseline models?
[ "Majority, ESA, Word2Vec , Binary-BERT" ]
[ [ "Majority: the text picks the label of the largest size.", "ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train.", "We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles.", "Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either.", "Binary-BERT: We fine-tune BERT on train, which will yield a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with “entailment” decision in multi-label cases." ] ]
da845a2a930fd6a3267950bec5928205b6c6e8e8
How was speed measured?
[ "how long it takes the system to lemmatize a set number of words" ]
[ [ "In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependency which makes its integration in other systems quite simple." ] ]
2fa0b9d0cb26e1be8eae7e782ada6820bc2c037f
What were their accuracy results on the task?
[ "97.32%" ]
[ [] ]
76ce9e02d97e2d77fe28c0fa78526809e7c195c6
What is the state of the art?
[ " MADAMIRA BIBREF6 system" ]
[ [ "Khoja's stemmer BIBREF4 and Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from Penn Arabic Tree bank (PATB)), and the reported accuracy was 96.2% as the percentage of words where the chosen analysis (provided by SAMA morphological analyzer BIBREF7 ) has the correct lemma.", "As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, so another column for undiacritized lemma is added and it's used for evaluating our lemmatizer and comparing with state-of-the-art system for lemmatization; MADAMIRA." ] ]
64c7545ce349265e0c97fd6c434a5f8efdc23777
How was the dataset annotated?
[ "Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization" ]
[ [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.", "Word are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 ." ] ]
47822fec590e840438a3054b7f512fec09dbd1e1
What is the size of the dataset?
[ "Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each" ]
[ [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ] ]
989271972b3176d0a5dabd1cc0e4bdb671269c96
Where did they collect their dataset from?
[ "from Arabic WikiNews site https://ar.wikinews.org/wiki" ]
[ [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ] ]
26c64edbc5fa4cdded69ace66fdba64a9648b78e
How much in-domain data is enough for joint models to outperform baselines?
[ "Unanswerable" ]
[ [] ]
e06e1b103483e1e58201075c03e610202968c877
How many parameters does their proposed joint model have?
[ "Unanswerable" ]
[ [] ]
b0fd686183b056ea3f63a7ab494620df1d598c24
How does the model work if no treebank is available?
[ "train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags" ]
[ [ "mcdonald:11 established that, when no treebank annotations are available in the target language, training on multiple source languages outperforms training on one (i.e., multi-source model transfer outperforms single-source model transfer). In this section, we evaluate the performance of our parser in this setup. We use two strong baseline multi-source model transfer parsers with no supervision in the target language:", "Following guo:16, for each target language, we train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags. Our parser uses the same word embeddings and word clusters used in guo:16, and does not use any typology information." ] ]
7065e6140dbaffadebe62c9c9d3863ca0f829d52
How many languages have this parser been tried on?
[ "seven" ]
[ [ "We train MaLOPa on the concantenation of training sections of all seven languages. To balance the development set, we only concatenate the first 300 sentences of each language's development section." ] ]
9508e9ec675b6512854e830fa89fa6a747b520c5
Do they use attention?
[ "Yes" ]
[ [ "The NLG model is a seq2seq model with attention as described in section SECREF2. It takes as input a MR and generates a natural language text. The objective is to find the model parameters $\\theta ^{nlg}$ such that they minimize the loss which is defined as follows:" ] ]
a65e5c97ade6e697ec10bcf3c3190dc6604a0cd5
What non-annotated datasets are considered?
[ "E2E NLG challenge Dataset, The Wikipedia Company Dataset" ]
[ [ "The performance of the joint learning architecture was evaluated on the two datasets described in the previous section. The joint learning model requires a paired and an unpaired dataset, so each of the two datasets was split into several parts. E2E NLG challenge Dataset: The training set of the E2E challenge dataset which consists of 42K samples was partitioned into a 10K paired and 32K unpaired datasets by a random process. The unpaired database was composed of two sets, one containing MRs only and the other containing natural texts only. This process resulted in 3 training sets: paired set, unpaired text set and unpaired MR set. The original development set (4.7K) and test set (4.7K) of the E2E dataset have been kept.", "The Wikipedia Company Dataset: The Wikipedia company dataset presented in Section SECREF18 was filtered to contain only companies having abstracts of at least 7 words and at most 105 words. As a result of this process, 43K companies were retained. The dataset was then divided into: a training set (35K), a development set (4.3K) and a test set (4.3K). Of course, there was no intersection between these sets.", "The training set was also partitioned in order to obtain the paired and unpaired datasets. Because of the loose correlation between the MRs and their corresponding text, the paired dataset was selected such that it contained the infobox values with the highest similarity with its reference text. The similarity was computed using “difflib” library, which is an extension of the Ratcliff and Obershelp algorithm BIBREF19. The paired set was selected in this way (rather than randomly) to get samples as close as possible to a carefully annotated set. At the end of partitioning, the following training sets were obtained: paired set (10.5K), unpaired text set (24.5K) and unpaired MR set (24.5K)." ] ]
e28a6e3d8f3aa303e1e0daff26b659a842aba97b
Did they compare to Transformer based large language models?
[ "No" ]
[ [ "We compared our models with the following state-of-the-art baselines:", "Sequence to Sequence (Seq2Seq): A simple encoder-decoder model which concatenates four sentences to a long sentence with an attention mechanism BIBREF31 .", "Hierarchical LSTM (HLSTM): The story context is represented by a hierarchical LSTM: a word-level LSTM for each sentence and a sentence-level LSTM connecting the four sentences BIBREF29 . A hierarchical attention mechanism is applied, which attends to the states of the two LSTMs respectively.", "HLSTM+Copy: The copy mechanism BIBREF32 is applied to hierarchical states to copy the words in the story context for generation.", "HLSTM+Graph Attention(GA): We applied multi-source attention HLSTM where commonsense knowledge is encoded by graph attention.", "HLSTM+Contextual Attention(CA): Contextual attention is applied to represent commonsense knowledge." ] ]
0fce128b8aaa327ac0d58ec30cd2ecbea2019baa
Which baselines are they using?
[ "Seq2Seq, HLSTM, HLSTM+Copy, HLSTM+Graph Attention, HLSTM+Contextual Attention" ]
[ [ "We compared our models with the following state-of-the-art baselines:", "Sequence to Sequence (Seq2Seq): A simple encoder-decoder model which concatenates four sentences to a long sentence with an attention mechanism BIBREF31 .", "Hierarchical LSTM (HLSTM): The story context is represented by a hierarchical LSTM: a word-level LSTM for each sentence and a sentence-level LSTM connecting the four sentences BIBREF29 . A hierarchical attention mechanism is applied, which attends to the states of the two LSTMs respectively.", "HLSTM+Copy: The copy mechanism BIBREF32 is applied to hierarchical states to copy the words in the story context for generation.", "HLSTM+Graph Attention(GA): We applied multi-source attention HLSTM where commonsense knowledge is encoded by graph attention.", "HLSTM+Contextual Attention(CA): Contextual attention is applied to represent commonsense knowledge." ] ]
7a7e279170e7a2f3bc953c37ee393de8ea7bd82f
What two types the Chinese reading comprehension dataset consists of?
[ "cloze-style reading comprehension and user query reading comprehension questions" ]
[ [ "Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where training and test set are exactly the same type.", "User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e. training and testing is fairly different." ] ]
e3981a11d3d6a8ab31e1b0aa2de96f253653cfb2
For which languages most of the existing MRC datasets are created?
[ "English" ]
[ [ "The previously mentioned datasets are all in English. To add diversities to the reading comprehension datasets, Cui et al. cui-etal-2016 proposed the first Chinese cloze-style reading comprehension dataset: People Daily & Children's Fairy Tale, including People Daily news datasets and Children's Fairy Tale datasets. They also generate the data in an automatic manner, which is similar to the previous datasets. They choose short articles (several hundreds of words) as Document and remove a word from it, whose type is mostly named entities and common nouns. Then the sentence that contains the removed word will be regarded as Query. To add difficulties to the dataset, along with the automatically generated evaluation sets (validation/test), they also release a human-annotated evaluation set. The experimental results show that the human-annotated evaluation set is significantly harder than the automatically generated questions. The reason would be that the automatically generated data is accordance with the training data which is also automatically generated and they share many similar characteristics, which is not the case when it comes to human-annotated data." ] ]
74b0d3ee0cc9b0a3d9b264aba9901ff97048a897
How did they induce the CFG?
[ "the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns" ]
[ [ "In this paper, we propose a novel approach to joint learning of ontology and semantic parsing, which is designed for homogeneous collections of text, where each fact is usually stated only once, therefore we cannot rely on data redundancy. Our approach is text-driven, semi-automatic and based on grammar induction. It is presented in Figure 1 .The input is a seed ontology together with text annotated with concepts from the seed ontology. The result of the process is an ontology with extended instances, classes, taxonomic and non-taxonomic relations, and a semantic parser, which transform basic units of text, i.e sentences, into semantic trees. Compared to trees that structure sentences based on syntactic information, nodes of semantic trees contain semantic classes, like location, profession, color, etc. Our approach does not rely on any syntactic analysis of text, like part-of-speech tagging or dependency parsing. The grammar induction method works on the premise of curriculum learning BIBREF7 , where the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns. A context-free grammar (CFG) is induced from the text, which is represented by several layers of semantic annotations. The motivation to use CFG is that it is very suitable for the proposed alternating usage of top-down and bottom-up parsing, where new rules are induced from previously unparsable parts. Furthermore, it has been shown by BIBREF8 that CFGs are expressive enough to model almost every language phenomena. The induction is based on a greedy iterative procedure that involves minor human involvement, which is needed for seed rule definition and rule categorization. Our experiments show that although the grammar is ambiguous, it is scalable enough to parse a large dataset of sentences." ] ]
9eb5b336b3dcb7ab63f673ba9ab1818573cce6c3
How big is their dataset?
[ "1.1 million sentences, 119 different relation types (unique predicates)" ]
[ [ "There are almost 1.1 million sentences in the collection. The average length of a sentence is 18.3 words, while the median length is 13.8 words. There are 2.3 links per sentence.", "There are 119 different relation types (unique predicates), having from just a few relations to a few million relations. Since DBpedia and Freebase are available in RDF format, we used the RDF store for querying and for storage of existing and new relations." ] ]
0a92352839b549d07ac3f4cb997b8dc83f64ba6f
By how much do they outperform basic greedy and cross-entropy beam decoding?
[ "2 accuracy points" ]
[ [ "For supertagging, we observe that the baseline cross entropy trained model improves its predictions with beam search decoding compared to greedy decoding by 2 accuracy points, which suggests that beam search is already helpful for this task, even without search-aware training. Both the optimization schemes proposed in this paper improve upon the baseline with soft direct loss optimization ( INLINEFORM0 ), performing better than the approximate max-margin approach." ] ]
242f96142116cf9ff763e97aecd54e22cb1c8b5a
Do they provide a framework for building a sub-differentiable for any final loss metric?
[ "Yes" ]
[ [ "We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2", "Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0" ] ]
fcd0bd2db39898ee4f444ae970b80ea4d1d9b054
Do they compare partially complete sequences (created during steps of beam search) to gold/target sequences?
[ "Yes" ]
[ [ "However, to reduce the gap between the training procedure and test procedure, we also experimented with soft beam search decoding. This decoding approach closely follows Algorithm SECREF7 , but along with soft back pointers, we also compute hard back pointers at each time step. After computing all the relevant quantities like model score, loss etc., we follow the hard backpointers to obtain the best sequence INLINEFORM0 . This is very different from hard beam decoding because at each time step, the selection decisions are made via our soft continuous relaxation which influences the scores, LSTM hidden states and input embeddings at subsequent time-steps. The hard backpointers are essentially the MAP estimate of the soft backpointers at each step. With small, finite INLINEFORM1 , we observe differences between soft beam search and hard beam search decoding in our experiments." ] ]
5cc937c2dcb8fd4683cb2298d047f27a05e16d43
Which loss metrics do they try in their new training procedure evaluated on the output of beam search?
[ " continuous relaxation to top-k-argmax" ]
[ [ "Hence, the continuous relaxation to top-k-argmax operation can be simply implemented by iteratively using the max operation which is continuous and allows for gradient flow during backpropagation. As INLINEFORM0 , each INLINEFORM1 vector converges to hard index pairs representing hard backpointers and successor candidates described in Algorithm SECREF1 . For finite INLINEFORM2 , we introduce a notion of a soft backpointer, represented as a vector INLINEFORM3 in the INLINEFORM4 -probability simplex, which represents the contribution of each beam element from the previous time step to a beam element at current time step. This is obtained by a row-wise sum over INLINEFORM5 to get INLINEFORM6 values representing soft backpointers." ] ]
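The relaxation above replaces the hard top-k-argmax inside beam search with an iterated continuous max so gradients can flow through the decoding procedure. A heavily simplified sketch of that idea, using a peaked softmax as the soft argmax and soft suppression of already-selected scores; this illustrates the principle only and is not the paper's exact operator:

```python
import torch

def soft_top_k(scores, k, alpha=10.0, big=1e3):
    """Return k soft one-hot selection vectors over `scores` (1-D tensor).

    As alpha grows, each selection approaches a hard argmax, and subtracting
    `big * p` softly removes the chosen element before the next iteration.
    """
    remaining = scores.clone()
    selections = []
    for _ in range(k):
        p = torch.softmax(alpha * remaining, dim=0)   # soft argmax over what is left
        selections.append(p)
        remaining = remaining - big * p               # softly mask the selected position
    return torch.stack(selections)                    # (k, num_candidates)

scores = torch.tensor([0.1, 2.0, 0.5, 1.5], requires_grad=True)
sel = soft_top_k(scores, k=2)
print(sel.argmax(dim=1))        # tends to [1, 3]: the two highest-scoring candidates
sel.sum().backward()            # gradients flow back to the scores
```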
37016cc987d33be5ab877013ef26ec7239b48bd9
How are different domains weighted in WDIRL?
[ "To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$" ]
[ [ "According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\\rm {P}(\\rm {Y})$ to DIRL. The key idea of this framework is to first align $\\rm {P}(\\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\\rm {P}(\\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\\rm {P}(\\rm {Y})$ during the alignment of $\\rm {P}(\\rm {X}|\\rm {Y})$. In the second step, it uses $\\mathbf {w}$ to reweigh the supervised classifier $\\rm {P}_S(\\rm {Y}|\\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively.", "The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\\rm {P}(\\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$. Specifically, we hope that:", "and we denote $\\mathbf {w}^*$ the value of $\\mathbf {w}$ that makes this equation hold. We shall see that when $\\mathbf {w}=\\mathbf {w}^*$, DIRL is to align $\\rm {P}_S(G(\\rm {X})|\\rm {Y})$ with $\\rm {P}_T(G(\\rm {X})|\\rm {Y})$ without the shift of $\\rm {P}(\\rm {Y})$. According to our analysis, we know that due to the shift of $\\rm {P}(\\rm {Y})$, there is a conflict between the training objects of the supervised learning $\\mathcal {L}_{sup}$ and the domain-invariant learning $\\mathcal {L}_{inv}$. And the conflict degree will decrease as $\\rm {P}_S(\\rm {Y})$ getting close to $\\rm {P}_T(\\rm {Y})$. Therefore, during model training, $\\mathbf {w}$ is expected to be optimized toward $\\mathbf {w}^*$ since it will make $\\rm {P}(\\rm {Y})$ of the weighted source domain close to $\\rm {P}_T(\\rm {Y})$, so as to solve the conflict." ] ]
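The weighted DIRL step above reweighs source-domain examples by a trainable, strictly positive per-class weight w. A minimal PyTorch sketch of applying such class weights to the supervised source loss; how w enters the distribution-alignment term is omitted, and the classifier, dimensions, and softplus parameterization are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 2, 50
classifier = nn.Linear(feat_dim, num_classes)

# trainable, positive class weights: w = softplus(raw_w) guarantees w_i > 0
raw_w = nn.Parameter(torch.zeros(num_classes))

x_src = torch.randn(16, feat_dim)                  # toy source-domain features
y_src = torch.randint(0, num_classes, (16,))       # toy source-domain labels

w = F.softplus(raw_w)
per_example = F.cross_entropy(classifier(x_src), y_src, reduction="none")
weighted_loss = (w[y_src] * per_example).mean()    # reweigh source examples by class
weighted_loss.backward()                           # gradients reach both classifier and raw_w
```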
b3dc6d95d1570ad9a58274539ff1def12df8f474
How is DIRL evaluated?
[ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from." ]
[ [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 to our proposed solution, respectively. To performe the study, we carried out performance comparison between the following models:", "SO: the source-only model trained using source domain labeled data without any domain adaptation.", "CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{CMD}_K$.", "DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$.", "$\\text{CMD}^\\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.", "$\\text{DANN}^\\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.", "$\\text{CMD}^{\\dagger \\dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.", "$\\text{DANN}^{\\dagger \\dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.", "$\\text{CMD}^{*}$: a variant of $\\text{CMD}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ (estimate from target labeled data) to $\\mathbf {w}$ and fixes this value during model training.", "$\\text{DANN}^{*}$: a variant of $\\text{DANN}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ to $\\mathbf {w}$ and fixes this value during model training.", "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.", "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study.", "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. 
Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$.", "Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\\text{CMD}^{\\dagger \\dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\\text{DANN}^{\\dagger \\dagger }$ with that of DANN and SO. Third, $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$ consistently outperformed $\\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\\text{CMD}^{\\dagger \\dagger }$ and $\\text{DANN}^{\\dagger \\dagger }$ outperforms $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\\text{Acc}(\\text{CMD})-\\text{Acc}(\\text{SO}))/\\text{Acc}(\\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\\rm {P}(\\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\\rm {P}(\\rm {Y})$ shift. In contrast, our proposed model $\\text{CMD}^{\\dagger \\dagger }$ performed robustly to the varying of $\\rm {P}(\\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\\text{CMD}^{*}$. This again verified the effectiveness of our solution." ] ]
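As a purely illustrative aside on the $\rm {P}(\rm {Y})$-shift measure quoted above (the maximum over classes of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i)$), the following minimal Python sketch shows one way to compute that degree from label lists. The function name and the toy class counts are assumptions for illustration, not part of the paper or of this dataset record.

```python
from collections import Counter

def label_shift_degree(source_labels, target_labels):
    """Max over classes of P_S(Y=i) / P_T(Y=i), mirroring the measure quoted above."""
    ps, pt = Counter(source_labels), Counter(target_labels)
    n_s, n_t = len(source_labels), len(target_labels)
    # Only classes present in both domains are compared; a class missing from the
    # target would make the ratio unbounded.
    return max((ps[c] / n_s) / (pt[c] / n_t) for c in ps if c in pt)

# Toy setup echoing the binary-class design above: source has 1,000 examples per
# class, target has 1,500 of class 1 and 500 of class 2.
src = ["1"] * 1000 + ["2"] * 1000
tgt = ["1"] * 1500 + ["2"] * 500
print(label_shift_degree(src, tgt))  # 0.5 / 0.25 = 2.0
```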
cc5d3903913fa2e841f900372ec74b0efd5e0c71
Which sentiment analysis tasks are addressed?
[ "12 binary-class classification and multi-class classification of reviews based on rating" ]
[ [ "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.", "Experiment ::: Dataset and Task Design ::: Binary-Class.", "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study.", "Experiment ::: Dataset and Task Design ::: Multi-Class.", "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$." ] ]
c95fd189985d996322193be71cf5be8858ac72b5
Which NLP areas have the highest average citations for women authors?
[ "sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation" ]
[ [ "Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP." ] ]
4a61260d6edfb0f93100d92e01cf655812243724
Which 3 NLP areas are cited the most?
[ "machine translation, statistical machine, sentiment analysis" ]
[ [] ]
5c95808cd3ee9585f05ef573b0d4a52e86d04c60
Which journal and conference are cited the most in recent years?
[ "CL Journal and EMNLP conference" ]
[ [] ]
b6f5860fc4a9a763ddc5edaf6d8df0eb52125c9e
Which 5 languages appear most frequently in AA paper titles?
[ "English, Chinese, French, Japanese and Arabic" ]
[ [] ]
7955dbd79ded8ef4ae9fc28b2edf516320c1cb55
What aspect of NLP research is examined?
[ "size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender)" ]
[ [ "We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender)." ] ]
6bff681f1f6743ef7aa6c29cc00eac26fafdabc2
Are academically younger authors cited less than older ones?
[ "Yes" ]
[ [] ]
205163715f345af1b5523da6f808e6dbf5f5dd47
How many papers are used in the experiments?
[ "44,896 articles" ]
[ [ "A. As of June 2019, AA had $\\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018." ] ]
8d989490c5392492ad66e6a5047b7d74cc719f30
What ensemble methods are used for the best model?
[ "choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer" ]
[ [ "We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer." ] ]
a7829abed2186f757a59d3da44893c0172c7012b
What hyperparameters have been tuned?
[ "number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks" ]
[ [ "We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks. Each part of the project is discussed further in the subsections below." ] ]
707db46938d16647bf4b6407b2da84b5c7ab4a81
By how much did F1 improve after adding skip connections?
[ "Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improes F1 from 74.34 to 74.95 " ]
[ [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8)." ] ]
d72548fa4d29115252605d5abe1561a3ef2430ca
Where do they retrieve neighbor n-grams from in their approach?
[ "represent every sentence by their reduced n-gram set" ]
[ [ "Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.", "We represent every sentence by their reduced n-gram set. For every n-gram in INLINEFORM0 , we find the closest n-gram in the training set using the IDF similarity defined above. For each retrieved n-gram we find the corresponding sentence (In case an n-gram is present in multiple sentences, we choose one randomly). The set of neighbors of INLINEFORM1 is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in INLINEFORM2 ." ] ]
24d06808fa3b903140659ee5a471fdfa86279980
Which systems do they compare their results against?
[ "standard Transformer Base model" ]
[ [ "We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets." ] ]
dba3d05c495e2c8ca476139e78f65059db2eb72d
Does their combination of a non-parametric retrieval and neural network get trained end-to-end?
[ "Yes" ]
[ [ "The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models." ] ]
0062ad4aed09a57d0ece6aa4b873f4a4bf65d165
Which similarity measure do they use in their n-gram retrieval approach?
[ "we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0" ]
[ [ "Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, INLINEFORM0 , as INLINEFORM1 , where INLINEFORM2 is the number of sentence pairs in training corpus and INLINEFORM3 is the number of sentences INLINEFORM4 occurs in. Let any two sentence pairs in the corpus be INLINEFORM5 and INLINEFORM6 . Then we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0", "Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval." ] ]
67a28fe78f07c1383176b89e78630ee191cf15db
Where is MVCNN pretrained?
[ "on the unlabeled data of each task" ]
[ [ "Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems." ] ]
d8de12f5eff64d0e9c9e88f6ebdabc4cdf042c22
How much gain does the model achieve with pretraining MVCNN?
[ "0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj" ]
[ [] ]
9cba2ee1f8e1560e48b3099d0d8cf6c854ddea2e
What are the effects of extracting features of multigranular phrases?
[ "The system benefits from filters of each size., features of multigranular phrases are extracted with variable-size convolution filters." ]
[ [ "The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks." ] ]
7975c3e1f61344e3da3b38bb12e1ac6dcb153a18
What are the effects of diverse versions of pretrained word embeddings?
[ "each embedding version is crucial for good performance" ]
[ [ "In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare fairly different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decision on which embeddings to use for their own tasks." ] ]
eddb18109495976123e10f9c6946a256a55074bd
How does MVCNN compare to CNN?
[ "MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. " ]
[ [ "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks." ] ]
ea6764a362bac95fb99969e9f8c773a61afd8f39
What is the highest accuracy score achieved?
[ "82.0%" ]
[ [] ]
62c4c8b46982c3fcf5d7c78cd24113635e2d7010
What is the size range of the datasets?
[ "Unanswerable" ]
[ [] ]
e9cfe3f15735e2b0d5c59a54c9940ed1d00401a2
Does the paper report F1-scores for the age and language variety tasks?
[ "No" ]
[ [] ]
52ed2eb6f4d1f74ebdc4dcddcae201786d4c0463
Are the models compared to some baseline models?
[ "Yes" ]
[ [ "Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. For each network, the network contains a layer unidirectional GRU, with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\\mu =0$, and $\\sigma =1$, i.e., $W \\sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For the training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that performs highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtains best results with 2 epochs." ] ]
2c576072e494ab5598667cd6b40bc97fdd7d92d7
What in-house data are employed?
[ "we manually label an in-house dataset of 1,100 users with gender tags, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task" ]
[ [ "To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender." ] ]
8602160e98e4b2c9c702440da395df5261f55b1f
What are the three datasets used in the paper?
[ "Data released for APDA shared task contains 3 datasets." ]
[ [ "For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared tasks set up, the test set is distributed without labels and participants were expected to submit their predictions on test. The shared task predictions are expected by organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 720,00 tweets posted by 720 users. For our experiments, we split the training data released by organizers into 90% TRAIN set (202,500 tweets from 2,025 users) and 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}." ] ]
57fdb0f6cd91b64a000630ecb711550941283091
What are the potential risks of this approach?
[ "Unanswerable" ]
[ [] ]
3aa43a0d543b88d40e4f3500c7471e263515be40
What elements of natural language processing are proposed to analyze qualitative data?
[ "translated the responses in multiple languages into English using machine translation, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed, remaining words were stemmed to remove plural forms of nouns or conjugations of verbs" ]
[ [ "The UCL team had access to micro-narratives, as well as context specific meta-data such as demographic information and project details. For a cross-national comparison for policy-makers, the team translated the responses in multiple languages into English using machine translation, in this case Translate API (Yandex Technologies). As a pre-processing step, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed. The remaining words were stemmed to remove plural forms of nouns or conjugations of verbs." ] ]
d82ec1003a3db7370994c7522590f7e5151b1f33
How does the method measure the impact of the event on market prices?
[ "We collected the historical 52 week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 . " ]
[ [ "We also did a qualitative study on the Starbucks (SBUX) stock movement during this event. Figure FIGREF12 is the daily percentage change of SBUX and NASDAQ index between April 11th and April 20th. SBUX did not follow the upward trend of the whole market before April 17th, and then its change on April 20th, INLINEFORM0 , is quite significant from historical norms. We collected the historical 52 week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 . The INLINEFORM3 down almost equals to two standard deviations below the mean. Our observation is that plausibly, there was a negative aftereffect from the event of the notable decline in Starbucks stock price due to the major public relations crisis." ] ]
58f08d38bbcffb2dd9d660faa8026718d390d64b
How is sentiment polarity measured?
[ "For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets" ]
[ [ "Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets." ] ]
89e1e0dc5d15a05f8740f471e1cb3ddd296b8942
Which part of the joke is more important in humor?
[ "the punchline of the joke " ]
[ [ "In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence." ] ]
2815bac42db32d8f988b380fed997af31601f129
What is the improvement in accuracy for Short Jokes in relation to other types of jokes?
[ "It had the highest accuracy comparing to all datasets 0.986% and It had the highest improvement comparing to previous methods on the same dataset by 8%" ]
[ [ "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. We also note that the general human classification found 66.3% of the jokes to be humorous.", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features." ] ]
de03e8cc1ceaf2108383114460219bf46e00423c
What kind of humor have they evaluated?
[ "a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread, These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. " ]
[ [ "The next question then is, what makes a joke humorous? Although humor is a universal construct, there is a wide variety between what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting." ] ]
8a276dfe748f07e810b3944f4f324eaf27e4a52c
How do they evaluate whether a joke is humorous or not?
[ "The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ]
[ [ "Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.", "Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ] ]
0716b481b78d80b012bca17c897c62efbe7f3731
Do they report results only on English data?
[ "Yes" ]
[ [ "We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).", "Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax." ] ]
fed0785d24375ebbde51fb0503b93f14da1d8583
Do the authors have a hypothesis as to why morphological agreement is hardly learned by any model?
[ "These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions." ]
[ [ "The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean INLINEFORM0 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.", "The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions." ] ]
675d7c48541b6368df135f71f9fc13a398f0c8c6
Which models are best for learning long-distance movement?
[ "the transformer models" ]
[ [ "We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations." ] ]
868c69c8f623e30b96df5b5c8336070994469f60
Where does the data in CoLA come from?
[ " CoLA contains example sentences from linguistics publications labeled by experts" ]
[ [ "The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability." ] ]
f809fd0d3acfaccbe6c8abb4a9d951a83eec9a32
How is CoLA grammatically annotated?
[ "labeled by experts" ]
[ [ "The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability." ] ]
c4a6b727769328333bb48d59d3fc4036a084875d
What baselines did they compare Entity-GCN to?
[ "Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN" ]
[ [ "In this experiment, we compare our Enitity-GCN against recent prior work on the same task. We present test and development results (when present) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set." ] ]
bbeb74731b9ac7f61e2d74a7d9ea74caa85e62ef
How many documents at a time can Entity-GCN handle?
[ "Unanswerable" ]
[ [] ]
93e8ce62361b9f687d5200d2e26015723721a90f
Did they use a relation extraction method to construct the edges in the graph?
[ "No" ]
[ [ "To each node $v_i$ , we associate a continuous annotation $\\mathbf {x}_i \\in \\mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section \"Node annotations\" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph." ] ]
d05d667822cb49cefd03c24a97721f1fe9dc0f4c
How did they get relations between mentions?
[ "Assign a value to the relation based on whether mentions occur in the same document, if mentions are identical, or if mentions are in the same coreference chain." ]
[ [ "To each node $v_i$ , we associate a continuous annotation $\\mathbf {x}_i \\in \\mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section \"Node annotations\" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph." ] ]