Dataset columns (per row):
question_id: string, fixed length 40
question: string, length 4 to 171
answer: sequence (list of answer strings)
evidence: sequence (list of lists of evidence passages)
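The rows below follow this schema. As a minimal sketch of how they can be iterated, assuming the rows are exported as a JSON Lines file (the file name "qa_rows.jsonl" is hypothetical):

```python
import json

# Minimal sketch, assuming each line is a JSON object with the four fields above.
with open("qa_rows.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        print(row["question_id"], "|", row["question"])
        print("  answer:", row["answer"])          # list of answer strings
        print("  evidence groups:", len(row["evidence"]))  # nested lists of passages
```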
8829f738bcdf05b615072724223dbd82463e5de6
Does the paper report translation accuracy for an automatic translation model for Tunisian to Arabish words?
[ "Yes" ]
[ [ "The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data. This result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. This can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous and from the third. Adding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up." ] ]
4b624064332072102ea674254d7098038edad572
Did participants behave unexpectedly?
[ "No" ]
[ [ "Experiment 1 directly tested the hypothesis that speakers increase their specificity in contexts with asymmetry in visual access. We found that speakers are not only context-sensitive in choosing referring expressions that distinguish target from distractors in a shared context, but are occlusion-sensitive, adaptively compensating for uncertainty. Critically, this resulted in systematic differences in behavior across the occlusion conditions that are difficult to explain under an egocentric theory: in the presence of occlusions, speakers were spontaneously willing to spend additional time and keystrokes to give further information beyond what they produce in the corresponding unoccluded contexts, even though that information is equally redundant given the visible objects in their display.", "These results strongly suggest that the speaker's informativity influences listener accuracy. In support of this hypothesis, we found a strong negative correlation between informativity and error rates across items and conditions: listeners make fewer errors when utterances are a better fit for the target relative to the distractor ( $\\rho = -0.81$ , bootstrapped 95% CI $= [-0.9, -0.7]$ ; Fig. 6 B). This result suggests that listener behavior is driven by an expectation of speaker informativity: listeners interpret utterances proportionally to how well they fit objects in context.", "Are human adults expert mind-readers, or fundamentally egocentric? The longstanding debate over the role of theory of mind in communication has largely centered around whether listeners (or speakers) with private information consider their partner's perspective BIBREF30 , BIBREF16 . Our work presents a more nuanced picture of how a speaker and a listener use theory of mind to modulate their pragmatic expectations. The Gricean cooperative principle emphasizes a natural division of labor in how the joint effort of being cooperative is shared BIBREF4 , BIBREF60 . It can be asymmetric when one partner is expected to, and able to, take on more complex reasoning than the other, in the form of visual perspective-taking, pragmatic inference, or avoiding further exchanges of clarification and repair. One such case is when the speaker has uncertainty over what the listener can see, as in the director-matcher task. Our Rational Speech Act (RSA) formalization of cooperative reasoning in this context predicts that speakers (directors) naturally increase the informativity of their referring expressions to hedge against the increased risk of misunderstanding; Exp. 1 presents direct evidence in support of this hypothesis.", "Importantly, when the director (speaker) is expected to be appropriately informative, communication can be successful even when the matcher (listener) does not reciprocate the effort. If visual perspective-taking is effortful and cognitively demanding BIBREF39 , the matcher will actually minimize joint effort by not taking the director's visual perspective. This suggests a less egocentric explanation of when and why listeners neglect the speaker's visual perspective; they do so when they expect the speaker to disambiguate referents sufficiently. While adaptive in most natural communicative contexts, such neglect might backfire and lead to errors when the speaker (inexplicably) violates this expectation. 
From this point of view, the “failure” of listener theory of mind in these tasks is not really a failure; instead, it suggests that both speakers and listeners may use theory of mind to know when (and how much) they should expect others to be cooperative and informative, and subsequently allocate their resources accordingly BIBREF36 . Exp. 2 is consistent with this hypothesis; when directors used underinformative scripted instructions (taken from prior work), listeners made significantly more errors than when speakers were allowed to provide referring expressions at their natural level of informativity, and speaker informativeness strongly modulated listener error rates." ] ]
65ba7304838eb960e3b3de7c8a367d2c2cd64c54
Was this experiment done in a lab?
[ "No" ]
[ [ "We recruited 102 pairs of participants from Amazon Mechanical Turk and randomly assigned speaker and listener roles. After we removed 7 games that disconnected part-way through and 12 additional games according to our pre-registered exclusion criteria (due to being non-native English speakers, reporting confusion about the instructions, or clearly violating the instructions), we were left with a sample of 83 full games." ] ]
a60030cfd95d0c10b1f5116c594d50cb96c87ae6
How long is new model trained on 3400 hours of data?
[ "Unanswerable" ]
[ [] ]
efe49829725cfe54de01405c76149a4fe4d18747
How much does HAS-QA improve over baselines?
[ "For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. , For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score." ]
[ [ "1) HAS-QA outperforms traditional RC baselines with a large gap, such as GA, BiDAF, AQA listed in the first part. For example, in QuasarT, it improves 16.8% in EM score and 20.4% in F1 score. As RC task is just a special case of OpenQA task. Some experiments on standard SQuAD dataset(dev-set) BIBREF9 show that HAS-QA yields EM/F1:0.719/0.798, which is comparable with the best released single model Reinforced Mnemonic Reader BIBREF25 in the leaderboard (dev-set) EM/F1:0.721/0.816. Our performance is slightly worse because Reinforced Mnemonic Reader directly use the accurate answer span, while we use multiple distantly supervised answer spans. That may introduce noises in the setting of SQuAD, since only one span is accurate.", "2) HAS-QA outperforms recent OpenQA baselines, such as DrQA, R ${}^3$ and Shared-Norm listed in the second part. For example, in QuasarT, it improves 4.6% in EM score and 3.5% in F1 score." ] ]
3d49b678ff6b125ffe7fb614af3e187da65c6f65
What does "explicitly leverages their probabilistic correlation to guide the training process of both models" mean?
[ "The framework jointly learns parametrized QA and QG models subject to the constraint in equation 2. In more detail, they minimize QA and QG loss functions, with a third dual loss for regularization." ]
[ [ "Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$ . Given a question-answer pair $\\langle q, a \\rangle $ , the joint probability $P(q, a)$ can be computed in two equivalent ways.", "$$P(q, a) = P(a) P(q|a) = P(q)P(a|q)$$ (Eq. 1)", "The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model. Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.", "Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks. There might be different ways of exploiting the duality of QA and QG. In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks. Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\\theta _{qa}$ and the QG model parameterized by $\\theta _{qg}$ by minimizing their loss functions subject to the following constraint.", "$$P_a(a) P(q|a;\\theta _{qg}) = P_q(q)P(a|q;\\theta _{qa})$$ (Eq. 3)", "$P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.", "We describe the proposed algorithm in this subsection. Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG. Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1.", "The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\\theta _{qa}), label)$ , where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not. Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero. The details about the QA model will be presented in the next section.", "For each correct question-answer pair, the QG specific objective is to minimize the following loss function,", "$$l_{qg}(q, a) = -log P_{qg}(q|a;\\theta _{qg})$$ (Eq. 6)", "where $a$ is the correct answer of $q$ . The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer. The QG model will be described in the following section.", "The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation 3 . Specifically, given a correct $\\langle q, a \\rangle $ pair, we would like to minimize the following loss function,", "$$ \\nonumber l_{dual}(a,q;\\theta _{qa}, \\theta _{qg}) &= [logP_a(a) + log P(q|a;\\theta _{qg}) \\\\ & - logP_q(q) - logP(a|q;\\theta _{qa})]^2$$ (Eq. 9)", "where $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language model. $P(a|q;\\theta _{qg})$ could also be easily calculated with the markov chain rule: $P(q|a;\\theta _{qg}) = \\prod _{t=1}^{|q|} P(q_t|q_{<t}, a;\\theta _{qg})$ , where the function $P(q_t|q_{<t}, a;\\theta _{qg})$ is the same with the decoder of the QG model (detailed in the following section)." ] ]
b686e10a725254695821e330a277c900792db69f
How does this compare to contextual embedding methods?
[ " represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'." ]
[ [ "In this paper, we propose to represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'. It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words, and for optimal performance on many predictive tasks." ] ]
40f87db3a8d1ac49b888ce3358200f7d52903ce7
Does the new system utilize pre-extracted bounding boxes and/or features?
[ "Yes" ]
[ [ "In this section, we elaborate our model consisting of four parts: (a) image feature pre-selection part which models the tendency where people focus to ask questions, (b) question encoding part which encodes the question words as a condensed semantic embedding, (c) attention-based feature fusion part performs second selection on image features and (d) answer generation part which gives the answer output.", "We propose to perform saliency-like pre-selection operation to alleviate the problems and model the RoI patterns. The image is first divided into $g\\times g$ grids as illustrated in Figure. 2 . Taking $m\\times m$ grids as a region, with $s$ grids as the stride, we obtain $n\\times n$ regions, where $n=\\left\\lfloor \\frac{g-m}{s}\\right\\rfloor +1$ . We then feed the regions to a pre-trained ResNet BIBREF24 deep convolutional neural network to produce $n\\times n\\times d_I$ -dimensional region features, where $d_I$ is the dimension of feature from the layer before the last fully-connected layer." ] ]
36383971a852d1542e720d3ea1f5adeae0dbff18
To which previous papers does this work compare its results?
[ "holistic, TraAtt, RegAtt, ConAtt, ConAtt, iBOWIMG , VQA, VQA, WTL , NMN , SAN , AMA , FDA , D-NMN, DMN+" ]
[ [ "holistic: The baseline model which maps the holistic image feature and LSTM-encoded question feature to a common space and perform element-wise multiplication between them.", "TraAtt: The traditional attention model, implementation of WTL model BIBREF9 using the same $3\\times 3$ regions in SalAtt model.", "RegAtt: The region attention model which employs our novel attention method, same as the SalAtt model but without region pre-selection.", "ConAtt: The convolutional region pre-selection attention model which replaces the BiLSTM in SalAtt model with a weight-sharing linear mapping, implemented by a convolutional layer.", "Besides, we also compare our SalAtt model with the popular baseline models i.e. iBOWIMG BIBREF4 , VQA BIBREF1 , and the state-of-the-art attention-based models i.e. WTL BIBREF9 , NMN BIBREF21 , SAN BIBREF14 , AMA BIBREF33 , FDA BIBREF34 , D-NMN BIBREF35 , DMN+ BIBREF8 on two tasks of COCO-VQA." ] ]
1d941d390c0ee365aa7d7c58963e646eea74cbd6
Do they consider other tasks?
[ "No" ]
[ [] ]
3ee976add83e37339715d4ae9d8aa328dd54d052
What were the model's results on flood detection?
[ "Queensland flood which provided 96% accuracy, Alberta flood with the same configuration of train-test split which provided 95% accuracy" ]
[ [ "We have used the following hardware for the experimentation: Windows 10 Education desktop consisting of intel core i-7 processor and 16GB RAM. We have used python 3.6 and Google colab notebook to execute our model and obtained the results discussed below: The train and test data have divided into 70-30 ratio and we got these results as shown in Table TABREF17 for the individual dataset and the combination of both. The pre-trained network was already trained and we used the target data Queensland flood which provided 96% accuracy with 0.118 Test loss in only 11 seconds provided we used only 70% of training labeled data. The second target data is Alberta flood with the same configuration of train-test split which provided 95% accuracy with 0.118 Test loss in just 19 seconds. As we can see it takes very less time to work with 20,000 of tweets (combined) and at times of emergency it can handle a huge amount of unlabeled data to classify into meaningful categories in minutes." ] ]
ef04182b6ae73a83d52cb694cdf4d414c81bf1dc
What dataset did they use?
[ " disaster data from BIBREF5, Queensland flood in Queensland, Australia and Alberta flood in Alberta, Canada" ]
[ [ "Data Collection: We are using the disaster data from BIBREF5. It contains various dataset including the CrisiLexT6 dataset which contains six crisis events related to English tweets in 2012 and 2013, labeled by relatedness (on-topic and off-topic) of respective crisis. Each crisis event tweets contain almost 10,000 labeled tweets but we are only focused on flood-related tweets thus, we experimented with only two flood event i.e. Queensland flood in Queensland, Australia and Alberta flood in Alberta, Canada and relabeled all on-topic tweets as Related and Off-topic as Unrelated for implicit class labels understanding in this case. The data collection process and duration of CrisisLex data is described in BIBREF5 details." ] ]
decb07f9be715de024236e50dc7011a132363480
What exactly is new about this stochastic gradient descent algorithm?
[ "CNN model can be trained in a purely online setting. We first initialize the model parameters $\\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch.\n\nAs a new batch of labeled tweets $B_t= \\lbrace \\mathbf {s}_1 \\ldots \\mathbf {s}_n \\rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\\prime }(\\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model. " ]
[ [ "DNNs are usually trained with first-order online methods like stochastic gradient descent (SGD). This method yields a crucial advantage in crisis situations, where retraining the whole model each time a small batch of labeled data arrives is impractical. Algorithm \"Online Learning\" demonstrates how our CNN model can be trained in a purely online setting. We first initialize the model parameters $\\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch.", "As a new batch of labeled tweets $B_t= \\lbrace \\mathbf {s}_1 \\ldots \\mathbf {s}_n \\rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\\prime }(\\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model. Choosing a proper learning rate $\\eta _t$ can be difficult in practice. Several adaptive methods such as ADADELTA BIBREF6 , ADAM BIBREF7 , etc., have been proposed to overcome this issue. In our model, we use ADADELTA." ] ]
63eb31f613a41a3ddd86f599e743ed10e1cd07ba
What codemixed language pairs are evaluated?
[ "Hindi-English" ]
[ [ "We use the Universal Dependencies' Hindi-English codemixed data set BIBREF9 to test the model's ability to label code-mixed data. This dataset is based on code-switching tweets of Hindi and English multilingual speakers. We use the Devanagari script provided by the data set as input tokens." ] ]
d2804ac0f068e9c498e33582af9c66906b26cac3
How do they compress the model?
[ "we extract sentences from Wikipedia in languages for which public multilingual is pretrained. For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0" ]
[ [ "For model distillation BIBREF6 , we extract sentences from Wikipedia in languages for which public multilingual is pretrained. For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 , BIBREF1 and compute cross-entropy loss for each wordpiece: INLINEFORM0", "where INLINEFORM0 is the cross-entropy function, INLINEFORM1 is the softmax function, INLINEFORM2 is the BERT model's logit of the current wordpiece, INLINEFORM3 is the small BERT model's logits and INLINEFORM4 is a temperature hyperparameter, explained in Section SECREF11 .", "To train the distilled multilingual model mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we finetune the student model on the labeled data the teacher is trained on." ] ]
e24fbcc8be922c43f6b6037cdf2bfd4c0a926c08
What is the multilingual baseline?
[ " the Meta-LSTM BIBREF0" ]
[ [ "We discuss two core models for addressing sequence labeling problems and describe, for each, training them in a single-model multilingual setting: (1) the Meta-LSTM BIBREF0 , an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1 ." ] ]
e8c0fabae0d29491471e37dec34f652910302928
Which features do they use?
[ "beyond localized features and have access to the entire sequence" ]
[ [ "In this paper, we present the problem of DAR from the viewpoint of extending richer CRF-attentive structural dependencies along with neural network without abandoning end-to-end training. For simplicity, we call the framework as CRF-ASN (CRF-Attentive Structured Network). Specifically, we propose the hierarchical semantic inference integrated with memory mechanism on the utterance modeling. The memory mechanism is adopted in order to enable the model to look beyond localized features and have access to the entire sequence. The hierarchical semantic modeling learns different levels of granularity including word level, utterance level and conversation level. We then develop internal structured attention network on the linear-chain conditional random field (CRF) to specify structural dependencies in a soft manner. This approach generalizes the soft-selection attention on the structural CRF dependencies and takes into account the contextual influence on the nearing utterances. It is notably that the whole process is differentiable thus can be trained in an end-to-end manner." ] ]
cafa6103e609acaf08274a2f6d8686475c6b8723
By how much do they outperform state-of-the-art solutions on SWDA and MRDA?
[ "improves the DAR accuracy over Bi-LSTM-CRF by 2.1% and 0.8% on SwDA and MRDA respectively" ]
[ [ "The results show that our proposed model CRF-ASN obviously outperforms the state-of-the-art baselines on both SwDA and MRDA datasets. Numerically, Our model improves the DAR accuracy over Bi-LSTM-CRF by 2.1% and 0.8% on SwDA and MRDA respectively. It is remarkable that our CRF-ASN method is nearly close to the human annotators' performance on SwDA, which is very convincing to prove the superiority of our model." ] ]
7f2fd7ab968de720082133c42c2052d351589a67
What type and size of word embeddings were used?
[ "word2vec, 200 as the dimension of the obtained word vectors" ]
[ [ "We used the public tool, word2vec, released by Mikolov-2013 to obtain the word embeddings. Their neural network approach is similar to the feed-forward neural networks BIBREF5 , BIBREF6 . To be more precise, the previous words to the current word are encoded in the input layer and then projected to the projection layer with a shared projection matrix. After that, the projection is given to the non-linear hidden layer and then the output is given to softmax in order to receive a probability distribution over all the words in the vocabulary. However, as suggested by Mikolov-2013, removing the non-linear hidden layer and making the projection layer shared by all words is much faster, which allowed us to use a larger unlabeled corpus and obtain better word embeddings.", "Among the methods presented in Mikolov-2013, we used the continuous Skip-gram model to obtain semantic representations of Turkish words. The Skip-gram model uses the current word as an input to the projection layer with a log-linear classifier and attempts to predict the representation of neighboring words within a certain range. In the Skip-gram model architecture we used, we have chosen 200 as the dimension of the obtained word vectors. The range of surrounding words is chosen to be 5, so that we will predict the distributed representations of the previous 2 words and the next 2 words using the current word. Our vector size and range decisions are aligned with the choices made in the previous study for Turkish NER by Demir-2014. The Skip-gram model architecture we used is shown in Figure FIGREF3 ." ] ]
369b0a481a4b75439ade0ec4f12b44414c4e5164
What data was used to build the word embeddings?
[ "Turkish news-web corpus, TS TweetS by Sezer-2013 and 20M Turkish Tweets by Bolat and Amasyalı" ]
[ [ "In the unsupervised stage, we used two types of unlabeled data to obtain Turkish word embeddings. The first one is a Turkish news-web corpus containing 423M words and 491M tokens, namely the BOUN Web Corpus BIBREF9 , BIBREF10 . The second one is composed of 21M Turkish tweets with 241M words and 293M tokens, where we combined 1M tweets from TS TweetS by Sezer-2013 and 20M Turkish Tweets by Bolat and Amasyalı." ] ]
e97545f4a5e7bc96515e60f2f9b23d8023d1eed9
How are templates discovered from training data?
[ "For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates." ]
[ [ "Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization.", "This module starts with a standard information retrieval library to retrieve a small set of candidates for fine-grained filtering as cao2018retrieve. To do that, all non-alphabetic characters (e.g., dates) are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates.", "The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles. Therefore, the Fast Rerank module is developed to identify a best template from the candidates based on their deep semantic relevance with the source article. We regard the candidate with highest relevance as the template. As illustrated in Figure FIGREF6 , this module consists of a Convolution Encoder Block, a Similarity Matrix and a Pooling Layer." ] ]
aaed6e30cf16727df0075b364873df2a4ec7605b
What is WNGT 2019 shared task?
[ "efficiency task aimed at reducing the number of parameters while minimizing drop in performance" ]
[ [ "The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance." ] ]
66f0dee89f084fe0565539a73f5bbe65f3677814
Do they use pretrained word representations in their neural network models?
[ "No" ]
[ [ "Grammatical error correction (GEC) is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most of the grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. The standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. They are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0 , BIBREF1 , BIBREF2 . In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors which constitute a large class of learners' errors.", "We model our GEC system based on the phrase-based SMT approach. However, traditional phrase-based SMT systems treat words and phrases as discrete entities. We take advantage of continuous space representation by adding two neural network components that have been shown to improve SMT systems BIBREF3 , BIBREF4 . These neural networks are able to capture non-linear relationships between source and target sentences and can encode contextual information more effectively. Our experiments show that the addition of these two neural networks leads to significant improvements over a strong baseline and outperforms the current state of the art.", "To train NNJM, we use the publicly available implementation, Neural Probabilistic Language Model (NPLM) BIBREF14 . The latest version of Moses can incorporate NNJM trained using NPLM as a feature while decoding. Similar to NNGLM, we use the parallel text used for training the translation model in order to train NNJM. We use a source context window size of 5 and a target context window size of 4. We select a source context vocabulary of 16,000 most frequent words from the source side. The target context vocabulary and output vocabulary is set to the 32,000 most frequent words. We use a single hidden layer to speed up training and decoding with an input embedding dimension of 192 and 512 hidden layer nodes. We use rectified linear units (ReLU) as the activation function. We train NNJM with noise contrastive estimation with 100 noise samples per training instance, which are obtained from a unigram distribution. The neural network is trained for 30 epochs using stochastic gradient descent optimization with a mini-batch size of 128 and learning rate of 0.1." ] ]
8f882f414d7ea12077930451ae77c6e5f093adbc
How do they combine the two proposed neural network models?
[ "ncorporating NNGLM and NNJM both independently and jointly into, baseline system" ]
[ [ "Recently, continuous space representations of words and phrases have been incorporated into SMT systems via neural networks. Specifically, addition of monolingual neural network language models BIBREF13 , BIBREF14 , neural network joint models (NNJM) BIBREF4 , and neural network global lexicon models (NNGLM) BIBREF3 have been shown to be useful for SMT. Neural networks have been previously used for GEC as a language model feature in the classification approach BIBREF15 and as a classifier for article error correction BIBREF16 . Recently, a neural machine translation approach has been proposed for GEC BIBREF17 . This method uses a recurrent neural network to perform sequence-to-sequence mapping from erroneous to well-formed sentences. Additionally, it relies on a post-processing step based on statistical word-based translation models to replace out-of-vocabulary words. In this paper, we investigate the effectiveness of two neural network models, NNGLM and NNJM, in SMT-based GEC. To the best of our knowledge, there is no prior work that uses these two neural network models for SMT-based GEC.", "Grammatical error correction (GEC) is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most of the grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. The standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. They are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0 , BIBREF1 , BIBREF2 . In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors which constitute a large class of learners' errors.", "We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system. The results of our experiments are described in Section SECREF23 . The evaluation is performed similar to the CoNLL 2014 shared task setting using the the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19 , for evaluation. We perform statistical significance test using one-tailed sign test with bootstrap resampling on 100 samples.", "On top of our baseline system described above, we incorporate the two neural network components, neural network global lexicon model (NNGLM) and neural network joint model (NNJM) as features. Both NNGLM and NNJM are trained using the parallel data used to train the translation model of our baseline system." ] ]
a49832c89a2d7f95c1fe6132902d74e4e7a3f2d0
Which dataset do they evaluate grammatical error correction on?
[ "CoNLL 2014" ]
[ [ "We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system. The results of our experiments are described in Section SECREF23 . The evaluation is performed similar to the CoNLL 2014 shared task setting using the the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19 , for evaluation. We perform statistical significance test using one-tailed sign test with bootstrap resampling on 100 samples." ] ]
a33ab5ce8497ff63ca575a80b03e0ed9c6acd273
How many users/clicks does their search engine have?
[ "Unanswerable" ]
[ [] ]
8fcbae7c3bd85034ae074fa58a35e773936edb5b
what was their baseline comparison?
[ "Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF)" ]
[ [ "To compare our neural models with the traditional approaches, we experimented with a number of existing models including: Support Vector Machine (SVM), a discriminative max-margin model; Logistic Regression (LR), a discriminative probabilistic model; and Random Forest (RF), an ensemble model of decision trees. We use the implementation from the scikit-learn toolkit BIBREF19 . All algorithms use the default value of their parameters." ] ]
cbbcafffda7107358fa5bf02409a01e17ee56bfd
Was any variation in results observed based on language typology?
[ "It is observed some variability - but not significant. Bert does not seem to gain much more syntax information than with type level information." ]
[ [ "On another note, we apply our formalization to evaluate multilingual $\\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\\%$ in all languages), it only encodes at most $5\\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.", "We know $\\textsc {bert} $ can generate text in many languages, here we assess how much does it actually know about syntax in those languages. And how much more does it know than simple type-level baselines. tab:results-full presents this results, showing how much information $\\textsc {bert} $, fastText and onehot embeddings encode about POS tagging. We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.", "Finally, when put into perspective, multilingual $\\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\\%$ additional information." ] ]
1e59263f7aa7dd5acb53c8749f627cf68683adee
Does the work explicitly study the relationship between model complexity and linguistic structure encoding?
[ "No" ]
[ [] ]
eac042734f76e787cb98ba3d0c13a916a49bdfb3
Which datasets are used in this work?
[ "GENIA corpus" ]
[ [ "There have been several workshops on biomedical natural language processing. We focus on the BioNLP Shared Tasks in recent years that had competitions on event extraction. There have been three BioNLP Shared Task competitions so far: 2009, 2011, and 2013. The BioNLP 2009 Shared Task BIBREF195 was based on the GENIA corpus BIBREF196 which contains PubMed abstracts of articles on transcription factors in human blood cells. There was a second BioNLP Shared Task competition organized in 2011 to measure the advances in approaches and associated results BIBREF197 . The third BioNLP ST was held in 2013. We discuss some notable systems from BioNLP ST 2011 and 2013." ] ]
9595bf228c9e859b0dc745e6c74070be2468d2cf
Does the training dataset provide logical form supervision?
[ "Yes" ]
[ [ "The dataset MSParS is published by NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as training set. 10% of them are used as validation set while the rest is used as test set. 3000 hard samples are selected from the test set. Metric for this dataset is the exactly matching accuracy on both full test set and hard test subset. Each sample is composed of the question, the logical form, the parameters(entity/value/type) and question type as the Table TABREF3 demonstrates." ] ]
94c5f5b1eb8414ad924c3568cedd81dc35f29c48
What is the difference between the full test set and the hard test set?
[ "3000 hard samples are selected from the test set" ]
[ [ "The dataset MSParS is published by NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as training set. 10% of them are used as validation set while the rest is used as test set. 3000 hard samples are selected from the test set. Metric for this dataset is the exactly matching accuracy on both full test set and hard test subset. Each sample is composed of the question, the logical form, the parameters(entity/value/type) and question type as the Table TABREF3 demonstrates." ] ]
ba05a53f5563b9dd51cc2db241c6e9418bc00031
How is the discriminative training formulation different from the standard ones?
[ "the best permutation is decided by $\\mathcal {J}_{\\text{SEQ}}(\\mathbf {L}_{un}^{(s^{\\prime })},\\mathbf {L}_{un}^{(r)})$ , which is the sequence discriminative criterion of taking the $s^{\\prime }$ -th permutation in $n$ -th output inference stream at utterance $u$" ]
[ [ "For the overlapped speech recognition problem, the conditional independence assumption in the output label streams is still made as in Equation ( 5 ). Then the cross-entropy based PIT can be transformed to sequence discriminative criterion based PIT as below,", "$$\\begin{split} \\mathcal {J}_{\\text{SEQ-PIT}}=\\sum _u \\min _{s^{\\prime }\\in \\mathbf {S}} \\frac{1}{N} \\sum _{n\\in [1,N]}-\\mathcal {J}_{\\text{SEQ}}(\\mathbf {L}_{un}^{(s^{\\prime })},\\mathbf {L}_{un}^{(r)}) \\end{split}$$ (Eq. 44)", "Different from Equation ( 7 ), the best permutation is decided by $\\mathcal {J}_{\\text{SEQ}}(\\mathbf {L}_{un}^{(s^{\\prime })},\\mathbf {L}_{un}^{(r)})$ , which is the sequence discriminative criterion of taking the $s^{\\prime }$ -th permutation in $n$ -th output inference stream at utterance $u$ . Similar to CE-PIT, $\\mathcal {J}_{\\text{SEQ}}$ of all the permutations are calculated and the minimum permutation is taken to do the optimization." ] ]
7bf3a7d19f17cf01f2c9fa16401ef04a3bef65d8
How are the two datasets artificially overlapped?
[ "we sort the speech segments by length, we take segments in pairs, zero-padding the shorter segment so both have the same length, These pairs are then mixed together" ]
[ [ "Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data. The overlapping procedure is similar to BIBREF13 except that we make no modification to the signal levels before mixing . After overlapping, there's 150 hours data in the training, called 150 hours dataset, and 915 utterances in the test set. After decoding, there are 1830 utterances for evaluation, and the shortest utterance in the hub5e-swb dataset is discarded. Additionally, we define a small training set, the 50 hours dataset, as a random 50 hour subset of the 150 hours dataset. Results are reported using both datasets." ] ]
20f7b359f09c37e6aaaa15c2cdbb52b031ab4809
What baseline system is used?
[ "Unanswerable" ]
[ [] ]
3efc0981e7f959d916aa8bb32ab1c347b8474ff8
What type of lexical, syntactic, semantic and polarity features are used?
[ "Our lexical features include 1-, 2-, and 3-grams in both word and character levels., number of characters and the number of words, POS tags, 300-dimensional pre-trained word embeddings from GloVe, latent semantic indexing, tweet representation by applying the Brown clustering algorithm, positive words (e.g., love), negative words (e.g., awful), positive emoji icon and negative emoji icon, boolean features that check whether or not a negation word is in a tweet" ]
[ [ "Our lexical features include 1-, 2-, and 3-grams in both word and character levels. For each type of INLINEFORM0 -grams, we utilize only the top 1,000 INLINEFORM1 -grams based on the term frequency-inverse document frequency (tf-idf) values. That is, each INLINEFORM2 -gram appearing in a tweet becomes an entry in the feature vector with the corresponding feature value tf-idf. We also use the number of characters and the number of words as features.", "We use the NLTK toolkit to tokenize and annotate part-of-speech tags (POS tags) for all tweets in the dataset. We then use all the POS tags with their corresponding tf-idf values as our syntactic features and feature values, respectively.", "Firstly, we employ 300-dimensional pre-trained word embeddings from GloVe BIBREF29 to compute a tweet embedding as the average of the embeddings of words in the tweet.", "Secondly, we apply the latent semantic indexing BIBREF30 to capture the underlying semantics of the dataset. Here, each tweet is represented as a vector of 100 dimensions.", "Thirdly, we also extract tweet representation by applying the Brown clustering algorithm BIBREF31 , BIBREF32 —a hierarchical clustering algorithm which groups the words with similar meaning and syntactical function together. Applying the Brown clustering algorithm, we obtain a set of clusters, where each word belongs to only one cluster. For example in Table TABREF13 , words that indicate the members of a family (e.g., “mum”, “dad”) or positive sentiment (e.g., “interesting”, “awesome”) are grouped into the same cluster. We run the algorithm with different number of clustering settings (i.e., 80, 100, 120) to capture multiple semantic and syntactic aspects. For each clustering setting, we use the number of tweet words in each cluster as a feature. After that, for each tweet, we concatenate the features from all the clustering settings to form a cluster-based tweet embedding.", "Motivated by the verbal irony by means of polarity contrast, such as “I really love this year's summer; weeks and weeks of awful weather”, we use the number of polarity signals appearing in a tweet as the polarity features. The signals include positive words (e.g., love), negative words (e.g., awful), positive emoji icon and negative emoji icon. We use the sentiment dictionaries provided by BIBREF33 to identify positive and negative words in a tweet. We further use boolean features that check whether or not a negation word is in a tweet (e.g., not, n't)." ] ]
10f560fe8e1c0c7dea5e308ee4cec16d07874f1d
How does nextsum work?
[ "selects the next summary sentence based not only on properties of the source text, but also on the previously selected sentences in the summary" ]
[ [ "This work proposes an extractive summarization system that focuses on capturing rich summary-internal structure. Our key idea is that since summaries in a domain often follow some predictable structure, a partial summary or set of summary sentences should help predict other summary sentences. We formalize this intuition in a model called NextSum, which selects the next summary sentence based not only on properties of the source text, but also on the previously selected sentences in the summary. An example choice is shown in Table 1 . This setup allows our model to capture summary-specific discourse and topic transitions. For example, it can learn to expand on a topic that is already mentioned in the summary, or to introduce a new topic. It can learn to follow a script or discourse relations that are expected for that domain's summaries. It can even learn to predict the end of the summary, avoiding the need to explicitly define a length cutoff." ] ]
07580f78b04554eea9bb6d3a1fc7ca0d37d5c612
Can the approach be generalized to other technical domains as well?
[ "There is no reason to think that this approach wouldn't also be successful for other technical domains. Technical terms are replaced with tokens, therefore so as long as there is a corresponding process for identifying and replacing technical terms in the new domain this approach could be viable." ]
[ [ "In this paper, we propose a method that enables NMT to translate patent sentences with a large vocabulary of technical terms. We use an NMT model similar to that used by Sutskever et al. Sutskever14, which uses a deep long short-term memories (LSTM) BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation. We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Similar to Sutskever et al. Sutskever14, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation (SMT). We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences that have been rescored with the technical term tokens. Our experiments on Japanese-Chinese patent sentences show that our proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over a traditional SMT system and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique.", "One important difference between our NMT model and the one used by Sutskever et al. Sutskever14 is that we added an attention mechanism. Recently, Bahdanau et al. Bahdanau15 proposed an attention mechanism, a form of random access memory, to help NMT cope with long input sequences. Luong et al. Luong15b proposed an attention mechanism for different scoring functions in order to compare the source and target hidden states as well as different strategies for placing the attention. In this paper, we utilize the attention mechanism proposed by Bahdanau et al. Bahdanau15, wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states.", "According to the approach proposed by Dong et al. Dong15b, we identify Japanese-Chinese technical term pairs using an SMT phrase translation table. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , the Chinese translation candidates collected from the phrase translation table are matched against the Chinese sentence $S_C$ of the parallel sentence pair. Of those found in $S_C$ , $t_C$ with the largest translation probability $P(t_C\\mid t_J)$ is selected, and the bilingual technical term pair $\\langle t_J,t_C\\rangle $ is identified.", "For the Japanese technical terms whose Chinese translations are not included in the results of Step UID11 , we then use an approach based on SMT word alignment. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , a sequence of Chinese words is selected using SMT word alignment, and we use the Chinese translation $t_C$ for the Japanese technical term $t_J$ .", "Figure 3 illustrates the procedure for producing Chinese translations via decoding the Japanese sentence using the method proposed in this paper. In the step 1 of Figure 3 , when given an input Japanese sentence, we first automatically extract the technical terms and replace them with the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ). 
Consequently, we have an input sentence in which the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) represent the positions of the technical terms and a list of extracted Japanese technical terms. Next, as shown in the step 2-N of Figure 3 , the source Japanese sentence with technical term tokens is translated using the NMT model trained according to the procedure described in Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" , whereas the extracted Japanese technical terms are translated using an SMT phrase translation table in the step 2-S of Figure 3 . Finally, in the step 3, we replace the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) of the sentence translation with SMT the technical term translations." ] ]
dc28ac845602904c2522f5349374153f378c42d3
How many tweets were manually labelled?
[ "44,000 tweets" ]
[ [ "Tweets related to Forex, specifically to EUR and USD, were acquired through the Twitter search API with the following query: “EURUSD”, “USDEUR”, “EUR”, or “USD”. In the period of three years (January 2014 to December 2016) almost 15 million tweets were collected. A subset of them (44,000 tweets) was manually labeled by knowledgeable students of finance. The label captures the leaning or stance of the Twitter user with respect to the anticipated move of one currency w.r.t. the other. The stance is represented by three values: buy (EUR vs. USD), hold, or sell. The tweets were collected, labeled and provided to us by the Sowa Labs company (http://www.sowalabs.com)." ] ]
ac148fb921cce9c8e7b559bba36e54b63ef86350
What dataset they use for evaluation?
[ "The same 2K set from Gigaword used in BIBREF7" ]
[ [ "We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11." ] ]
094ce2f912aa3ced9eb97b171745d38f58f946dd
What is the source of the tables?
[ "The Online Retail Data Set consists of a clean list of 25873 invoices, totaling 541909 rows and 8 columns." ]
[ [ "The Online Retail Data Set consists of a clean list of 25873 invoices, totaling 541909 rows and 8 columns. InvoiceNo, CustomerID and StockCode are mostly 5 or 6-digit integers with occasional letters. Quantity is mostly 1 to 3-digit integers, a part of them being negative, and UnitPrice is composed of 1 to 6 digits floating values. InvoiceDate are dates all in the same format, Country contains strings representing 38 countries and Description is 4224 strings representing names of products. We reconstruct text mails from this data, by separating each token with a blank space and stacking the lines for a given invoice, grouped by InvoiceNo. We will use the column label as ground truth for the tokens in the dataset. For simplicity reasons we add underscores between words in Country and Description to ease the tokenization. Another slight modification has to be done: $25\\%$ of the CustomerId values are missing, and we replace them by '00000'. A sample can be found in Fig. 4 ." ] ]
b5bfa6effdeae8ee864d7d11bc5f3e1766171c2d
Which regions of the United States do they consider?
[ "all regions except those that are colored black" ]
[ [] ]
bf00808353eec22b4801c922cce7b1ec0ff3b777
Why did they only consider six years of published books?
[ "Unanswerable" ]
[ [] ]
ec62c4cdbeaafc875c695f2d4415bce285015763
What state-of-the-art general-purpose pretrained models are made available under the unified API?
[ "BERT, RoBERTa, DistilBERT, GPT, GPT2, Transformer-XL, XLNet, XLM" ]
[ [ "Here is a list of architectures for which reference implementations and pretrained weights are currently provided in Transformers. These models fall into two main categories: generative models (GPT, GPT-2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM).", "BERT (BIBREF13) is a bi-directional Transformer-based encoder pretrained with a linear combination of masked language modeling and next sentence prediction objectives.", "RoBERTa (BIBREF5) is a replication study of BERT which showed that carefully tuning hyper-parameters and training data size lead to significantly improved results on language understanding.", "DistilBERT (BIBREF32) is a smaller, faster, cheaper and lighter version BERT pretrained with knowledge distillation.", "GPT (BIBREF34) and GPT2 (BIBREF9) are two large auto-regressive language models pretrained with language modeling. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension.", "Transformer-XL (BIBREF35) introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence via segment-level recurrence and relative positional encoding schemes.", "XLNet (BIBREF4) builds upon Transformer-XL and proposes an auto-regressive pretraining scheme combining BERT's bi-directional context flow with auto-regressive language modeling by maximizing the expected likelihood over permutations of the word sequence.", "XLM (BIBREF8) shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding.", "We systematically release the model with the corresponding pretraining heads (language modeling, next sentence prediction for BERT) for adaptation using the pretraining objectives. Some models fine-tuned on downstream tasks such as SQuAD1.1 are also available. Overall, more than 30 pretrained weights are provided through the library including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multi-lingual models (with two of them being trained on more than 100 languages) ." ] ]
405964517f372629cda4326d8efadde0206b7751
How is performance measured?
[ "they use ROC curves and cross-validation" ]
[ [ "To assess the predictive capability of this and other models, we require some method by which we can compare the models. For that purpose, we use receiver operating characteristic (ROC) curves as a visual representation of predictive effectiveness. ROC curves compare the true positive rate (TPR) and false positive rate (FPR) of a model's predictions at different threshold levels. The area under the curve (AUC) (between 0 and 1) is a numerical measure, where the higher the AUC is, the better the model performs.", "We cross-validate our model by first randomly splitting the corpus into a training set (95% of the corpus) and test set (5% of the corpus). We then fit the model to the training set, and use it to predict the response of the documents in the test set. We repeat this process 100 times. The threshold-averaged ROC curve BIBREF13 is found from these predictions, and shown in Figure 3 . Table 1 shows the AUC for each model considered." ] ]
ae95a7d286cb7a0d5bc1a8283ecbf803e9305951
What models are included in the toolkit?
[ " recurrent neural network (RNN)-based sequence-to-sequence (Seq2Seq) models for NATS" ]
[ [ "Modules: Modules are the basic building blocks of different models. In LeafNATS, we provide ready-to-use modules for constructing recurrent neural network (RNN)-based sequence-to-sequence (Seq2Seq) models for NATS, e.g., pointer-generator network BIBREF1 . These modules include embedder, RNN encoder, attention BIBREF24 , temporal attention BIBREF6 , attention on decoder BIBREF2 and others. We also use these basic modules to assemble a pointer-generator decoder module and the corresponding beam search algorithms. The embedder can also be used to realize the embedding-weights sharing mechanism BIBREF2 ." ] ]
0be0c8106df5fde4b544af766ec3d4a3d7a6c8a2
Is there any human evaluation involved in evaluating this framework?
[ "Yes" ]
[ [ "Due to the lack of available models for the task, we compare our framework with a previous model developed for image-to-image translation as baseline, which colorizes images without text descriptions. We carried out two human evaluations using Mechanical Turk to compare the performance of our model and the baseline. For each experiment, we randomly sampled 1,000 images from the test set and then turned these images into black and white. For each image, we generated a pair of two images using our model and the baseline, respectively. Our model took into account the caption in generation while the baseline did not. Then we randomly permuted the 2,000 generated images. In the first experiment, we presented to human annotators the 2,000 images, together with their original captions, and asked humans to rate the consistency between the generated images and the captions in a scale of 0 and 1, with 0 indicating no consistency and 1 indicating consistency. In the second experiment, we presented to human annotators the same 2,000 images without captions, but asked human annotators to rate the quality of each image without providing its original caption. The quality was rated in a scale of 0 and 1, with 0 indicating low quality and 1 indicating high quality." ] ]
959490ba72bd02f742db1e7b19525d4b6c419772
How big is the multilingual dataset?
[ "Unanswerable" ]
[ [] ]
504a069ccda21580ccbf18c34f5eefc0088fa105
How big is the dataset used for fine-tuning BERT?
[ "hundreds of thousands of legal agreements" ]
[ [ "To fine-tune BERT, we used a proprietary corpus that consists of hundreds of thousands of legal agreements. We extracted text from the agreements, tokenized it into sentences, and removed sentences without alphanumeric text. We selected the BERT-Base uncased pre-trained model for fine-tuning. To avoid including repetitive content found at the beginning of each agreement we selected the 31st to 50th sentence of each agreement. We ran unsupervised fine-tuning of BERT using sequence lengths of 128, 256 and 512. The loss function over epochs is shown in Figure FIGREF3." ] ]
d76ecdc0743893a895bc9dc3772af47d325e6d07
How big are the datasets for the 2019 Amazon Alexa competition?
[ "Unanswerable" ]
[ [] ]
2a6469f8f6bf16577b590732d30266fd2486a72e
What is novel in the authors' approach?
[ "They use self-play learning , optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data." ]
[ [ "Our novelties include:", "Using self-play learning for the neural response ranker (described in detail below).", "Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup.", "Training a separate dialog model for each user, personalizing our socialbot and making it more consistent.", "Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc.", "Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses.", "Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3.", "Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8.", "Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged." ] ]
a02696d4ab728ddd591f84a352df9375faf7d1b4
How large is the Dialog State Tracking Dataset?
[ "1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs" ]
[ [] ]
78577fd1c09c0766f6e7d625196adcc72ddc8438
What dataset is used for train/test of this method?
[ "Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset." ]
[ [ "Experimental Protocol ::: Datasets ::: Training Dataset", "(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances).", "(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.", "Experimental Protocol ::: Datasets ::: Evaluation Dataset", "The systems were evaluated on two datasets:", "(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or\" questions (9%), “wh\" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.", "(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case." ] ]
1f63ccc379f01ecdccaa02ed0912970610c84b72
How much is the gap between using the proposed objective and using only the cross-entropy objective?
[ "The mixed objective improves EM by 2.5% and F1 by 2.2%" ]
[ [ "The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance." ] ]
736c74d2f61ac8d3ac31c45c6510a36c767a5d6d
What is multi-instance learning?
[ "Unanswerable" ]
[ [] ]
b2254f9dd0e416ee37b577cef75ffa36cbcb8293
How many domains of ontologies do they gather data from?
[ "5 domains: software, stuff, african wildlife, healthcare, datatypes" ]
[ [ "The Software Ontology (SWO) BIBREF5 is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care BIBREF8 and OntoDT BIBREF9 were included because they were available. CQs for the Stuff BIBREF6 and African Wildlife (AWO) BIBREF7 ontologies were added to the set, because the ontologies were developed by one of the authors (therewith facilitating in-depth domain analysis, if needed), they cover other topics, and are of a different `type' (a tutorial ontology (AWO) and a core ontology (Stuff)), thus contributing to maximising diversity in source selection." ] ]
cb1126992a39555e154bedec388465b249a02ded
How is the semi-structured knowledge base created?
[ "using a mixture of manual and semi-automatic techniques" ]
[ [ "Although techniques for constructing this knowledge base are outside the scope of this paper, we briefly mention them. Tables were constructed using a mixture of manual and semi-automatic techniques. First, the table schemas were manually defined based on the syllabus, study guides, and training questions. Tables were then populated both manually and semi-automatically using IKE BIBREF29 , a table-building tool that performs interactive, bootstrapped relation extraction over a corpus of science text. In addition, to augment these tables with the broad knowledge present in study guides that doesn't always fit the manually defined table schemas, we ran an Open IE BIBREF30 pattern-based subject-verb-object (SVO) extractor from BIBREF31 clark2014:akbc over several science texts to populate three-column Open IE tables. Methods for further automating table construction are under development." ] ]
d5256d684b5f1b1ec648d996c358e66fe51f4904
What is the practical application of this paper?
[ "Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools." ]
[ [ "Morphology deals with the internal structure of words BIBREF0 , BIBREF1 . Languages of the world have different word production processes. Morphological richness vary from language to language, depending on their linguistic typology. In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level, may vary significantly from language to language.", "Additionally, most of the previous works do not analyze how the complexity changes when different types of morphological normalization procedures are applied to a language, e.g., lemmatization, stemming, morphological segmentation. This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language. In this work, we analyze how the type-token relationship changes using different types of morphological normalization techniques." ] ]
2a1069ae3629ae8ecc19d2305f23445c0231dc39
Do they use a neural model for their task?
[ "No" ]
[ [ "Figure 1 presents architecture of the WSD system. As one may observe, no human labor is used to learn interpretable sense representations and the corresponding disambiguation models. Instead, these are induced from the input text corpus using the JoBimText approach BIBREF8 implemented using the Apache Spark framework, enabling seamless processing of large text collections. Induction of a WSD model consists of several steps. First, a graph of semantically related words, i.e. a distributional thesaurus, is extracted. Second, word senses are induced by clustering of an ego-network of related words BIBREF9 . Each discovered word sense is represented as a cluster of words. Next, the induced sense inventory is used as a pivot to generate sense representations by aggregation of the context clues of cluster words. To improve interpretability of the sense clusters they are labeled with hypernyms, which are in turn extracted from the input corpus using Hearst:92 patterns. Finally, the obtained WSD model is used to retrieve a list of sentences that characterize each sense. Sentences that mention a given word are disambiguated and then ranked by prediction confidence. Top sentences are used as sense usage examples. For more details about the model induction process refer to BIBREF10 . Currently, the following WSD models induced from a text corpus are available: Word senses based on cluster word features. This model uses the cluster words from the induced word sense inventory as sparse features that represent the sense.", "Word senses based on context word features. This representation is based on a sum of word vectors of all cluster words in the induced sense inventory weighted by distributional similarity scores.", "Super senses based on cluster word features. To build this model, induced word senses are first globally clustered using the Chinese Whispers graph clustering algorithm BIBREF9 . The edges in this sense graph are established by disambiguation of the related words BIBREF11 , BIBREF12 . The resulting clusters represent semantic classes grouping words sharing a common hypernym, e.g. “animal”. This set of semantic classes is used as an automatically learned inventory of super senses: There is only one global sense inventory shared among all words in contrast to the two previous traditional “per word” models. Each semantic class is labeled with hypernyms. This model uses words belonging to the semantic class as features.", "Super senses based on context word features. This model relies on the same semantic classes as the previous one but, instead, sense representations are obtained by averaging vectors of words sharing the same class." ] ]
0b411f942c6e2e34e3d81cc855332f815b6bc123
What's the method used here?
[ "Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22) and an abstractor identical to the one proposed in BIBREF8." ]
[ [ "Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\\lbrace s_1,s_2,\\cdots ,s_n\\rbrace $. We denote $i$-th sentence as $s_i=\\lbrace w_{i1},w_{i2},\\cdots ,w_{im}\\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\\hat{D}=\\lbrace \\hat{s}_1,\\hat{s}_2,\\cdots ,\\hat{s}_k|\\hat{s}_i\\in D\\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\\lbrace f(\\hat{s}_1),f(\\hat{s}_2),\\cdots ,f(\\hat{s}_k)\\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\\lbrace a_1,a_2,\\cdots ,a_l\\rbrace $.", "The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\\lbrace h_1,h_2,\\cdots ,h_n\\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\\hat{D}$ from $D$.", "We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.", "The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8." ] ]
01123a39574bdc4684aafa59c52d956b532d2e53
By how much does their method outperform state-of-the-art OOD detection?
[ "AE-HCN outperforms by 17%, AE-HCN-CNN outperforms by 20% on average" ]
[ [ "The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Most prior approaches do not consider dialog context and make predictions for each utterance independently. We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors. To consider dialog context, we need to connect the OOD detection task with the overall dialog task. Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8 ). Furthermore, we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way (exemplified in Table TABREF2 ) to foster further research.", "The result is shown in Table TABREF23 . Since there are multiple actions that are appropriate for a given dialog context, we use per-utterance Precision@K as performance metric. We also report f1-score for OOD detection to measure the balance between precision and recall. The performances of HCN on Test-OOD are about 15 points down on average from those on Test, showing the detrimental impact of OOD utterances to such models only trained on in-domain training data. AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin about 17(20) points on average while keeping the minimum performance trade-off compared to Test. Interestingly, AE-HCN-CNN has even better performance than HCN on Test, indicating that, with the CNN encoder, counterfeit OOD augmentation acts as an effective regularization. In contrast, AE-HCN-Indep failed to robustly detect OOD utterances, resulting in much lower numbers for both metrics on Test-OOD as well as hurting the performance on Test. This result indicates two crucial points: 1) the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data; 2) the limitation of the models which do not consider context. For the first point, Figure FIGREF24 plots histograms of reconstruction scores for IND and OOD utterances of bAbI6 Test-OOD. If OOD utterances had been known a priori, the threshold should have been set to a much higher value than the maximum reconstruction score of IND training data (6.16 in this case)." ] ]
954c4756e293fd5c26dc50dc74f505cc94b3f8cc
What are dilated convolutions?
[ "Similar to standard convolutional networks but instead they skip some input values effectively operating on a broader scale." ]
[ [ "In this work we focus on end-to-end stateless temporal modeling which can take advantage of a large context while limiting computation and avoiding saturation issues. By end-to-end model, we mean a straight-forward model with a binary target that does not require a precise phoneme alignment beforehand. We explore an architecture based on a stack of dilated convolution layers, effectively operating on a broader scale than with standard convolutions while limiting model size. We further improve our solution with gated activations and residual skip-connections, inspired by the WaveNet style architecture explored previously for text-to-speech applications BIBREF10 and voice activity detection BIBREF9 , but never applied to KWS to our knowledge. In BIBREF11 , the authors explore Deep Residual Networks (ResNets) for KWS. ResNets differ from WaveNet models in that they do not leverage skip-connections and gating, and apply convolution kernels in the frequency domain, drastically increasing the computational cost.", "Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields. Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own. The network therefore operates on a larger scale, without the downside of increasing the number of parameters. The receptive field $r$ of a network made of stacked convolutions indeed reads: $r = \\sum _i d_i (s_i - 1),$" ] ]
ee279ace5bc69d15e640da967bd4214fe264aa1a
What were the evaluation metrics studied in this work?
[ "mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10" ]
[ [ "Performance figures are computed using tail prediction on the test sets: For each test triple $(h,r,t)$ with open-world head $h \\notin E$ , we rank all known entities $t^{\\prime } \\in E$ by their score $\\phi (h,r,t^{\\prime })$ . We then evaluate the ranks of the target entities $t$ with the commonly used mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10." ] ]
beda007307c76b8ce7ffcd159a8280d2e8c7c356
Do they analyze ELMo?
[ "No" ]
[ [] ]
dac2591f19f5bbac3d4a7fa038ff7aa09f6f0d96
What are the three methods presented in the paper?
[ "Optimized TF-IDF, iterated TF-IDF, BERT re-ranking." ]
[ [] ]
f62c78be58983ef1d77049738785ec7ab9f2a3ee
What datasets did the authors use?
[ "Kaggle\nSubversive Kaggle\nWikipedia\nSubversive Wikipedia\nReddit\nSubversive Reddit " ]
[ [ "We trained and tested our neural network with and without sentiment information, with and without subversion, and with each corpus three times to mitigate the randomness in training. In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 . It can be seen that sentiment information helps improve toxicity detection in all cases. The improvement is smaller when the text is clean. However, the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone, and the inclusion of sentiment information gives an important improvement in that case. Comparing the different corpora, it can be seen that the improvement is smallest in the Reddit dataset experiment, which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 ." ] ]
639c145f0bcb1dd12d08108bc7a02f9ec181552e
What are three possible phases for language formation?
[ "Phase I: $\\langle cc \\rangle $ increases smoothly for $\\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\\wp < 0.4$, Phase II: a drastic transition appears at the critical domain $\\wp ^* \\in (0.4,0.6)$, in which $\\langle cc \\rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\\wp $ is also found (Fig. FIGREF16) for $\\wp ^*$, Phase III: single-word languages dominate for $\\wp > 0.6$. The maximum value of $\\langle cc \\rangle $ indicate that word neighborhoods are completely correlated" ]
[ [ "Three clear domains can be noticed in the behavior of $\\langle cc \\rangle $ versus $\\wp $, at $t_f$, as shown in Fig. FIGREF15 (blue squares). Phase I: $\\langle cc \\rangle $ increases smoothly for $\\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\\wp < 0.4$; Phase II: a drastic transition appears at the critical domain $\\wp ^* \\in (0.4,0.6)$, in which $\\langle cc \\rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\\wp $ is also found (Fig. FIGREF16) for $\\wp ^*$; Phase III: single-word languages dominate for $\\wp > 0.6$. The maximum value of $\\langle cc \\rangle $ indicate that word neighborhoods are completely correlated." ] ]
ab3737fbf17b7a0e790e1315fffe46f615ebde64
How many parameters does the model have?
[ "Unanswerable" ]
[ [] ]
0b8d64d6cdcfc2ba66efa41a52e09241729a697c
Do the experiments explore how various architectures and layers contribute towards certain decisions?
[ "No" ]
[ [ "As a document classifier we employ a word-based CNN similar to Kim consisting of the following sequence of layers: $ \\texttt {Conv} \\xrightarrow{} \\texttt {ReLU} \\xrightarrow{} \\texttt {1-Max-Pool} \\xrightarrow{} \\texttt {FC} \\\\ $", "Future work would include applying LRP to other neural network architectures (e.g. character-based or recurrent models) on further NLP tasks, as well as exploring how relevance information could be taken into account to improve the classifier's training procedure or prediction performance." ] ]
891c4af5bb77d6b8635ec4109572de3401b60631
What social media platform does the data come from?
[ "Unanswerable" ]
[ [] ]
39a450ac15688199575798e72a2cc016ef4316b5
How much performance improvement do they achieve on SQuAD?
[ "Compared to baselines SAN (Table 1) shows improvement of 1.096% on EM and 0.689% F1. Compared to other published SQuAD results (Table 2) SAN is ranked second. " ]
[ [ "Finally, we compare our results with other top models in Table 2 . Note that all the results in Table 2 are taken from the published papers. We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model BIBREF14 used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). We expect significant improvements if we add this to SAN in future work.", "The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module:" ] ]
de015276dcde4e7d1d648c6e31100ec80f61960f
Do the authors perform experiments using their proposed method?
[ "Yes" ]
[ [ "We illustrate the concept by discussing some instantiations that we have recently experimented with.", "Unlike the Visual Dialogue setting discussed above, this setting ensures informational symmetry between the participants (both have access to the same type of information; but not the same information, as they can't “see” each other). More importantly, however, the constraint that the game only ends if they both agree ensures a “committment symmetry”, where the success of the game must be ensured by both participants. The design also provides for a clear “relevance place” at which an opportunity arises for semantic negotiation, namely, before the final decision is made. An example of this is shown in the example below. (The number in the parentheses indicate the time, relative to the beginning of the interaction, when the utterance was made.)", "The MatchIt Game (Ilinykh et al., forthcoming) is a yet further simplified visual game. Here, the goal simply is to decide whether you and your partner are both looking at the same image (of the same genre as in MeetUp). In that sense, it is a reduction of the MeetUP game to the final stage, taking out the navigation aspect. As example SECREF12 shows, this can similarly lead to meta-semantic interaction, where classifications are revised. As SECREF12 shows, even in cases where a decision can be reached quickly, there can be an explicit mutual confirmation step, before the (silent) decision signal is sent.", "A third setting that we have explored BIBREF19 brings conceptual negotiation more clearly into the foreground. In that game, the players are presented with images of birds of particular species and are tasked with coming up with a description of common properties. Again, the final answer has to be approved by both participants. As SECREF13 shows, this can lead to an explicit negotiation of conceptual content." ] ]
56836afc57cae60210fa1e5294c88e40bb10cc0e
What NLP tasks do the authors evaluate feed-forward networks on?
[ "language identification, part-of-speech tagging, word segmentation, and preordering for statistical machine translation" ]
[ [ "We experiment with small feed-forward networks for four diverse NLP tasks: language identification, part-of-speech tagging, word segmentation, and preordering for statistical machine translation." ] ]
6147846520a3dc05b230241f2ad6d411d614e24c
What are the three challenging tasks on which the authors evaluated their sequentially aligned representations?
[ "paper acceptance prediction, Named Entity Recognition (NER), author stance prediction" ]
[ [ "We consider three tasks representing a broad selection of natural language understanding scenarios: paper acceptance prediction based on the PeerRead data set BIBREF2, Named Entity Recognition (NER) based on the Broad Twitter Corpus BIBREF3, and author stance prediction based on the RumEval-19 data set BIBREF6. These tasks were chosen so as to represent i) different textual domains, across ii) differing time scales, and iii) operating at varying levels of linguistic granularity. As we are dealing with dynamical learning, the vast majority of NLP data sets can unfortunately not be used since they do not include time stamps." ] ]
99cf494714c67723692ad1279132212db29295f3
What is the difference from the findings of Buck et al.? It looks like the same conclusion was mentioned in Buck et al.
[ "AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations" ]
[ [ "Here we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned. We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations. These include term re-weighting (tf-idf), expansion and morphological simplification/stemming. We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets, thus incentivizing relevance more than deep language understanding." ] ]
85e45b37408bb353c6068ba62c18e516d4f67fe9
What is the baseline?
[ "The baseline is a multi-task architecture inspired by another paper." ]
[ [ "We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer." ] ]
f4e1d2276d3fc781b686d2bb44eead73e06fbf3f
What is the unsupervised task in the final layer?
[ "Language Modeling" ]
[ [ "In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep." ] ]
bf2ebc9bbd4cbdf8922c051f406effc97fd16e54
How many supervised tasks are used?
[ "two" ]
[ [] ]
c13fe4064df0cfebd0538f29cb13e917fc5c3be0
What is the network architecture?
[ "The network architecture has a multi-task Bi-Directional Recurrent Neural Network, with an unsupervised sequence labeling task and a low-dimensional embedding layer between tasks. There is a hidden layer after each successive task with skip connections to the senior supervised layers." ]
[ [ "In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep.", "Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks." ] ]
6adde6bc3e27a32eac5daa57d30ab373f77690be
Is the proposed model more sensitive than previous context-aware models too?
[ "Unanswerable" ]
[ [] ]
90ad8d7ee27192b89ffcfa4a68302f370e6333a8
In what ways is the larger context ignored by the models that do consider larger context?
[ "Unanswerable" ]
[ [] ]
ba1da61db264599963e340010b777a1723ffeb4c
What does recurrent deep stacking network do?
[ "Stacks and joins outputs of previous frames with inputs of the current frame" ]
[ [ "As indicated in its name, Recurrent Deep Stacking Network stacks and concatenates the outputs of previous frames into the input features of the current frame. If we view acoustic models in ASR systems as functions projecting input features to the probability density outputs, we can see the differences between conventional systems and RDSN clearer. Denote the input features at frame $t$ as $x_t$ , and the output as frame $t$ as $y_t$ . We can see that RDSN tries to model" ] ]
ff814793387c8f3b61f09b88c73c00360a22a60e
Does the latent dialogue state help their model?
[ "Yes" ]
[ [ "Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information." ] ]
059acc270062921ad27ee40a77fd50de6f02840a
Do the authors test on datasets other than bAbI?
[ "No" ]
[ [] ]
6a9eb407be6a459dc976ffeae17bdd8f71c8791c
What is the reward model for the reinforcement learning approach?
[ "reward 1 for successfully completing the task, with a discount by the number of turns, and reward 0 when fail" ]
[ [ "We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning." ] ]
cacb83e15e160d700db93c3f67c79a11281d20c5
Does this paper propose a new task that others can try to improve performance on?
[ "No, there has been previous work on recognizing social norm violation." ]
[ [ "Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community by leveraging cross-entropy value, or the deviation of word sequences predicted by the language model and their usage by the user. However, their models were trained on written-language instead of natural face-face dialog corpus. Another kind of social norm violation was examined by BIBREF10 , who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm in an SVM classifier. However, no contextual information was considered in this work. BIBREF11 understood the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with sparsity of the social norm violation phenomena by under-sampling the negative-class instances to make a balanced dataset." ] ]
33957fde72f9082a5c11844e7c47c58f8029c4ae
What knowledge base do they use?
[ "Freebase" ]
[ [ "Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations." ] ]
1c4cd22d6eaefffd47b93c2124f6779a06d2d9e1
How big is their dataset?
[ "3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing" ]
[ [ "Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations.", "We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries." ] ]
2122bd05c03dde098aa17e36773e1ac7b6011969
What task do they evaluate on?
[ "Fill-in-the-blank natural language questions" ]
[ [ "We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%." ] ]
1d6c42e3f545d55daa86bea6fabf0b1c52a93bbb
Do some pretraining objectives perform better than others for sentence level understanding tasks?
[ "Yes" ]
[ [ "Looking to other target tasks, the grammar-related CoLA task benefits dramatically from ELMo pretraining: The best result without language model pretraining is less than half the result achieved with such pretraining. In contrast, the meaning-oriented textual similarity benchmark STS sees good results with several kinds of pretraining, but does not benefit substantially from the use of ELMo." ] ]
480e10e5a1b9c0ae9f7763b7611eeae9e925096b
Did the authors try stacking multiple convolutional layers?
[ "No" ]
[ [ "Recently, convolutional neural networks (CNNs), originally designed for computer vision BIBREF27 , have significantly received research attention in natural language processing BIBREF28 , BIBREF29 . CNN learns non-linear features to capture complex relationships with a remarkably less number of parameters compared to fully connected neural networks. Inspired from the success in computer vision, BIBREF30 proposed ConvE—the first model applying CNN for the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer. Different filters of the same $3\\times 3$ shape are operated over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation. Then this vector is computed with $v_t$ via a dot product to return a score for (h, r, t). See a formal definition of the ConvE score function in Table 1 . It is worth noting that ConvE focuses on the local relationships among different dimensional entries in each of $v_h$ or $v_r$ , i.e., ConvE does not observe the global relationships among same dimensional entries of an embedding triple ( $v_h$ , $v_r$ , $v_t$ ), so that ConvE ignores the transitional characteristic in transition-based models, which is one of the most useful intuitions for the task." ] ]
056fc821d1ec1e8ca5dc958d14ea389857b1a299
How many feature maps are generated for a given triple?
[ "3 feature maps for a given tuple" ]
[ [ "Our ConvKB uses different filters $\\in \\mathbb {R}^{1\\times 3}$ to generate different feature maps. Let ${\\Omega }$ and $\\tau $ denote the set of filters and the number of filters, respectively, i.e. $\\tau = |{\\Omega }|$ , resulting in $\\tau $ feature maps. These $\\tau $ feature maps are concatenated into a single vector $\\in \\mathbb {R}^{\\tau k\\times 1}$ which is then computed with a weight vector ${w} \\in \\mathbb {R}^{\\tau k\\times 1}$ via a dot product to give a score for the triple $(h, r, t)$ . Figure 1 illustrates the computation process in ConvKB." ] ]
974868e4e22f14766bcc76dc4927a7f2795dcd5e
How does the number of parameters compare to other knowledge base completion models?
[ "Unanswerable" ]
[ [] ]