Columns: id (string), x (string), y (string)
05b53f9e0a347c4f47d0fd066538c7_2
Syntactic trees can connect the anchor words to functional words (e.g., negation, modal auxiliaries) that are far away but convey important information affecting the factuality of the event mentions. For instance, in the dependency tree of the sentence "I will, after seeing the treatment of others, go back when I need medical care.", the anchor word "go" is directly linked to the modal auxiliary "will", which helps to successfully predict the non-factuality of the event mention. Regarding the semantic information, the meaning of some important context words in the sentences can contribute significantly to the factuality of an event mention. For example, in the sentence "Knight lied when he said I went to the ranch.", the meaning represented by the cue word "lied" is crucial to classify the event mention associated with the anchor word "went" as non-factual. The meaning of such cue words and their interactions with the anchor words can be captured via their distributed representations (e.g., with word embeddings and long short-term memory networks (LSTMs))<cite> (Rudinger et al., 2018)</cite>.
background
05b53f9e0a347c4f47d0fd066538c7_3
The current state-of-the-art approach for EFP involves deep learning models <cite>(Rudinger et al., 2018</cite>) that examine both syntactic and semantic information in the modeling process. However, in these models, the syntactic and semantic information is employed separately, in different deep learning architectures, to generate syntactic and semantic representations, which are only concatenated in the final stage to perform the factuality prediction. A major problem with this approach arises for event mentions where neither the syntactic nor the semantic information can identify the important structures for EFP individually (i.e., by itself). In such cases, both the syntactic and semantic representations from the separate deep learning models would be noisy and/or insufficient, so their simple combination is of poor quality for EFP.
motivation background
05b53f9e0a347c4f47d0fd066538c7_4
Recently, deep learning has been applied to solve EFP: (Qian et al., 2018) employ Generative Adversarial Networks (GANs) for EFP, while<cite> (Rudinger et al., 2018)</cite> utilize LSTMs for both sequential and dependency representations of the input sentences.
background
05b53f9e0a347c4f47d0fd066538c7_5
In the next step, we further abstract (e_1, e_2, ..., e_n) for EFP by feeding them into two layers of bidirectional LSTMs (as in<cite> (Rudinger et al., 2018)</cite>).
uses
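A minimal sketch of this encoding step, assuming PyTorch; the embedding and hidden sizes are illustrative, not taken from the cited work:

```python
# Hypothetical sketch: abstract word representations (e_1, ..., e_n) with
# two layers of bidirectional LSTMs, as described above.
import torch
import torch.nn as nn

embed_dim, hidden_dim = 300, 200          # assumed sizes
bilstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                 bidirectional=True, batch_first=True)

e = torch.randn(1, 12, embed_dim)         # (batch, n words, embed_dim)
h, _ = bilstm(e)                          # h: (1, 12, 2 * hidden_dim)
```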
05b53f9e0a347c4f47d0fd066538c7_6
Given the hidden representations (h_1, h_2, ..., h_n), it is possible to use the hidden vector corresponding to the anchor word, h_k, as the features to perform factuality prediction (as done in<cite> (Rudinger et al., 2018)</cite>). However, despite the rich context information over the whole sentence, the features in h_k are not directly designed to focus on the important context words for factuality prediction. In order to explicitly encode the information of the cue words into the representations for the anchor word, we propose to learn an importance matrix A = (a_ij), i,j = 1..n, in which the value in the cell a_ij quantifies the contribution of the context word x_i to the hidden representation at x_j if the representation vector at x_j is used to form features for EFP.
differences
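A hypothetical sketch of the importance matrix A = (a_ij) described above; the bilinear scoring function and the softmax normalization over context words are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

n, d = 12, 400                      # sentence length, state size (assumed)
h = torch.randn(n, d)               # hidden representations (h_1, ..., h_n)
W = nn.Parameter(torch.randn(d, d)) # learned bilinear scoring matrix

scores = h @ W @ h.t()              # scores[i, j]: relevance of x_i for x_j
A = torch.softmax(scores, dim=0)    # a_ij: normalized over context words i

k = 4                               # position of the anchor word
v_k = A[:, k] @ h                   # importance-weighted features for EFP
```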
05b53f9e0a347c4f47d0fd066538c7_7
Finally, similar to<cite> (Rudinger et al., 2018)</cite> , the feature vector V is fed into a regression model with two layers of feed-forward networks to produce the factuality score.
uses
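A minimal sketch of that regression head, assuming PyTorch; the layer sizes and the ReLU nonlinearity are assumptions:

```python
import torch
import torch.nn as nn

# Feature vector V -> two feed-forward layers -> scalar factuality score.
regressor = nn.Sequential(
    nn.Linear(400, 100),   # input/hidden sizes are assumed
    nn.ReLU(),
    nn.Linear(100, 1),
)
score = regressor(torch.randn(1, 400))
```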
05b53f9e0a347c4f47d0fd066538c7_8
Following<cite> (Rudinger et al., 2018)</cite> , we train the proposed model by optimizing the Huber loss with δ = 1 and the Adam optimizer with learning rate = 1.0.
uses
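A sketch of one training step under the stated setup (Huber loss with δ = 1, Adam with learning rate 1.0); the model here is a stand-in for the full architecture:

```python
import torch
import torch.nn as nn

model = nn.Linear(400, 1)                     # stand-in for the full model
optimizer = torch.optim.Adam(model.parameters(), lr=1.0)
huber = nn.HuberLoss(delta=1.0)               # Huber loss with delta = 1

pred = model(torch.randn(8, 400))
gold = torch.randn(8, 1)

optimizer.zero_grad()
loss = huber(pred, gold)
loss.backward()
optimizer.step()
```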
05b53f9e0a347c4f47d0fd066538c7_9
For the fourth dataset (i.e., UDS-IH2), we follow the instructions in<cite> (Rudinger et al., 2018)</cite> to scale the scores to the range of [-3, +3] .
uses
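A sketch of such a linear rescaling; the source range [0, 1] is an assumption for illustration (the actual instructions in the cited work may differ):

```python
def rescale(score: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Linearly map a score from [lo, hi] to [-3, +3]."""
    return -3.0 + 6.0 * (score - lo) / (hi - lo)

assert rescale(0.0) == -3.0 and rescale(1.0) == 3.0
```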
05b53f9e0a347c4f47d0fd066538c7_10
Following the previous work (Stanovsky et al., 2017;<cite> Rudinger et al., 2018)</cite>, we evaluate the proposed EFP model using four benchmark datasets: FactBank (Saurí and Pustejovsky, 2009), UW (Lee et al., 2015), MEANTIME (Minard et al., 2016) and UDS-IH2<cite> (Rudinger et al., 2018)</cite>.
uses
05b53f9e0a347c4f47d0fd066538c7_11
We compare the proposed model with the best reported systems in the literature with linguistic features (Lee et al., 2015; Stanovsky et al., 2017) and deep learning<cite> (Rudinger et al., 2018)</cite> .
background
05b53f9e0a347c4f47d0fd066538c7_12
Importantly, to achieve a fair comparison, we obtain the actual implementation of the current state-of-the-art EFP models from<cite> (Rudinger et al., 2018)</cite> , introduce the BERT embeddings as the inputs for those models and compare them with the proposed models (i.e., the rows with "+BERT").
extends
05d1ecc230c7907d9a14d3351070c3_0
The introduction of pre-trained language models, such as BERT <cite>[2]</cite> and Open-GPT [3] , among many others, has brought tremendous progress to the NLP research and industrial communities.
background
05d1ecc230c7907d9a14d3351070c3_1
Some early attempts at pre-trained models include CoVe [12], CVT [13, 14], ELMo [15], and ULMFiT [16]. However, the most successful ones are BERT <cite>[2]</cite> and Open-GPT [3].
background
05d1ecc230c7907d9a14d3351070c3_2
Given the success of pre-trained language models, especially BERT <cite>[2]</cite>, it is natural to ask how best to utilize them to achieve new state-of-the-art results.
motivation
05d1ecc230c7907d9a14d3351070c3_3
Stickland and Murray [22] introduced projected attention layers for multi-task learning using BERT, which improve on various state-of-the-art results compared to the original work of Devlin et al. <cite>[2]</cite>.
motivation background
05d1ecc230c7907d9a14d3351070c3_4
In this line of work, Liu et al. [21] investigated the linguistic knowledge and transferability of contextual representations by comparing BERT <cite>[2]</cite> with ELMo [15], and concluded that while the higher layers of LSTMs are more task-specific, this trend does not appear in transformer-based models.
background
05d1ecc230c7907d9a14d3351070c3_5
In the paradigm proposed in the original work by Devlin et al. <cite>[2]</cite>, the authors directly trained BERT along with a lightweight task-specific head.
background
05d1ecc230c7907d9a14d3351070c3_6
Since BERT-Adam <cite>[2]</cite> has excellent performance, in our experiments we use it as the optimizer, with β1 = 0.9, β2 = 0.999, and L2 weight decay of 0.01. We apply dropout on all layers and set the dropout probability to 0.1.
similarities uses
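A sketch of this optimizer setup; PyTorch's AdamW (decoupled weight decay) is used here as a close stand-in for BERT-Adam, which additionally omits bias correction, and the learning rate is illustrative:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.Dropout(p=0.1))  # stand-in
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=2e-5,              # assumed learning rate
                              betas=(0.9, 0.999),   # beta_1, beta_2
                              weight_decay=0.01)    # L2 weight decay
```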
05d1ecc230c7907d9a14d3351070c3_7
In the sequence labeling task, we explore the named entity recognition sub-task using the CoNLL03 dataset [6], which is publicly available and used in many studies to test the accuracy of their proposed methods [30, 31, 32, 33,<cite> 2]</cite>.
similarities uses
05eecafea7684dc8de13c29a76b767_0
The strong geographical bias, most obviously at the language level (e.g. Finland vs. Japan), and more subtly at the dialect level (e.g. in English used in north-west England vs. north-east USA vs. Texas, USA), is clearly reflected in language use in social media services such as Twitter, and has been used extensively either for geolocation of users<cite> (Eisenstein et al., 2010</cite>; Roller et al., 2012; Rout et al., 2013; Wing and Baldridge, 2014) or dialectology (Eisenstein, 2015).
background
05eecafea7684dc8de13c29a76b767_1
Three main text-based approaches are: (1) the use of gazetteers (Quercini et al., 2010); (2) unsupervised text clustering based on topic models or similar<cite> (Eisenstein et al., 2010</cite>; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised classification (Ding et al., 2000; Backstrom et al., 2008; Cheng et al., 2010; Hecht et al., 2011; Kinsella et al., 2011; Wing and Baldridge, 2011; Han et al., 2012; Rout et al., 2013), which, unlike gazetteers, can be applied to informal text and, compared to topic models, scales better.
background
05eecafea7684dc8de13c29a76b767_2
There have also been attempts to automatically identify such words from geotagged documents<cite> (Eisenstein et al., 2010</cite>; Ahmed et al., 2013; Eisenstein, 2015) .
background
05eecafea7684dc8de13c29a76b767_3
We use three existing Twitter user geolocation datasets: (1) GEOTEXT<cite> (Eisenstein et al., 2010)</cite> , (2) TWITTER-US (Roller et al., 2012) , and (3) TWITTER-WORLD (Han et al., 2012) .
uses
05eecafea7684dc8de13c29a76b767_4
Following Cheng (2010) and<cite> Eisenstein (2010)</cite> , we evaluated the geolocation model using mean and median error in km ("Mean" and "Median" resp.) and accuracy within 161km of the actual location ("Acc@161").
uses
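A sketch of these metrics: great-circle error in km between predicted and gold coordinates, summarized by mean, median, and Acc@161 (fraction of users within 161 km):

```python
from math import radians, sin, cos, asin, sqrt
from statistics import mean, median

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) pairs p and q."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geolocation_metrics(pred, gold):
    errs = [haversine_km(p, g) for p, g in zip(pred, gold)]
    acc161 = sum(e <= 161 for e in errs) / len(errs)
    return mean(errs), median(errs), acc161
```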
05fe3e9c1598f5b36b6efa79216309_0
An efficient method was presented to generate the next word in a sequence when an attention mechanism is added, improving the performance for long textual sequences <cite>[1]</cite>.
background
05fe3e9c1598f5b36b6efa79216309_1
The model is trained using two real-world datasets: BeerAdvocate [5] and Amazon book reviews <cite>[1]</cite> .
uses
05fe3e9c1598f5b36b6efa79216309_2
• Attention mechanism: the attention mechanism adaptively learns soft alignments c_t between character dependencies H_t and attention inputs a. Eq. 1 formally defines the new character dependencies H_t^attention using the attention layer <cite>[1]</cite>.
uses
05fe3e9c1598f5b36b6efa79216309_3
The characters are given by maximizing the softmax conditional probability p, based on the new character dependencies H_t^attention <cite>[1]</cite>, as presented in Eq. 2: p = softmax(H_t^attention W + b), char = argmax p. (2)
uses
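A sketch of Eq. 2 with assumed dimensions: the next character maximizes the softmax conditional probability computed from the attended dependencies H_t^attention:

```python
import torch

vocab_size, d = 64, 128                 # assumed sizes
H_att = torch.randn(1, d)               # H_t^attention from the attention layer
W = torch.randn(d, vocab_size)
b = torch.zeros(vocab_size)

p = torch.softmax(H_att @ W + b, dim=-1)  # p = softmax(H_t^attention W + b)
char = p.argmax(dim=-1)                   # char = argmax p
```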
05fe3e9c1598f5b36b6efa79216309_4
The model differs from recent works <cite>[1,</cite> 6], due to the use of an attention layer combined with a character-level LSTM.
differences
06276db79ed5aa04bb24a31c10d3a9_0
AMRs can be seen as graphs connecting concepts by relations. Each concept is represented by a named instance. Co-reference is established by re-using these instances. For example, the AMRs corresponding to examples (1) and (2) above are given in Figure 1 . Note that, due to the bracketing, the variable b encapsulates the whole entity person :name "Bob" and not just person, i.e. b stands for a person with the name Bob. That there is a lot to gain in this area can be seen by applying the AMR evaluation suite of Damonte et al. (2017) , which calculates nine different metrics to evaluate AMR parsing, reentrancy being one of them. Out of the four systems that made these scores available (all scores reported in<cite> van Noord and Bos (2017)</cite> ), the reentrancy metric obtained the lowest F-score for three of them.
background
06276db79ed5aa04bb24a31c10d3a9_1
Various methods have been proposed to automatically parse AMRs, ranging from syntax-based approaches (e.g. Flanigan et al. (2014) ; Wang et al. (2015) ; Pust et al. (2015) ; Damonte et al. (2017) ) to the more recent neural approaches (Peng et al. (2017) ; Buys and Blunsom (2017) ; Konstas et al. (2017) ; Foland and Martin (2017);<cite> van Noord and Bos (2017)</cite> ). Especially the neural approaches are interesting, since they all use some sort of linearization method and therefore need a predefined way to handle reentrancy.
motivation background
06276db79ed5aa04bb24a31c10d3a9_2
Foland and Martin (2017) and<cite> van Noord and Bos (2017)</cite> use the same input transformation as Konstas et al. (2017) , but do try to restore co-referring nodes by merging all equal concepts into a single concept in a post-processing step. All these methods have in common that they are not very sophisticated, but more importantly, that it is not clear what the exact impact of these methods is on the final performance of the model, making it unclear what the best implementation is for future neural AMR parsers.
motivation background
06276db79ed5aa04bb24a31c10d3a9_3
In this paper we present three methods to handle reentrancy for AMR parsing. The first two methods are based on the previous work described above, while the third is a new, more principled method. These methods are applied on the model that reported the best results in the literature, the character-level neural semantic parsing method of<cite> van Noord and Bos (2017)</cite> . In a nutshell, this method uses a character-based sequence-to-sequence model to translate sentences to AMRs. To enable this process, pre-processing and post-processing steps are needed.
uses background
06276db79ed5aa04bb24a31c10d3a9_4
Method 1B: Reentrancy Restoring This method is created to restore reentrancy nodes in the output of the baseline model. It operates on a very ad hoc principle: if two nodes have the same concept, the second one was actually a reference to the first one. We therefore replace each node that has already occurred in the AMR by the variable of the antecedent node. This approach was applied by<cite> van Noord and Bos (2017)</cite> and Foland and Martin (2017) .
uses
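A sketch of this restoring step; the node representation (ordered (variable, concept) pairs) is a simplification of real AMR post-processing:

```python
def restore_reentrancies(nodes):
    """If a concept has already occurred, replace the later node with a
    re-entrant reference to the variable of its antecedent."""
    seen = {}   # concept -> variable of its first occurrence
    out = []
    for var, concept in nodes:
        if concept in seen:
            out.append((seen[concept], None))  # reference, no new node
        else:
            seen[concept] = var
            out.append((var, concept))
    return out

# e.g. [("p", "person"), ("p2", "person")]: second becomes a reference to "p"
print(restore_reentrancies([("p", "person"), ("p2", "person")]))
```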
06276db79ed5aa04bb24a31c10d3a9_5
The parameter settings are the same as in<cite> van Noord and Bos (2017)</cite> and are shown in Table 2 .
uses
06276db79ed5aa04bb24a31c10d3a9_6
We test the impact of the different methods on two of our earlier models, described in<cite> van Noord and Bos (2017)</cite> .
uses
06276db79ed5aa04bb24a31c10d3a9_7
The second approach also employs the post-processing methods Wikification and pruning, as explained in<cite> van Noord and Bos (2017)</cite>.
uses background
06db17253d76150772c0926e11131d_0
Several large real image VQA datasets have recently emerged [8] [9]<cite> [10]</cite> [11] [12] [13] [14] .
background
06db17253d76150772c0926e11131d_1
However, it has been shown that they tend to exploit statistical regularities between answer occurrences and certain patterns in the question [24, <cite>10,</cite> 25, 23, 13] . While they are designed to merge information from both modalities, in practice they often answer without considering the image modality.
motivation
06db17253d76150772c0926e11131d_2
However, when evaluated on a test set that displays different statistical regularities, they usually suffer from a significant drop in accuracy<cite> [10,</cite> 25] .
motivation
06db17253d76150772c0926e11131d_3
We run extensive experiments on VQA-CP v2<cite> [10]</cite> and demonstrate the ability of RUBi to surpass current state-of-the-art results by a significant margin.
uses
06db17253d76150772c0926e11131d_4
VQA-CP v2 and VQA-CP v1<cite> [10]</cite> were recently introduced as diagnostic datasets containing different answer distributions for each question type between train and test splits.
background
06db17253d76150772c0926e11131d_5
VQA-CP v2 and VQA-CP v1<cite> [10]</cite> were recently introduced as diagnostic datasets containing different answer distributions for each question type between train and test splits. Consequently, models biased towards the question modality fail on these benchmarks. We use the more challenging VQA-CP v2 dataset extensively in order to show the ability of our approach to reduce the learning of biases coming from the question modality.
uses
06db17253d76150772c0926e11131d_6
However, even with this additional balancing, statistical biases from the question remain and can be leveraged<cite> [10]</cite> .
motivation
06db17253d76150772c0926e11131d_7
VQA models are inclined to learn unimodal biases from the datasets<cite> [10]</cite> .
background
06db17253d76150772c0926e11131d_8
However, even with this additional balancing, statistical biases from the question remain and can be leveraged<cite> [10]</cite> . That is why we propose an approach to reduce unimodal biases during training.
motivation
06db17253d76150772c0926e11131d_9
Experimental setup We train and evaluate our models on VQA-CP v2<cite> [10]</cite> .
uses
06db17253d76150772c0926e11131d_10
This accuracy corresponds to a gain of +5.94 percentage points over the current state-of-the-art UpDn + Q-Adv + DoE. It also corresponds to a gain of +15.88 over GVQA<cite> [10]</cite> , which is a specific architecture designed for VQA-CP.
differences
06db17253d76150772c0926e11131d_11
We report a drop of 1.94 percentage points with respect to our baseline, while<cite> [10]</cite> report a drop of 3.78 between GVQA and their SAN baseline.
differences
06de9a8e72b832beea9c2f17e0862a_0
Later on, pressure from language researchers forced us to replace it with terms such as "online memory minimization"<cite> [5]</cite> because our initial formulation was obscure to them.
uses background
06de9a8e72b832beea9c2f17e0862a_1
Our position is grounded on the high predictive power of that principle per se<cite> [5]</cite> .
uses
06de9a8e72b832beea9c2f17e0862a_2
For sociological reasons, these arguments started appearing in print many years later [20, <cite>5,</cite> 21] .
background
06de9a8e72b832beea9c2f17e0862a_4
Later on, pressure from language researchers forced us to replace it with terms such as "online memory minimization"<cite> [5]</cite> because our initial formulation was obscure to them.
uses background
06de9a8e72b832beea9c2f17e0862a_5
Our position is grounded on the high predictive power of that principle per se<cite> [5]</cite> .
uses
06de9a8e72b832beea9c2f17e0862a_6
For sociological reasons, these arguments started appearing in print many years later [20, <cite>5,</cite> 21] .
background
0706cab049274ffc82c5e2ef6f7b99_0
For example, the coding manual for the Switchboard DAMSL dialogue act annotation scheme (Jurafsky, Shriberg, and Biasca 1997, page 2) states that kappa is used to "assess labelling accuracy," and Di<cite> Eugenio and Glass (2004)</cite> relate reliability to "the objectivity of decisions," whereas Carletta (1996) regards reliability as the degree to which we understand the judgments that annotators are asked to make.
background
0706cab049274ffc82c5e2ef6f7b99_1
Di<cite> Eugenio and Glass (2004)</cite> identify three general classes of agreement statistics and suggest that all three should be used in conjunction in order to accurately evaluate coding schemes.
background
0706cab049274ffc82c5e2ef6f7b99_2
The justification given for using percentage agreement is that it does not suffer from what Di<cite> Eugenio and Glass (2004)</cite> referred to as the "prevalence problem."
differences
0706cab049274ffc82c5e2ef6f7b99_3
For example, Table 1 shows an example taken from Di<cite> Eugenio and Glass (2004)</cite> showing the classification of the utterance Okay as an acceptance or acknowledgment.
background
0706cab049274ffc82c5e2ef6f7b99_4
Di<cite> Eugenio and Glass (2004)</cite> perceive this as an "unpleasant behavior" of chance-corrected tests, one that prevents us from concluding that the example given in Table 1 shows satisfactory levels of agreement.
differences
0706cab049274ffc82c5e2ef6f7b99_5
The second class of agreement measure recommended in Di<cite> Eugenio and Glass (2004)</cite> is that of chance-corrected tests that do not assume an equal distribution of categories between coders.
background
0706cab049274ffc82c5e2ef6f7b99_6
Di<cite> Eugenio and Glass (2004)</cite> conclude with the proposal that these three forms of agreement measure collectively provide better means with which to judge agreement than any individual test.
background
0706cab049274ffc82c5e2ef6f7b99_7
The prevalent use of this criterion despite repeated advice that it is unlikely to be suitable for all studies (Carletta 1996; <cite>Di Eugenio and Glass 2004</cite>; Krippendorff 2004a ) is probably due to a desire for a simple system that can be easily applied to a scheme.
background
0763666190b6b4be1bcf494d7c6fe2_0
They also proposed an algorithm that uses successive splits and merges of semantic role clusters in order to improve their quality in<cite> (Lang and Lapata, 2011a)</cite>.
background
0763666190b6b4be1bcf494d7c6fe2_1
Following common practice<cite> (Lang and Lapata, 2011a</cite>; Titov and Klementiev, 2012) , we assume oracle argument identification and focus on argument labeling.
similarities uses
0763666190b6b4be1bcf494d7c6fe2_2
As done in<cite> (Lang and Lapata, 2011a)</cite> and (Titov and Klementiev, 2012) , we use purity and collocation measures to assess the quality of our role induction process.
similarities uses
0763666190b6b4be1bcf494d7c6fe2_3
In the same way as<cite> (Lang and Lapata, 2011a)</cite> , we use the micro-average obtained by weighting the scores for individual verbs proportionally to the number of argument instances for that verb.
similarities uses
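A sketch of these measures for a single verb, with clusters and gold roles represented as sets of argument-instance ids, plus the instance-weighted micro-average; this follows the standard definitions and is not code from the cited work:

```python
def purity(clusters, gold):
    """Average over clusters of their largest overlap with a gold role."""
    n = sum(len(c) for c in clusters)
    return sum(max(len(c & g) for g in gold) for c in clusters) / n

def collocation(clusters, gold):
    """Average over gold roles of their largest overlap with a cluster."""
    n = sum(len(g) for g in gold)
    return sum(max(len(c & g) for c in clusters) for g in gold) / n

def micro_average(scores, counts):
    """Weight per-verb scores by their number of argument instances."""
    return sum(s * c for s, c in zip(scores, counts)) / sum(counts)
```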
0763666190b6b4be1bcf494d7c6fe2_4
The baseline model is the "syntactic function" used for instance in<cite> (Lang and Lapata, 2011a)</cite> , which simply clusters predicate arguments according to the dependency relation to their head.
similarities
0763666190b6b4be1bcf494d7c6fe2_5
We did our best to follow the setup used in previous work<cite> (Lang and Lapata, 2011a</cite>; Titov and Klementiev, 2012), in order to compare with the current state of the art.
similarities uses
0763666190b6b4be1bcf494d7c6fe2_6
We can first note that, despite our efforts to reproduce the same baseline, there is still a difference between our baseline (Synt.Func.) and the baseline reported in<cite> (Lang and Lapata, 2011a)</cite>.
differences
0763666190b6b4be1bcf494d7c6fe2_7
The other results respectively correspond to the Split Merge approach presented in<cite> (Lang and Lapata, 2011a</cite> ) (Split Merge), the Graph Partitioning algorithm (Graph Part.) presented in (Lang and Lapata, 2011b) , and two Bayesian approaches presented in (Titov and Klementiev, 2012) , which achieve the best current unsupervised SRL results.
similarities
07b062d569749924fa6ee1b2223411_0
One popular approach is to use a log-linear parsing model and maximise the conditional likelihood function (Johnson et al., 1999; Riezler et al., 2002;<cite> Clark and Curran, 2004b</cite>; Malouf and van Noord, 2004; Miyao and Tsujii, 2005) .
background
07b062d569749924fa6ee1b2223411_1
In<cite> Clark and Curran (2004b)</cite> we use cluster computing resources to solve this problem.
background
07b062d569749924fa6ee1b2223411_2
Dynamic programming (DP) in the form of the inside-outside algorithm can be used to calculate the expectations, if the features are sufficiently local (Miyao and Tsujii, 2002) ; however, the memory requirements can be prohibitive, especially for automatically extracted, wide-coverage grammars. In<cite> Clark and Curran (2004b)</cite> we use cluster computing resources to solve this problem.
background
07b062d569749924fa6ee1b2223411_3
We use a lexicalized phrase-structure parser, the CCG parser of<cite> Clark and Curran (2004b)</cite> , together with a DP-based decoder.
uses
07b062d569749924fa6ee1b2223411_4
Previous discriminative models for CCG <cite>(Clark and Curran, 2004b)</cite> required cluster computing resources to train. In this paper we reduce the memory requirements from 20 GB of RAM to only a few hundred MB, but without greatly increasing the training time or reducing parsing accuracy.
motivation
07b062d569749924fa6ee1b2223411_5
2 The CCG Parser:<cite> Clark and Curran (2004b)</cite> describes the CCG parser.
background
07b062d569749924fa6ee1b2223411_6
In<cite> Clark and Curran (2004b)</cite> we use a cluster of 45 machines, together with a parallel implementation of the BFGS training algorithm, to solve this problem. The need for cluster computing resources presents a barrier to the development of further CCG parsing models.
motivation
07b062d569749924fa6ee1b2223411_7
In<cite> Clark and Curran (2004b)</cite> we use a cluster of 45 machines, together with a parallel implementation of the BFGS training algorithm, to solve this problem.
background
07b062d569749924fa6ee1b2223411_8
In this paper, Y is the set of possible CCG derivations and GEN(x) enumerates the set of derivations for sentence x. We use the same feature representation Φ(x, y) as in<cite> Clark and Curran (2004b)</cite> , to allow comparison with the log-linear model.
uses
07b062d569749924fa6ee1b2223411_9
A feature forest is essentially a packed chart with only the feature information retained (see Miyao and Tsujii (2002) and<cite> Clark and Curran (2004b)</cite> for the details).
background
07b062d569749924fa6ee1b2223411_10
For the log-linear parsing model in<cite> Clark and Curran (2004b)</cite> , the inside-outside algorithm is used to calculate feature expectations, which are then used by the BFGS algorithm to optimise the likelihood function.
background
07b062d569749924fa6ee1b2223411_11
We applied the same normal-form restrictions used in<cite> Clark and Curran (2004b)</cite> : categories can only combine if they have been seen to combine in Sections 2-21 of CCGbank, and only if they do not violate the Eisner (1996a) normal-form constraints.
uses
07b062d569749924fa6ee1b2223411_12
In<cite> Clark and Curran (2004b)</cite> we use a cluster of 45 machines, together with a parallel implementation of BFGS, to solve this problem, but need up to 20 GB of RAM. The feature forest representation, and our implementation of it, is so compact that the perceptron training requires only 20 MB of RAM.
differences
07b062d569749924fa6ee1b2223411_13
Following<cite> Clark and Curran (2004b)</cite>, accuracy is measured using F-score over the gold-standard predicate-argument dependencies in CCGbank.
uses
0924035155d4bbac7768c65fbe8f9a_1
Since the shared task graphs used relations between nodes which were often not easily mappable to native OpenCCG relations, we trained a maxent classifier to tag the most likely relation, as well as an auxiliary maxent classifier to POS tag the graph nodes, much like hypertagging <cite>(Espinosa et al., 2008)</cite> .
similarities
09f627b9a70966dc7b63316c56a2a0_0
Our NCE-trained language models achieve significantly lower perplexity on the One Billion Word Benchmark language modeling challenge, and contain one sixth of the parameters in the best single model in<cite> Chelba et al. (2013)</cite> .
differences
09f627b9a70966dc7b63316c56a2a0_1
Henceforth we will use terms like "RNN" and "LSTM" with the understanding that we are referring to language models that use these formalisms; such models have outperformed their count-based counterparts <cite>(Chelba et al., 2013</cite>; Zaremba et al., 2014; Mikolov, 2012).
differences
09f627b9a70966dc7b63316c56a2a0_2
Using our new objective, we train large multi-layer LSTMs on the One Billion Word benchmark<cite> (Chelba et al., 2013)</cite> , with its full 780k word vocabulary.
uses
09f627b9a70966dc7b63316c56a2a0_3
We achieve significantly lower perplexities with a single model, while using only a sixth of the parameters of a very strong baseline model<cite> (Chelba et al., 2013)</cite> .
differences
09f627b9a70966dc7b63316c56a2a0_4
The contributions in this paper are the following (code: www.github.com/isi-nlp/Zoph_RNN): • Significantly improved perplexities (43.2) on the One Billion Word benchmark over<cite> Chelba et al. (2013)</cite> • Extrinsic machine translation improvement over a strong baseline.
extends differences
09f627b9a70966dc7b63316c56a2a0_5
We conducted two series of experiments to validate the efficiency of our approach and the quality of the models we learned using it: An intrinsic study of language model perplexity using the standard One Billion Word benchmark<cite> (Chelba et al., 2013)</cite> and an extrinsic end-to-end statistical machine translation task that uses an LSTM as one of several feature functions in re-ranking.
uses
09f627b9a70966dc7b63316c56a2a0_6
For our language modeling experiment we use the One Billion Word benchmark proposed by<cite> Chelba et al. (2013)</cite> .
uses
09f627b9a70966dc7b63316c56a2a0_7
Our perplexity results are shown in Table 1 , where we get significantly lower perplexities than the best single model from<cite> Chelba et al. (2013)</cite> , while having almost 6 times fewer parameters.
differences
09f627b9a70966dc7b63316c56a2a0_8
Results (parameters, perplexity):<cite> Chelba et al. (2013)</cite>: 20m, 51.3; NCE (ours): 3.4m, 43.2. Recently, (Józefowicz et al., 2016) achieved state-of-the-art language modeling perplexities (30.0) on the billion word dataset with a single model, using importance sampling to approximate the normalization constant, Z(u).
differences
0a538968f0cd121a1ef63b58a0c9f7_1
We follow the same data split of 1115 training and 19 test conversations as in the baseline approach (Stolcke et al., 2000;<cite> Kalchbrenner and Blunsom, 2013)</cite> .
similarities uses
0a55859a36d0887ba4febc98762715_0
This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by <cite>Zhong et al. (2018)</cite>, which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features.
uses
0a55859a36d0887ba4febc98762715_1
Recently, <cite>Zhong et al. (2018)</cite> proposed a model based on training a binary classifier for each slot-value pair, the Global-Locally Self-Attentive encoder (GLAD), which employs recurrent and self-attention layers over each utterance and the previous system actions, and measures the similarity of these computed representations to each slot-value; it achieves state-of-the-art results on the WoZ and DSTC2 (Williams et al., 2013) datasets.
background
0a55859a36d0887ba4febc98762715_2
Recently, <cite>Zhong et al. (2018)</cite> proposed a model based on training a binary classifier for each slot-value pair, the Global-Locally Self-Attentive encoder (GLAD), which employs recurrent and self-attention layers over each utterance and the previous system actions, and measures the similarity of these computed representations to each slot-value; it achieves state-of-the-art results on the WoZ and DSTC2 (Williams et al., 2013) datasets. Although these neural models achieve state-of-the-art results on several benchmarks, they are still inefficient for deployment in production systems, due to their latency, which stems from using recurrent networks.
motivation
0a55859a36d0887ba4febc98762715_3
Recently, <cite>Zhong et al. (2018)</cite> proposed a model based on training a binary classifier for each slot-value pair, the Global-Locally Self-Attentive encoder (GLAD), which employs recurrent and self-attention layers over each utterance and the previous system actions, and measures the similarity of these computed representations to each slot-value; it achieves state-of-the-art results on the WoZ and DSTC2 (Williams et al., 2013) datasets. Although these neural models achieve state-of-the-art results on several benchmarks, they are still inefficient for deployment in production systems, due to their latency, which stems from using recurrent networks. In this paper, we propose a new encoder that improves on the GLAD architecture<cite> (Zhong et al., 2018)</cite>.
uses motivation