Columns: id (string, 32-33 chars), x (string, 41-1.75k chars), y (string, 4-39 chars)
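The records that follow use the three-field layout summarized above: an id, a citation context x containing <cite>…</cite> markers, and one or more intent labels y. As a minimal sketch (the three-lines-per-record layout of this dump is an assumption, and the helper names are ours), the flat dump can be regrouped into records like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CitationExample:
    id: str   # 32-33 character record identifier
    x: str    # citation context containing <cite>...</cite> markers
    y: str    # space-separated intent label(s), e.g. "uses background"

def parse_records(lines: List[str]) -> List[CitationExample]:
    """Group the flat dump into (id, x, y) records, three non-empty lines per record."""
    rows = [ln.strip() for ln in lines if ln.strip()]
    return [CitationExample(*rows[i:i + 3]) for i in range(0, len(rows), 3)]

# Example: examples = parse_records(open("dump.txt").readlines()); examples[0].y -> "uses"
```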
0a55859a36d0887ba4febc98762715_4
In this paper, we propose a new encoder that improves on the GLAD architecture<cite> (Zhong et al., 2018)</cite>.
uses
0a55859a36d0887ba4febc98762715_5
First, Section 2.1 explains the architecture of the recently proposed GLAD encoder<cite> (Zhong et al., 2018)</cite>, followed by our proposed encoder in Section 2.2.
uses background
0a55859a36d0887ba4febc98762715_6
Here, we employ a similar approach of learning slot-specific temporal and context representations of the user utterance and system actions, as proposed in GLAD<cite> (Zhong et al., 2018)</cite>.
uses
0a55859a36d0887ba4febc98762715_7
Scoring Model: We follow the architecture proposed in GLAD<cite> (Zhong et al., 2018)</cite> for computing the score of each slot-value pair in the user utterance and previous system actions.
uses
0a55859a36d0887ba4febc98762715_8
The joint goal is the accumulation of turn goals as described in <cite>Zhong et al. (2018)</cite> .
uses similarities
0a55859a36d0887ba4febc98762715_9
The evaluation metric is based on joint goal and turn-level request and joint goal tracking accuracy. The joint goal is the accumulation of turn goals as described in <cite>Zhong et al. (2018)</cite> .
uses
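To make the accumulation concrete, here is a minimal sketch of joint goal accuracy under the convention above, treating each turn goal as a set of (slot, value) pairs (the function and argument names are ours, not the cited implementation):

```python
def joint_goal_accuracy(turn_gold, turn_pred):
    """Accumulate turn-level goals into a joint goal and score each turn.

    turn_gold / turn_pred: lists (one entry per turn) of sets of (slot, value)
    pairs annotated/predicted for that turn. A turn counts as correct only if
    the accumulated joint goal matches exactly.
    """
    gold_state, pred_state = {}, {}
    correct = 0
    for gold, pred in zip(turn_gold, turn_pred):
        for slot, value in gold:
            gold_state[slot] = value
        for slot, value in pred:
            pred_state[slot] = value
        correct += int(gold_state == pred_state)
    return correct / len(turn_gold) if turn_gold else 0.0
```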
0a93feafef3ba2d4bb5360ff215171_0
Several metrics have been proposed recently for evaluating VQA systems (see section 2), but accuracy is still the most commonly used evaluation criterion [4, 11, 23, 42, 44, <cite>1</cite>, 5, 14, 45, 2] .
background
0a93feafef3ba2d4bb5360ff215171_1
In recent years, a number of VQA datasets have been proposed: VQA 1.0 [4] , VQA-abstract [<cite>1</cite>] , VQA 2.0 [47, 14] , FM-IQA [13] , DAQUAR [24] , COCO-QA [30] , Visual Madlibs [46] , Visual Genome [20] , VizWiz [16] , Visual7W [48] , TDIUC [18] , CLEVR [17] , SHAPES [3] , Visual Reasoning [34] , Embodied QA [7] . What all these resources have in common is the task for which they were designed: Given an image (either real or abstract) and a question in natural language, models are asked to correctly answer the question.
background
0a93feafef3ba2d4bb5360ff215171_2
Being simple to compute and interpret, this metric (henceforth, VQA3+) is the standard evaluation criterion for open-ended VQA [4, <cite>1</cite>, 16, 47, 14].
background
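For reference, the criterion referred to as VQA3+ credits a predicted answer by how many of the human annotators gave it, capped at three matches. A minimal sketch of the commonly cited form (the official evaluation script additionally averages over annotator subsets, which is omitted here):

```python
def vqa_accuracy(predicted: str, human_answers: list) -> float:
    """An answer counts as fully correct if at least 3 of the ~10 annotators gave it."""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# e.g. 2 of 10 annotators answered "red":
# vqa_accuracy("red", ["red", "red"] + ["blue"] * 8)  ->  0.667
```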
0a93feafef3ba2d4bb5360ff215171_3
Moreover, it only works with rigid semantic concepts, making it unsuitable for the phrasal or sentence answers that can be found in [4, <cite>1</cite>, 16, 47, 14].
background
0a93feafef3ba2d4bb5360ff215171_4
This is crucial since, as shown in Figure 3 , in current datasets the proportion of samples with a perfect inter-annotator agreement (i.e., 1 unique answer) is relatively low: 35% in VQA 1.0 [4] , 33% in VQA 2.0 [14] , 43% in VQA-abstract [<cite>1</cite>] , and only 3% in VizWiz [16] .
background
0a93feafef3ba2d4bb5360ff215171_5
We tested the validity of our metric by experimenting with four VQA datasets: VQA 1.0 [4] , VQA 2.0 [14] , VQA-abstract [<cite>1</cite>] , and VizWiz [16] .
uses
0a93feafef3ba2d4bb5360ff215171_6
To enable a fair comparison across the datasets, for each dataset we followed the same pipeline: The standard VQA model used in [<cite>1</cite>] was trained on the training split and tested on the validation split.
uses
0ae49d1618e18eb794666543d924ed_0
By adding very simple CLM-based features to the system, our scores approach those of a state-of-the-art NER system<cite> (Lample et al., 2016)</cite> across multiple languages, demonstrating both the unique importance and the broad utility of this approach.
similarities
0ae49d1618e18eb794666543d924ed_1
4 We compare the CLM's Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTM-CRF<cite> (Lample et al., 2016)</cite> .
uses
0ae49d1618e18eb794666543d924ed_2
4 We compare the CLM's Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTM-CRF<cite> (Lample et al., 2016)</cite>. As Table 2 shows, the result of the Ngram CLM, which yields the highest performance, is remarkably close to the results of state-of-the-art NER systems (especially for English) given the simplicity of the model.
similarities
0ae49d1618e18eb794666543d924ed_3
CogCompNER is run with standard features, including Brown clusters;<cite> (Lample et al., 2016)</cite> is run with default parameters and pre-trained embeddings.
uses
0ae49d1618e18eb794666543d924ed_4
We compare with the state-of-theart character-level neural NER system of<cite> (Lample et al., 2016)</cite> , which inherently encodes comparable information to CLMs, as a way to investigate how much of that system's performance can be attributed directly to name-internal structure.
uses
0ae49d1618e18eb794666543d924ed_5
The results in Table 3 show that for six of the eight languages we studied, the baseline NER can be significantly improved by adding simple CLM features; for English and Arabic, it performs better even than the neural NER model of<cite> (Lample et al., 2016)</cite> .
differences
0ae49d1618e18eb794666543d924ed_6
While the end-to-end model developed by<cite> (Lample et al., 2016)</cite> clearly includes information comparable to that in the CLM, it requires a fully annotated NER corpus, takes significant time and computational resources to train, and is non-trivial to integrate into a new NER system.
motivation
0ae49d1618e18eb794666543d924ed_7
While the end-to-end model developed by<cite> (Lample et al., 2016)</cite> clearly includes information comparable to that in the CLM, it requires a fully annotated NER corpus, takes significant time and computational resources to train, and is non-trivial to integrate into a new NER system. The CLM approach captures a very large fraction of the entity/non-entity distinction capacity of full NER systems, and can be rapidly trained using only entity and non-entity token lists -i.e., it is corpus-agnostic.
motivation differences
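A minimal sketch of this corpus-agnostic idea: train one character n-gram LM on a list of entity tokens and one on non-entity tokens, then compare which model assigns a token the higher (length-normalized) probability. The names and the bigram order here are illustrative, not the cited systems' implementations:

```python
import math
from collections import Counter

class CharBigramLM:
    """Add-one-smoothed character bigram LM trained on a token list."""
    def __init__(self, tokens, bos="^", eos="$"):
        self.bos, self.eos = bos, eos
        self.bigrams, self.context = Counter(), Counter()
        for tok in tokens:
            chars = [bos] + list(tok.lower()) + [eos]
            self.context.update(chars[:-1])
            self.bigrams.update(zip(chars[:-1], chars[1:]))
        self.vocab = len(set(self.context)) + 1

    def logprob(self, token):
        chars = [self.bos] + list(token.lower()) + [self.eos]
        lp = sum(math.log((self.bigrams[(p, c)] + 1) / (self.context[p] + self.vocab))
                 for p, c in zip(chars[:-1], chars[1:]))
        return lp / max(len(token), 1)   # length-normalized

entity_lm = CharBigramLM(["obama", "berlin", "google"])   # entity token list
other_lm = CharBigramLM(["the", "quickly", "said"])       # non-entity token list

def looks_like_entity(token):
    return entity_lm.logprob(token) > other_lm.logprob(token)
```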
0ae49d1618e18eb794666543d924ed_8
<cite>Lample et al. (2016)</cite> use character embeddings in an LSTM-CRF model.
background
0af8cacc0f85bb557e1943e32450e2_0
We present a replication study of BERT pretraining<cite> (Devlin et al., 2019)</cite> that carefully measures the impact of many key hyperparameters and training data size.
uses
0af8cacc0f85bb557e1943e32450e2_1
Self-training methods such as ELMo (Peters et al., 2018) , GPT (Radford et al., 2018) , BERT<cite> (Devlin et al., 2019)</cite> , XLM (Lample and Conneau, 2019) , and XLNet have brought significant performance gains, but it can be challenging to determine which aspects of the methods contribute the most.
motivation background
0af8cacc0f85bb557e1943e32450e2_2
We present a replication study of BERT pretraining<cite> (Devlin et al., 2019)</cite>, which includes a careful evaluation of the effects of hyperparameter tuning and training set size.
uses
0af8cacc0f85bb557e1943e32450e2_3
We present a replication study of BERT pretraining<cite> (Devlin et al., 2019)</cite>, which includes a careful evaluation of the effects of hyperparameter tuning and training set size. We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods.
extends
0af8cacc0f85bb557e1943e32450e2_4
In this section, we give a brief overview of the BERT<cite> (Devlin et al., 2019)</cite> pretraining approach and some of the training choices that we will examine experimentally in the following section.
uses background
0af8cacc0f85bb557e1943e32450e2_5
Unlike<cite> Devlin et al. (2019)</cite> , we do not randomly inject short sequences, and we do not train with a reduced sequence length for the first 90% of updates.
differences
0af8cacc0f85bb557e1943e32450e2_6
Our finetuning procedure follows the original BERT paper<cite> (Devlin et al., 2019)</cite> .
uses
0af8cacc0f85bb557e1943e32450e2_7
For SQuAD V1.1 we adopt the same span prediction method as BERT<cite> (Devlin et al., 2019)</cite> .
uses
0af8cacc0f85bb557e1943e32450e2_8
Results Table 1 compares the published BERT BASE results from<cite> Devlin et al. (2019)</cite> to our reimplementation with either static or dynamic masking.
uses
0af8cacc0f85bb557e1943e32450e2_9
Results Table 1 compares the published BERT BASE results from<cite> Devlin et al. (2019)</cite> to our reimplementation with either static or dynamic masking. We find that our reimplementation with static masking performs similar to the original BERT model, and dynamic masking is comparable or slightly better than static masking.
uses similarities
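To make the static/dynamic distinction concrete, here is a minimal sketch over generic token-ID sequences (the mask-token id, the 15% rate, and the omission of BERT's 80/10/10 corruption split are simplifications of ours):

```python
import random

MASK_ID = 103   # illustrative mask-token id

def mask_once(tokens, p=0.15, rng=random):
    """Return a copy of `tokens` with roughly a fraction p of positions masked."""
    return [MASK_ID if rng.random() < p else t for t in tokens]

corpus = [[5, 8, 13, 21, 34], [2, 7, 1, 9]]   # toy token-id sequences

# Static masking: each example is corrupted once during preprocessing,
# so the same positions stay masked in every epoch.
static_corpus = [mask_once(ex) for ex in corpus]

# Dynamic masking: the mask is redrawn every time an example is served,
# so each epoch sees a fresh masking pattern.
def dynamic_batches(corpus, epochs):
    for _ in range(epochs):
        for ex in corpus:
            yield mask_once(ex)
```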
0af8cacc0f85bb557e1943e32450e2_10
• SEGMENT-PAIR+NSP: This follows the original input format used in BERT<cite> (Devlin et al., 2019)</cite>, with the NSP loss.
uses
0af8cacc0f85bb557e1943e32450e2_11
We first compare the original SEGMENT-PAIR input format from<cite> Devlin et al. (2019)</cite> to the SENTENCE-PAIR format; both formats retain the NSP loss, but the latter uses single sentences.
uses
0af8cacc0f85bb557e1943e32450e2_12
We find that this setting outperforms the originally published BERT BASE results and that removing the NSP loss matches or slightly improves downstream task performance, in contrast to<cite> Devlin et al. (2019)</cite> .
differences
0af8cacc0f85bb557e1943e32450e2_13
The original BERT implementation<cite> (Devlin et al., 2019)</cite> uses a character-level BPE vocabulary of size 30K, which is learned after preprocessing the input with heuristic tokenization rules.
background
0af8cacc0f85bb557e1943e32450e2_14
The original BERT implementation<cite> (Devlin et al., 2019)</cite> uses a character-level BPE vocabulary of size 30K, which is learned after preprocessing the input with heuristic tokenization rules. Following Radford et al. (2019) , we instead consider training BERT with a larger byte-level BPE vocabulary containing 50K subword units, without any additional preprocessing or tokenization of the input.
background differences
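To illustrate the byte-level starting point (a toy example, not the actual GPT-2/RoBERTa tokenizer): the base vocabulary is the 256 possible byte values of the UTF-8 encoding, so any input string is representable without unknown symbols, and BPE merges are then learned over byte sequences rather than pre-tokenized characters.

```python
text = "naïve café"
byte_symbols = list(text.encode("utf-8"))
# 10 characters become 12 base symbols, because 'ï' and 'é' each occupy 2 bytes;
# every symbol is in 0-255, so no <unk> token is ever needed.
print(len(byte_symbols), byte_symbols[:6])
```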
0af8cacc0f85bb557e1943e32450e2_15
For example, the recently proposed XLNet architecture is pretrained using nearly 10 times more data than the original BERT<cite> (Devlin et al., 2019)</cite>.
differences
0af8cacc0f85bb557e1943e32450e2_16
We pretrain for 100K steps over a comparable BOOK-CORPUS plus WIKIPEDIA dataset as was used in<cite> Devlin et al. (2019)</cite> .
similarities
0af8cacc0f85bb557e1943e32450e2_17
This formulation significantly simplifies the task, but is not directly comparable to BERT<cite> (Devlin et al., 2019)</cite> .
differences
0af8cacc0f85bb557e1943e32450e2_18
In particular, while both BERT<cite> (Devlin et al., 2019)</cite> and XLNet augment their training data with additional QA datasets, we only finetune RoBERTa using the provided SQuAD training data.
differences
0af8cacc0f85bb557e1943e32450e2_19
For SQuAD v1.1 we follow the same finetuning procedure as<cite> Devlin et al. (2019)</cite> .
uses
0af8cacc0f85bb557e1943e32450e2_20
Most of the top systems build upon either BERT<cite> (Devlin et al., 2019)</cite> or XLNet , both of which rely on additional external training data.
background
0af8cacc0f85bb557e1943e32450e2_21
Most of the top systems build upon either BERT<cite> (Devlin et al., 2019)</cite> or XLNet , both of which rely on additional external training data. In contrast, our submission does not use any additional data.
differences
0af8cacc0f85bb557e1943e32450e2_22
Pretraining methods have been designed with different training objectives, including language modeling (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018) , machine translation (McCann et al., 2017) , and masked language modeling <cite>(Devlin et al., 2019</cite>; Lample and Conneau, 2019) .
background
0af8cacc0f85bb557e1943e32450e2_23
Performance is also typically improved by training bigger models on more data <cite>(Devlin et al., 2019</cite>; Yang et al., 2019; Radford et al., 2019) .
background
0b2e3651610aba4bd7150eee50797f_0
These approaches were either complicated (Ma et al., 2007; Chang et al., 2008; Ma and Way, 2009; Paul et al., 2010) or of high computational complexity (Chung and Gildea, 2009;<cite> Duan et al., 2010)</cite>.
background
0b2e3651610aba4bd7150eee50797f_1
However, this kind of error cannot be fixed by methods that learn new words by packing already segmented words, such as word packing (Ma et al., 2007) and Pseudo-word <cite>(Duan et al., 2010)</cite>.
background
0b2e3651610aba4bd7150eee50797f_2
In this setting, we gradually set the phrase length and the distortion limits of the phrase-based decoder (context size) to 7, 9, 11 and 13, in order to remove the disadvantage of the shorter context size that comes from using characters as the WSR, for a fair comparison with WordSys, as suggested by<cite> Duan et al. (2010)</cite>.
uses
0b334057bc358f5537497ed15344c1_1
This is probably the reason for the growing interest in the creation of annotated corpora [4]; the development of methods for augmenting existing annotation [5], speeding up the annotation process [5], and reducing its cost; evaluating the comparability of results obtained by applying the same methods to different collections<cite> [6]</cite>; and increasing the compatibility of different annotations [7].
background
0b334057bc358f5537497ed15344c1_2
Increasingly sophisticated relation extraction methods <cite>[6,</cite> 8] are being applied to a broader set of relations [9].
background
0c233d68fb2ccdf033fc6a08c8f4bf_0
The goal of the <cite>Penn Discourse Treebank (PDTB)</cite> project is to develop a large-scale corpus, annotated with coherence relations marked by discourse connectives. Currently, the primary application of the <cite>PDTB</cite> annotation has been to news articles.
background
0c233d68fb2ccdf033fc6a08c8f4bf_1
In this study, we tested whether the <cite>PDTB</cite> guidelines can be adapted to a different genre.
motivation
0c233d68fb2ccdf033fc6a08c8f4bf_2
In this study, we tested whether the <cite>PDTB</cite> guidelines can be adapted to a different genre. We annotated discourse connectives and <cite>their</cite> arguments in one 4,937-token full-text biomedical article.
uses
0c233d68fb2ccdf033fc6a08c8f4bf_3
Thus our experiments suggest that the <cite>PDTB</cite> annotation can be adapted to new domains by minimally adjusting the guidelines and by adding some further domain-specific linguistic cues.
extends
0c233d68fb2ccdf033fc6a08c8f4bf_4
The <cite>Penn Discourse Treebank (PDTB)</cite> (http://www.seas.upenn.edu/~pdtb) (<cite>Prasad et al. 2008a</cite>) annotates the argument structure, semantics, and attribution of discourse connectives and their arguments.
background
0c233d68fb2ccdf033fc6a08c8f4bf_5
This work examines whether the <cite>PDTB</cite> annotation guidelines can be adapted to a different genre, the biomedical literature.
motivation
0c233d68fb2ccdf033fc6a08c8f4bf_6
Following the <cite>PDTB</cite> annotation manual (Prasad et al. 2008b ), we conducted a pilot annotation of discourse connectivity in biomedical text.
uses
0c233d68fb2ccdf033fc6a08c8f4bf_7
When the annotation work was completed, we measured the inter-annotator agreement, following the <cite>PDTB</cite> exact match criterion (Miltsakaki et al. 2004 ).
uses
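A minimal sketch of an exact-match agreement computation in this spirit, assuming each annotator's argument span is represented by token offsets and relations are aligned between annotators (the representation is ours, not the PDTB tooling):

```python
def exact_match_agreement(spans_a, spans_b):
    """Fraction of aligned relations where both annotators marked identical spans.

    spans_a / spans_b: one (start, end) token-offset pair per relation,
    in the same relation order for the two annotators.
    """
    agree = sum(1 for a, b in zip(spans_a, spans_b) if a == b)
    return agree / len(spans_a) if spans_a else 0.0
```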
0c233d68fb2ccdf033fc6a08c8f4bf_8
We discussed the annotation results and made suggestions to adapt the <cite>PDTB</cite> guidelines to biomedical text.
extends
0c233d68fb2ccdf033fc6a08c8f4bf_9
The <cite>PDTB</cite> also reported a higher level of agreement in annotating Arg2 than in annotating Arg1 (Miltsakaki et al. 2004) .
background
0c233d68fb2ccdf033fc6a08c8f4bf_10
The overall agreement for the 68 discourse relations is 45.6% for exact match, 45.6% for Arg1, and 79.4% for Arg2. The <cite>PDTB</cite> also reported a higher level of agreement in annotating Arg2 than in annotating Arg1 (Miltsakaki et al. 2004) . We manually analyzed the cases with disagreement.
differences
0c233d68fb2ccdf033fc6a08c8f4bf_11
After the completion of the pilot annotation and the discussion, we decided to add the following conventions to the <cite>PDTB</cite> annotation guidelines to address the characteristics of biomedical text: i. Citation references are to be annotated as a part of an argument because the inclusion will benefit many text-mining tasks including identifying the semantic relations among citations.
extends
0c233d68fb2ccdf033fc6a08c8f4bf_12
We will annotate a wider variety of nominalizations as arguments than allowed by the <cite>PDTB</cite> guidelines.
extends
0c3f9588b6f587d04c286384ca24e0_0
In this paper we aim to improve the state-of-the-art for the task of learning a TAG supertagger from an annotated treebank <cite>(Kasai et al., 2018)</cite> .
uses
0c3f9588b6f587d04c286384ca24e0_1
Our experimental results show that our novel multi-task learning framework leads to a new state-of-the-art accuracy score of 91.39% for TAG supertagging on the Penn Treebank dataset (Marcus et al., 1993; Chen et al., 2006) which is a significant improvement over the previous multi-task result for supertagging that combines supertagging with graph-based parsing <cite>(Kasai et al., 2018)</cite> .
differences
0c3f9588b6f587d04c286384ca24e0_2
Neural linear-time transition based parsers are still not accurate enough to compete with the state-of-the-art supertagging models or parsers that use supertagging as the initial step (Chung et al., 2016;<cite> Kasai et al., 2018)</cite> .
background
0c3f9588b6f587d04c286384ca24e0_3
For our baseline supertagging model we use the state-of-the-art model that currently has the highest accuracy on the Penn Treebank dataset <cite>(Kasai et al., 2018)</cite>. For the supertagging model, the main contribution of<cite> Kasai et al. (2018)</cite> was two-fold: first, adding a character CNN for modeling word embeddings using subword features; second, adding highway connections to allow more layers in a standard bidirectional LSTM.
uses
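A minimal PyTorch-style sketch of the character-CNN word representation mentioned here (dimensions, kernel size, and names are illustrative, not the cited code): characters are embedded, convolved, and max-pooled into a fixed-size vector that is concatenated with the word embedding before the BiLSTM.

```python
import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    def __init__(self, n_chars, char_dim=30, out_dim=30, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel_size=kernel, padding=1)

    def forward(self, char_ids):                  # (batch, max_word_len)
        x = self.char_emb(char_ids)               # (batch, len, char_dim)
        x = self.conv(x.transpose(1, 2))          # (batch, out_dim, len)
        return torch.relu(x).max(dim=2).values    # (batch, out_dim) per-word vector
```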
0c3f9588b6f587d04c286384ca24e0_4
Another extension to the standard sequence prediction model in<cite> Kasai et al. (2018)</cite> was to combine supertagging with graph-based parsing.
extends
0c3f9588b6f587d04c286384ca24e0_5
Following <cite>(Kasai et al., 2018)</cite>, we use two components in the word embedding: • a 30-dimensional character-level embedding vector computed using a char-CNN, which captures morphological information (Santos and Zadrozny, 2014; Chiu and Nichols, 2016; Ma and Hovy, 2016;<cite> Kasai et al., 2018)</cite>.
uses
0c3f9588b6f587d04c286384ca24e0_6
Unlike <cite>(Kasai et al., 2018)</cite> we do not use predicted part of speech (POS) tags as part of the input sequence.
differences
0c3f9588b6f587d04c286384ca24e0_7
For the hyperparameters, we use the settings in<cite> Kasai et al. (2018)</cite> in order to ensure a fair comparison.
uses
0c3f9588b6f587d04c286384ca24e0_8
Unlike <cite>(Kasai et al., 2018)</cite> we do not use highway connections in our model.
differences
0c3f9588b6f587d04c286384ca24e0_9
In our case, because we re-use the same training set for multi-task learning, we have made sure our experimental settings exactly match the previous best state-of-the-art method for supertagging <cite>(Kasai et al., 2018)</cite> and we use the same pre-trained word embeddings to ensure a fair comparison.
uses
0c3f9588b6f587d04c286384ca24e0_10
We use the dataset that has been widely used by previous work in supertagging and TAG parsing (Bangalore et al., 2009; Chung et al., 2016; Friedman et al., 2017;<cite> Kasai et al., 2018</cite>).
uses
0c3f9588b6f587d04c286384ca24e0_12
HW<cite> (Kasai et al., 2018)</cite> refers to highway connections, and POS refers to the use of predicted part-of-speech tags as inputs. We do not use HW or POS in our models as they do not provide any benefit.
differences
0c3f9588b6f587d04c286384ca24e0_13
Neural network based supertagging models in TAG <cite>(Kasai et al., 2018)</cite> and CCG (Xu Lewis et al., 2016; Xu, 2016; Vaswani et al., 2016) have shown substantial improvement in performance, but the supertagging models are all quite similar as they all use a bi-directional RNN feeding into a prediction layer.
background
0c3f9588b6f587d04c286384ca24e0_14
<cite>(Kasai et al., 2018)</cite> combines supertagging with parsing which does provide state-of-the-art accuracy but at the expense of computational complexity.
background
0c3f9588b6f587d04c286384ca24e0_15
extends the BiLSTM model with predicted part-of-speech tags and suffix embeddings as inputs; then<cite> Kasai et al. (2018)</cite> further extends the BiLSTM model with highway connections as well as a character CNN as input, and jointly trains the supertagging model with a parsing model; this work had the state-of-the-art accuracy on the Penn Treebank dataset before our paper.
background
0cc576e90c5ee2af043e09234792f5_0
Finally, it would be interesting to determine whether using ASs extracted from a corpus of native texts enables a better prediction than that obtained by using the simple frequency of the unigrams and bigrams<cite> (Yannakoudakis et al., 2011)</cite> .
future_work
0cc576e90c5ee2af043e09234792f5_1
Dataset: The analyses were conducted on the First Certificate in English (FCE) ESOL examination scripts described in <cite>Yannakoudakis et al. (2011</cite>, 2012).
similarities uses
0cc576e90c5ee2af043e09234792f5_2
As in<cite> Yannakoudakis et al. (2011)</cite> , the 1141 texts from the year 2000 were used for training, while the 97 texts from the year 2001 were used for testing.
similarities
0cc576e90c5ee2af043e09234792f5_3
Lexical Features: As a benchmark for comparison, the lexical features that were shown to be good predictors of the quality of the texts in this dataset<cite> (Yannakoudakis et al., 2011)</cite> were chosen.
similarities uses
0cc576e90c5ee2af043e09234792f5_4
These features were extracted as described in<cite> Yannakoudakis et al. (2011)</cite> ; the only difference is that they used the RASP tagger and not the CLAWS tagger.
extends differences
0cc576e90c5ee2af043e09234792f5_5
Supervised Learning Approach and Evaluation: As in<cite> Yannakoudakis et al. (2011)</cite>, the automated scoring task was treated as a rank-preference learning problem by means of the SVM-Rank package (Joachims, 2006), which is a much faster version of the SVM-Light package used by<cite> Yannakoudakis et al. (2011)</cite>.
extends differences
0cc576e90c5ee2af043e09234792f5_6
Since the quality ratings are distributed on a zero to 40 scale, I chose Pearson's correlation coefficient, also used by<cite> Yannakoudakis et al. (2011)</cite> , as the measure of performance.
similarities uses
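For concreteness, the evaluation then reduces to correlating predicted scores with the examiners' scores on the 0-40 scale; a minimal sketch with SciPy (the arrays are illustrative, not the paper's data):

```python
from scipy.stats import pearsonr

examiner_scores = [24, 31, 18, 36, 27]     # gold ratings on the 0-40 scale
predicted_scores = [22, 33, 20, 35, 25]    # model outputs

r, p_value = pearsonr(examiner_scores, predicted_scores)
print(f"Pearson's r = {r:.3f}")
```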
0cc576e90c5ee2af043e09234792f5_7
To get an idea of how well the collocational and lexical features perform, the correlations in Table 2 can be compared to the average correlation between the Examiners' scores reported by<cite> Yannakoudakis et al. (2011)</cite>, which gives an upper bound of 0.80, while all models with more than three bins obtain a correlation of at least 0.75.
similarities
0d06c8509ebbdc61985bebcdb26e6c_0
In a similar work, Mnih et al. <cite>[13]</cite> proposed to use Noise Contrastive Estimation (NCE) [14] to speed up the training.
background
0d06c8509ebbdc61985bebcdb26e6c_1
Hence, an adaptive IS may use a large number of samples to solve this problem, whereas NCE is more stable and requires a fixed small number of noise samples (e.g., 100) to achieve good performance<cite> [13,</cite> 16].
background
0d06c8509ebbdc61985bebcdb26e6c_2
Furthermore, we can show that this solution optimally approximates the sampling from a unigram distribution, which has been shown to be a good noise distribution choice<cite> [13,</cite> 16] .
background
0d06c8509ebbdc61985bebcdb26e6c_3
Hence, we focus our experiments solely on NCE as a major approach to achieve this goal [17,<cite> 13,</cite> 16], in comparison to the reference full softmax function.
uses
0d06c8509ebbdc61985bebcdb26e6c_4
Following the setup proposed in<cite> [13,</cite> 16] , S-NCE uses K = 100 noise samples, whereas B-NCE uses only the target words in the batch (K=0).
uses
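A minimal numerical sketch of the per-token NCE objective being contrasted with the full softmax here, with K noise samples drawn from a unigram noise distribution (notation and helper names are ours): each word is classified against the noise distribution with a logistic loss instead of normalizing over the whole vocabulary.

```python
import numpy as np

def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
    """target_score / noise_scores: unnormalized model scores s(w) for the true word
    and the k sampled noise words; *_prob(s): their noise-distribution probabilities q(w)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # P(word is real) = sigmoid(s(w) - log(k * q(w)))
    loss = -np.log(sigmoid(target_score - np.log(k * target_noise_prob)))
    loss -= np.sum(np.log(1.0 - sigmoid(np.asarray(noise_scores) - np.log(k * np.asarray(noise_probs)))))
    return loss

# e.g. K = 100 noise samples per target, as in the S-NCE setting above:
# nce_loss(2.1, np.random.randn(100), 1e-4, np.full(100, 1e-4), k=100)
```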
0d1fb27d847ca44af36862cf78744e_0
In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990;<cite> Kahane et al., 1998</cite>; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003) .
background
0d1fb27d847ca44af36862cf78744e_1
First, the training data for the parser is projectivized by applying a minimal number of lifting operations<cite> (Kahane et al., 1998)</cite> and encoding information about these lifts in arc labels.
background
0d1fb27d847ca44af36862cf78744e_3
As observed by <cite>Kahane et al. (1998)</cite>, any (non-projective) dependency graph can be transformed into a projective one by a lifting operation, which replaces each non-projective arc w_j → w_k by a projective arc w_i → w_k such that w_i →* w_j holds in the original graph.
background
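A minimal sketch of such a lifting transformation on a head array (0 denotes the artificial root; this simplified routine repeatedly re-attaches the dependent of a non-projective arc to its head's head, following the spirit of the operation rather than the cited procedure):

```python
def dominates(heads, anc, node):
    """True if `anc` is an ancestor of `node`; heads[i] is the head of token i, 0 = root."""
    while node != 0:
        node = heads[node]
        if node == anc:
            return True
    return False

def arc_is_projective(heads, h, d):
    lo, hi = sorted((h, d))
    return all(dominates(heads, h, k) for k in range(lo + 1, hi))

def projectivize(heads):
    """Lift w_j -> w_k to w_i -> w_k (with w_i ->* w_j) until every arc is projective."""
    heads = list(heads)
    changed = True
    while changed:
        changed = False
        for d in range(1, len(heads)):
            h = heads[d]
            if h != 0 and not arc_is_projective(heads, h, d):
                heads[d] = heads[h]   # lift the dependent to its head's head
                changed = True
    return heads
```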
0d1fb27d847ca44af36862cf78744e_4
Using the terminology of <cite>Kahane et al. (1998)</cite> , we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.
uses
0d1fb27d847ca44af36862cf78744e_5
Unlike <cite>Kahane et al. (1998)</cite> , we do not regard a projectivized representation as the final target of the parsing process.
differences
0d798fcdee6ee5722d6dc5638210c2_0
Recent state-of-the-art models (Wang et al., 2018;<cite> Fried et al., 2018b</cite>; Ma et al., 2019) have demonstrated large gains in accuracy on the VLN task.
background
0d798fcdee6ee5722d6dc5638210c2_1
In this paper, we find that agents without any visual input can achieve competitive performance, matching or even outperforming their vision-based counterparts under two state-of-the-art models<cite> (Fried et al., 2018b</cite>; Ma et al., 2019).
motivation
0d798fcdee6ee5722d6dc5638210c2_2
In this paper, we show that the same trends hold for two recent state-of-the-art architectures (Ma et al., 2019;<cite> Fried et al., 2018b)</cite> for the VLN task; we also analyze to what extent object-based representations and mixture-of-experts methods can address these issues.
similarities