Columns: id (string, 32–33 chars) · x (string, 41–1.75k chars) · y (string, 4–39 chars)
0d798fcdee6ee5722d6dc5638210c2_3
In this work, we analyze two recent VLN models, which typify the visual grounding approaches of VLN work: the panoramic "follower" model from the Speaker-Follower (SF) system of<cite> Fried et al. (2018b)</cite> and the Self-Monitoring (SM) model of Ma et al. (2019) .
similarities uses
0d798fcdee6ee5722d6dc5638210c2_4
We compare performance on the validation sets of the R2R dataset: the val-seen split, consisting of the same environments as in training, and the val-unseen split. Table 1: Success rate (SR) of the vision-based full agent ("RN", using ResNet) and the non-visual agent ("no vis.", setting all visual features to zero) on the R2R dataset under different model architectures (Speaker-Follower (SF) <cite>(Fried et al., 2018b)</cite> and Self-Monitoring (SM) (Ma et al., 2019)) and training schemes.
extends differences
0d798fcdee6ee5722d6dc5638210c2_5
We then use the same visual attention mechanism as in <cite>Fried et al. (2018b)</cite> and Ma et al. (2019) to obtain an attended object representation x_{obj,att} over these {x_{obj,j}} vectors.
similarities uses
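A minimal sketch of such a soft attention step over per-object feature vectors, assuming a simple bilinear scoring function (the names, dimensions and scoring function are illustrative, not the exact formulation of the cited models):

```python
import torch
import torch.nn.functional as F

def attend_objects(h, x_obj, W):
    """Soft attention over object vectors.

    h:     agent/decoder hidden state, shape (d_h,)
    x_obj: per-object features {x_obj_j}, shape (n_objects, d_o)
    W:     bilinear scoring matrix, shape (d_h, d_o)  (hypothetical parametrization)
    Returns the attended representation x_obj_att of shape (d_o,).
    """
    scores = x_obj @ (W.T @ h)          # one scalar score per object
    alpha = F.softmax(scores, dim=0)    # attention weights over objects
    return alpha @ x_obj                # weighted sum: x_obj_att

h = torch.randn(512)
x_obj = torch.randn(8, 2048)            # e.g. 8 detected objects
W = torch.randn(512, 2048)
x_obj_att = attend_objects(h, x_obj, W)
```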
0d798fcdee6ee5722d6dc5638210c2_7
The Speaker-Follower (SF) model <cite>(Fried et al., 2018b)</cite> and the Self-Monitoring (SM) model (Ma et al., 2019), which we analyze, both use a sequence-to-sequence model (Cho et al., 2014) with attention (Bahdanau et al., 2015) as their base instruction-following agent.
similarities
0e4ca87c0e2b899bfd1f36dc5974b9_0
For other languages, one could retrain a language-specific model using the BERT architecture [Martin et al., 2019; de Vries et al., 2019] or employ existing pre-trained multilingual BERT-based models [Devlin et al., 2019; <cite>Conneau et al., 2019</cite>; <cite>Conneau and Lample, 2019</cite>].
background
0e4ca87c0e2b899bfd1f36dc5974b9_1
In terms of Vietnamese language modeling, to the best of our knowledge, there are two main concerns: (i) The Vietnamese Wikipedia corpus is the only data used to train all monolingual language models , and it also is the only Vietnamese dataset included in the pre-training data used by all multilingual language models except XLM-R<cite> [Conneau et al., 2019]</cite> .
motivation background
0e4ca87c0e2b899bfd1f36dc5974b9_2
For NLI, we use the Vietnamese validation and test sets from the XNLI corpus v1.0 <cite>[Conneau et al., 2018]</cite>, where the Vietnamese training data is machine-translated from English.
uses
0e5c3df8309dbaf93d10c94fb292fc_0
The model improves over previous work on reference resolution applied to the same data (Iida et al., 2010;<cite> Iida et al., 2011)</cite> .
differences
0e5c3df8309dbaf93d10c94fb292fc_1
It has been shown that incorporating gaze improves RR in a situated setting because speakers need to look at the objects they are describing and distinguish them from distractors: this has been shown in a static scene on a computer screen (Prasov and Chai, 2008), in human-human interactive puzzle tasks (Iida et al., 2010; <cite>Iida et al., 2011)</cite>, in web browsing (Hakkani-Tür et al., 2014), and in a moving car where speakers look at objects in their vicinity (Misu et al., 2014).
background
0e5c3df8309dbaf93d10c94fb292fc_2
The corpora presented in <cite>Iida et al. (2011)</cite> and Spanger et al. (2012) are a collection of human/human interaction data where the participants collaboratively solved Tangram puzzles.
background
0e5c3df8309dbaf93d10c94fb292fc_3
Further details of the corpus can be found in<cite> (Iida et al., 2011)</cite> .
uses
0e5c3df8309dbaf93d10c94fb292fc_4
<cite>Iida et al. (2011)</cite> applied a support vector machine-based ranking algorithm (Joachims, 2002) to the task of resolving REs in this corpus.
background
0e5c3df8309dbaf93d10c94fb292fc_5
In order to compare our results directly with those of <cite>Iida et al. (2011)</cite> , we provide our model with the same training and evaluation data, in a 10-fold cross-validation of the 1192 REs from 27 dialogues (the T2009-11 corpus in ).
uses
0e5c3df8309dbaf93d10c94fb292fc_6
We derive these properties from a representation of the scene, similar to how <cite>Iida et al. (2011)</cite> computed features to present to their classifier: namely Ling (linguistic features), TaskSp (task-specific features), and Gaze (from SV only).
similarities
0e5c3df8309dbaf93d10c94fb292fc_7
These properties differ somewhat from the features for the Ling model presented in <cite>Iida et al. (2011)</cite> .
differences
0e5c3df8309dbaf93d10c94fb292fc_8
TaskSp <cite>Iida et al. (2011)</cite> used 14 task-specific features, three of which they found to be the most informative in their model.
background
0e5c3df8309dbaf93d10c94fb292fc_9
TaskSp <cite>Iida et al. (2011)</cite> used 14 task-specific features, three of which they found to be the most informative in their model. Here, we will only use the two most informative features as properties (the third one, whether or not an object was being manipulated at the beginning of the RE, did not improve results in a held-out test): the object that was most recently moved received the most recent move property and objects that have the mouse cursor over them received the mouse pointed property (see Figure 2 ; object 4 would receive both of these properties, but only for the duration that the mouse was actually over it).
differences
0e5c3df8309dbaf93d10c94fb292fc_10
Gaze Similar to <cite>Iida et al. (2011)</cite> , we consider gaze during a window of 1500ms before the onset of the RE.
similarities
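Restricting gaze to a fixed window before the onset of the referring expression is straightforward to implement; a minimal sketch with hypothetical fixation records (start/end times in ms and a fixated object id):

```python
def gaze_in_window(fixations, re_onset_ms, window_ms=1500):
    """Return the objects fixated during the window before the RE onset.

    fixations: list of (start_ms, end_ms, object_id) records from the eye tracker.
    """
    window_start = re_onset_ms - window_ms
    return {obj for start, end, obj in fixations
            if end > window_start and start < re_onset_ms}

# toy example: only the second fixation falls in the 1500 ms window before onset
print(gaze_in_window([(0, 400, "obj1"), (2200, 2900, "obj4")], re_onset_ms=3000))
```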
0e5c3df8309dbaf93d10c94fb292fc_11
Our Gaze properties are made up of these 4 properties, as opposed to the 14 features in <cite>Iida et al. (2011)</cite> .
differences
0e5c3df8309dbaf93d10c94fb292fc_12
Going beyond <cite>Iida et al. (2011)</cite> , our model computes a resolution hypothesis incrementally; for the performance of this aspect of the system we followed previously used metrics for evaluation : first correct: how deep into the RE does the model predict the referent for the first time? first final: how deep into the RE does the model predict the correct referent and keep that decision until the end? edit overhead: how often did the model unnecessarily change its prediction (the only necessary prediction happens when it first makes a correct prediction)?
extends
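The three incremental metrics named above (first correct, first final, edit overhead) admit a compact implementation. A minimal sketch, assuming the model's per-word referent predictions are available as a list and the gold referent is known; the normalization of edit overhead below is one reasonable choice, not necessarily the exact one used in the cited evaluations:

```python
def incremental_metrics(predictions, gold):
    """Compute first-correct, first-final and edit-overhead for one RE.

    predictions: list of predicted referents, one per incrementally processed word.
    gold:        the correct referent.
    Returns (first_correct, first_final, edit_overhead), where the first two are
    1-based word positions (None if never reached).
    """
    n = len(predictions)
    first_correct = next((i + 1 for i, p in enumerate(predictions) if p == gold), None)

    # first final: earliest point from which the prediction stays correct until the end
    first_final = next((i + 1 for i in range(n)
                        if all(p == gold for p in predictions[i:])), None)

    # edit overhead: prediction changes beyond the single necessary one
    changes = sum(1 for i in range(1, n) if predictions[i] != predictions[i - 1])
    necessary = 1 if first_correct is not None and predictions[0] != gold else 0
    edit_overhead = (changes - necessary) / max(changes, 1)
    return first_correct, first_final, edit_overhead

print(incremental_metrics(["obj2", "obj4", "obj2", "obj4", "obj4"], "obj4"))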
0e5c3df8309dbaf93d10c94fb292fc_13
We compare non-incremental results to three evaluations performed in <cite>Iida et al. (2011)</cite> , namely when Ling is used alone, Ling+TaskSP used together, and Ling+TaskSp+Gaze.
uses
0e5c3df8309dbaf93d10c94fb292fc_14
The SIUM model performs better than the combined approach of <cite>Iida et al. (2011)</cite>, and performs better than their separated model when not including gaze (there is a significant difference between SIUM and the separated models for Ling+TaskSp, though SIUM only got one more correct than the separated model).
differences
0e5c3df8309dbaf93d10c94fb292fc_15
Second, and more importantly, separated models means less feature confusion: in <cite>Iida et al. (2011)</cite> (Section 5.2) , the authors give a comparison of the most informative features for each model; task and gaze features were prominent for the pronoun model, whereas gaze and language features were prominent for the non-pronoun model.
background
0e5c3df8309dbaf93d10c94fb292fc_17
In contrast, previous work in RR<cite> (Iida et al., 2011</cite>; Chai et al., 2014 ) used a hand-coded concept-labeled semantic representation and checked if aspects of the RE match that of a particular object.
background differences
0e5c3df8309dbaf93d10c94fb292fc_18
However, in the current work we observed that REs with pronouns were more difficult for the model to resolve than the model presented in <cite>Iida et al. (2011)</cite> .
differences
0e5c3df8309dbaf93d10c94fb292fc_19
We surmise that SIUM had a difficult time grounding certain properties, as the Japanese pronoun sore can be used anaphorically or demonstratively in this kind of context (i.e., sometimes sore refers to previously-manipulated objects, or objects that are newly identified with a mouse pointer over them); the model presented in <cite>Iida et al. (2011)</cite> made more use of contextual information when pronouns were used, particularly in the combined model which incorporated gaze information, as shown above.
differences
0f0e13e275c4bc4021b1b0d26f3e0c_0
For the task of fact extraction from billions of Web pages, the method of Open Information Extraction (OIE) <cite>(Fader et al., 2011)</cite> trains domain-independent extractors.
background
0f0e13e275c4bc4021b1b0d26f3e0c_1
Existing approaches for OIE, such as REVERB <cite>(Fader et al., 2011)</cite>, WOE (Wu and Weld, 2010) or WANDERLUST (Akbik and Bross, 2009), focus on the extraction of binary facts, i.e. facts that consist of only two arguments, as well as a fact phrase which denotes the nature of the relationship between the arguments.
background
0f0e13e275c4bc4021b1b0d26f3e0c_2
Worse, the analyses performed in <cite>(Fader et al., 2011)</cite> and (Akbik and Bross, 2009) show that incorrect handling of N-ary facts leads to extraction errors, such as incomplete, uninformative or erroneous facts.
background
0f0e13e275c4bc4021b1b0d26f3e0c_3
Worse, the analyses performed in <cite>(Fader et al., 2011)</cite> and (Akbik and Bross, 2009) show that incorrect handling of N-ary facts leads to extraction errors, such as incomplete, uninformative or erroneous facts. Our first example illustrates the case of a significant information loss: a) In the 2002 film Bubba Ho-tep, Elvis lives in a nursing home.
motivation
0f0e13e275c4bc4021b1b0d26f3e0c_4
We examine intra sentence fact correctness (true/false) and fact completeness for KRAKEN and REVERB on the corpus of <cite>(Fader et al., 2011)</cite> .
uses
0f0e13e275c4bc4021b1b0d26f3e0c_5
The OIE system REVERB <cite>(Fader et al., 2011)</cite>, by contrast, uses a fast shallow syntax parser for labeling sentences and applies syntactic and lexical constraints for identifying binary facts.
background
0f0e13e275c4bc4021b1b0d26f3e0c_6
Data set: We use the data set from <cite>(Fader et al., 2011)</cite> which consists of 500 sentences sampled from the Web using Yahoo's random link service.
uses
0f5c87e5434785a612c6578244543d_0
<cite>Faruqui and Dyer (2014)</cite> use canonical correlation analysis to project the embeddings in both languages to a shared vector space.
background
0f5c87e5434785a612c6578244543d_1
We start with a basic optimization objective (Mikolov et al., 2013b) and introduce several meaningful and intuitive constraints that are equivalent or closely related to previously proposed methods <cite>(Faruqui and Dyer, 2014</cite>; Xing et al., 2015) .
extends uses
0f5c87e5434785a612c6578244543d_2
We start with a basic optimization objective (Mikolov et al., 2013b) and introduce several meaningful and intuitive constraints that are equivalent or closely related to previously proposed methods <cite>(Faruqui and Dyer, 2014</cite>; Xing et al., 2015) . Our framework provides a more general view of bilingual word embedding mappings, showing the underlying connection between the existing methods, revealing some flaws in their theoretical justification and providing an alternative theoretical interpretation for them.
extends differences
0f5c87e5434785a612c6578244543d_3
We start with a basic optimization objective (Mikolov et al., 2013b) and introduce several meaningful and intuitive constraints that are equivalent or closely related to previously proposed methods <cite>(Faruqui and Dyer, 2014</cite>; Xing et al., 2015) . Our framework provides a more general view of bilingual word embedding mappings, showing the underlying connection between the existing methods, revealing some flaws in their theoretical justification and providing an alternative theoretical interpretation for them. Our experiments on an existing English-Italian word translation induction and an English word analogy task give strong empirical evidence in favor of our theoretical reasoning, while showing that one of our models clearly outperforms previous alternatives.
differences
0f5c87e5434785a612c6578244543d_4
where C_m denotes the centering matrix. This equivalence reveals that the method proposed by <cite>Faruqui and Dyer (2014)</cite> is closely related to our framework.
similarities
0f5c87e5434785a612c6578244543d_5
where C_m denotes the centering matrix. This equivalence reveals that the method proposed by <cite>Faruqui and Dyer (2014)</cite> is closely related to our framework. More concretely, <cite>Faruqui and Dyer (2014)</cite> use Canonical Correlation Analysis (CCA) to project the word embeddings in both languages to a shared vector space. Therefore, the only fundamental difference between both methods is that, while our model enforces monolingual invariance, <cite>Faruqui and Dyer (2014)</cite> do change the monolingual embeddings to meet this restriction.
similarities differences
0f5c87e5434785a612c6578244543d_6
As for the method by <cite>Faruqui and Dyer (2014)</cite> , we used their original implementation in Python and MAT-LAB 6 , which we extended to cover cases where the dictionary contains more than one entry for the same word.
extends uses
0f5c87e5434785a612c6578244543d_8
In any case, it is our proposed method with the orthogonality constraint and a global preprocessing with length normalization followed by dimension-wise mean centering that achieves the best accuracy in the word translation induction task. Moreover, it does not suffer from any considerable degradation in monolingual quality, with an anecdotal drop of only 0.07% in contrast with 2.86% for Mikolov et al. (2013b) and 7.02% for <cite>Faruqui and Dyer (2014)</cite>.
differences
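The best-performing configuration described above (length normalization, dimension-wise mean centering, then an orthogonality-constrained mapping) can be sketched in a few lines of numpy. This is an illustrative reconstruction under the standard orthogonal Procrustes solution, not the authors' released code:

```python
import numpy as np

def learn_orthogonal_mapping(X, Z):
    """Map source embeddings X onto target embeddings Z (rows are dictionary pairs).

    Preprocessing: length-normalize every vector, then mean-center each dimension.
    With the orthogonality constraint, the optimal map is the orthogonal Procrustes
    solution W = U V^T obtained from the SVD of X^T Z.
    """
    def preprocess(M):
        M = M / np.linalg.norm(M, axis=1, keepdims=True)   # length normalization
        return M - M.mean(axis=0, keepdims=True)           # dimension-wise mean centering

    X, Z = preprocess(X), preprocess(Z)
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt

# toy usage with random stand-in "embeddings"
rng = np.random.default_rng(0)
X, Z = rng.normal(size=(1000, 300)), rng.normal(size=(1000, 300))
W = learn_orthogonal_mapping(X, Z)   # preprocessed source vectors are then mapped as x @ W
```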
0f5c87e5434785a612c6578244543d_9
It should be noted that the implementation by <cite>Faruqui and Dyer (2014)</cite> also length-normalizes the word embeddings in a preprocessing step.
similarities
0f5c87e5434785a612c6578244543d_10
Following the discussion in Section 2.3, this means that our best performing configuration is conceptually very close to the method by <cite>Faruqui and Dyer (2014)</cite> , as they both coincide on maximizing the average dimension-wise covariance and length-normalize the embeddings in both languages first, the only difference being that our model enforces monolingual invariance after the normalization while theirs does change the monolingual embeddings to make different dimensions have the same variance and be uncorrelated among themselves.
similarities
0f5c87e5434785a612c6578244543d_11
However, our model performs considerably better than any configuration from <cite>Faruqui and Dyer (2014)</cite> in both the monolingual and the bilingual task, supporting our hypothesis that these two constraints that are implicit in their method are not only conceptually confusing, but also have a negative impact.
differences
0f5c87e5434785a612c6578244543d_12
Our experiments show the effectiveness of the proposed model and give strong empirical evidence in favor of our reinterpretation of Xing et al. (2015) and <cite>Faruqui and Dyer (2014)</cite> .
differences
0fed8b9e785426880fa8e5641116a4_0
AMBER is a machine translation evaluation metric first described in <cite>(Chen and Kuhn, 2011)</cite> .
background
0fed8b9e785426880fa8e5641116a4_1
Our original AMBER paper <cite>(Chen and Kuhn, 2011)</cite> describes the ten penalties used at that time; two of these penalties, the normalized Spearman's correlation penalty and the normalized Kendall's correlation penalty, model word reordering.
background
0fed8b9e785426880fa8e5641116a4_2
The AMBER score can be computed with different types of text preprocessing, i.e. different combinations of several text preprocessing techniques: lowercasing, tokenization, stemming, word splitting, etc.; 8 types were tried in <cite>(Chen and Kuhn, 2011)</cite>.
background
0fed8b9e785426880fa8e5641116a4_3
In <cite>(Chen and Kuhn, 2011)</cite> , we manually set the 17 free parameters of AMBER (see section 3.2 of that paper). In the experiments reported below, we tuned the 18 free parameters -the original 17 plus the ordering metric v described in the previous section -automatically, using the downhill simplex method of (Nelder and Mead, 1965) as described in (Press et al., 2002) .
differences
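Tuning the 18 free parameters automatically with the downhill simplex (Nelder-Mead) method can be sketched with scipy. The objective below is a runnable stand-in (a toy quadratic); in the real setting it would score the metric with the given parameters on a dev set and return, e.g., the negative correlation with human judgements:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective, clearly hypothetical: replace with a function that runs the
# metric with `params` and returns a score to minimize (e.g. negative correlation).
target = np.linspace(0.1, 0.9, 18)           # hypothetical "good" parameter values
def objective(params):
    return float(np.sum((params - target) ** 2))

x0 = np.full(18, 0.5)                         # 17 original parameters + ordering weight v
result = minimize(objective, x0, method="Nelder-Mead",
                  options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(result.x.round(3))
```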
0fed8b9e785426880fa8e5641116a4_4
We have made two changes to AMBER, a metric described in <cite>(Chen and Kuhn, 2011)</cite> .
extends
1056d36c5ed22c7a34f6fe82b4962f_0
While there is a substantial amount of work on statistical (Rozovskaya and Roth, 2016; Junczys-Dowmunt and Grundkiewicz, 2014; Yannakoudakis et al., 2017) and neural (Ji et al., 2017; Xie et al., 2016; Yuan and Briscoe, 2016; Chollampatt et al., 2016; Chollampatt and Ng, 2017; Chollampatt and Ng, 2018) machine translation methods for GEC, we follow the approach of <cite>Bryant and Briscoe (2018)</cite> and explore how such models would fare in this task when treated as simple language models.
uses
1056d36c5ed22c7a34f6fe82b4962f_1
More specifically, <cite>Bryant and Briscoe (2018)</cite> train a 5-gram language model on the One Billion Word Benchmark (Chelba et al., 2013) dataset and find that it produces competitive baseline results without any supervised training. In our work, we extend <cite>this work</cite> by substituting the n-gram model for several publicly available implementations of state-of-the-art Transformer language models trained on large linguistic corpora and assess their performance on GEC without any supervised training.
extends motivation
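Scoring a sentence with an off-the-shelf Transformer language model can be sketched as below; GPT-2 via the transformers library is used here as one plausible, publicly available choice, not necessarily one of the exact models evaluated:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of the sentence under the language model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean token-level cross-entropy;
        # multiplying by the number of predicted tokens recovers the total (negative) log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

print(sentence_log_prob("He goes to school every day."))
print(sentence_log_prob("He go to school every day."))
```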
1056d36c5ed22c7a34f6fe82b4962f_2
However, <cite>Bryant and Briscoe (2018)</cite> recently revived the idea, achieving competitive performance with the state-of-the-art, demonstrating the effectiveness of the approaches to the task without using any annotated data for training.
background
1056d36c5ed22c7a34f6fe82b4962f_3
In this work, we follow the setup from <cite>Bryant and Briscoe (2018)</cite> substituting the 5-gram language model for different language models based on the Transformer architecture.
extends
1056d36c5ed22c7a34f6fe82b4962f_4
Since our systems do not generate novel sequences, we follow <cite>Bryant and Briscoe (2018)</cite> and use simple heuristics to generate a confusion set of sentences that our language models score.
uses
1056d36c5ed22c7a34f6fe82b4962f_5
Finally, for spelling mistakes, we, again, follow <cite>Bryant and Briscoe (2018)</cite> and use CyHunSpell 3 to generate alternatives for non-words.
uses
1056d36c5ed22c7a34f6fe82b4962f_6
Concretely, let P(s_c) be the probability of the candidate sentence and P(s_o) the probability of the original sentence. Table 2: Results of our Transformer-Language Model approach against similar approaches <cite>(Bryant and Briscoe, 2018)</cite> and the state of the art on Grammatical Error Correction.
uses
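The decision rule this implies can be written as a small comparison; a threshold τ on the log-probability difference is one plausible form of the criterion, sketched here rather than the exact rule from the cited paper:

```python
def prefer_candidate(log_p_candidate: float, log_p_original: float, tau: float) -> bool:
    """Accept the correction only if the candidate sentence is more probable than
    the original by at least the margin tau (tuned on a development set)."""
    return log_p_candidate - log_p_original > tau

# e.g. with log-probabilities from the scoring function sketched earlier:
# prefer_candidate(sentence_log_prob(corrected), sentence_log_prob(original), tau=2.0)
```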
1056d36c5ed22c7a34f6fe82b4962f_8
Note that in our method, we do not make use of the training sets commonly used with these datasets. However, we use the development sets used by <cite>Bryant and Briscoe (2018)</cite> to tune the hyperparameter τ .
uses differences
1056d36c5ed22c7a34f6fe82b4962f_9
Similar to <cite>Bryant and Briscoe (2018)</cite> , we report results on three metrics.
similarities
1056d36c5ed22c7a34f6fe82b4962f_10
Table 2 presents the results of our method comparing them against recent state-of-the-art supervised models and the simple n-gram language model used by <cite>Bryant and Briscoe (2018)</cite> .
uses
1056d36c5ed22c7a34f6fe82b4962f_11
Our key motivation was to corroborate and extend the results of <cite>Bryant and Briscoe (2018)</cite> to current state-of-the-art language models which have been trained in several languages and show that these models are tough baselines to beat for novel GEC systems.
extends motivation
10de18ba49c0da530b15ff2d14f343_0
Aspect and/or opinion term extraction research has been conducted by Wang et al. [2] and Xu et al. <cite>[3]</cite>, whose systems outperformed the best systems in the aspect-based sentiment analysis task at the International Workshop on Semantic Evaluation (SemEval) for aspect and opinion term extraction.
background
10de18ba49c0da530b15ff2d14f343_1
Xu et al. <cite>[3]</cite> proposed a Convolutional Neural Network (CNN) model employing two types of pre-trained word embeddings, general-purpose embeddings and domain-specific embeddings, for aspect term extraction.
background
10de18ba49c0da530b15ff2d14f343_2
Wang et al. [2] and Xu et al. <cite>[3]</cite> approaches have not been applied for Indonesian reviews.
background
10de18ba49c0da530b15ff2d14f343_3
This paper aims to perform aspect and opinion terms extraction in Indonesian hotel reviews by adapting CMLA architecture [2] and double embeddings mechanism <cite>[3]</cite> .
similarities uses
10de18ba49c0da530b15ff2d14f343_4
Xu et al. <cite>[3]</cite> use double embeddings that leverage both general embeddings and domain embeddings as a feature for a CNN model and let the CNN model decide which embeddings have more useful information.
background
10de18ba49c0da530b15ff2d14f343_5
The experiment conducted in <cite>[3]</cite> demonstrated that double embedding mechanism achieved better performance for aspect terms extraction compared to the use of general embeddings or domain embeddings alone.
background
10de18ba49c0da530b15ff2d14f343_6
As stated previously, the goal of this work is to extract aspect and opinion terms in Indonesian hotel reviews by adapting CMLA architecture [2] and double embeddings mechanism <cite>[3]</cite> .
uses similarities
10de18ba49c0da530b15ff2d14f343_7
We use various types of word embeddings adapted from <cite>[3]</cite> .
uses
10de18ba49c0da530b15ff2d14f343_8
For the general embeddings and domain embeddings, we use the same dimension and number of iterations as in <cite>[3]</cite> .
similarities uses
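The double-embedding idea, concatenating a general-purpose embedding and a domain-specific embedding for every token before feeding a CNN, can be sketched as follows; the dimensions, layer sizes and tagging head here are illustrative placeholders, not those of the cited work:

```python
import torch
import torch.nn as nn

class DoubleEmbeddingCNN(nn.Module):
    def __init__(self, vocab_size, gen_dim=300, dom_dim=100, n_labels=3):
        super().__init__()
        # two separate lookup tables, typically initialized from pre-trained
        # general-purpose and in-domain word vectors
        self.general = nn.Embedding(vocab_size, gen_dim)
        self.domain = nn.Embedding(vocab_size, dom_dim)
        self.conv = nn.Conv1d(gen_dim + dom_dim, 128, kernel_size=3, padding=1)
        self.out = nn.Linear(128, n_labels)            # e.g. BIO tags per token

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = torch.cat([self.general(token_ids), self.domain(token_ids)], dim=-1)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        return self.out(h)                             # (batch, seq_len, n_labels)

model = DoubleEmbeddingCNN(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 15)))       # toy batch of 2 sentences
```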
10f17930192132077f0d4526e7d755_0
In this work, we experiment with the <cite>Self-reported Mental Health Diagnoses</cite> (<cite>SMHD</cite>) dataset (<cite>Cohan et al., 2018</cite>) , consisting of thousands of Reddit users diagnosed with one or more mental illnesses.
uses
10f17930192132077f0d4526e7d755_1
The <cite>SMHD</cite> dataset (<cite>Cohan et al., 2018</cite>) is a large-scale dataset of Reddit posts from users with one or multiple mental health conditions.
background
10f17930192132077f0d4526e7d755_2
For each disorder, <cite>Cohan et al. (2018)</cite> analyze the differences in language use between diagnosed users and their respective control groups.
background
10f17930192132077f0d4526e7d755_3
For each disorder, <cite>Cohan et al. (2018)</cite> analyze the differences in language use between diagnosed users and their respective control groups. <cite>They</cite> also provide benchmark results for the binary classification task of predicting whether the user belongs to the diagnosed or the control group.
background
10f17930192132077f0d4526e7d755_4
For each disorder, <cite>Cohan et al. (2018)</cite> analyze the differences in language use between diagnosed users and their respective control groups. <cite>They</cite> also provide benchmark results for the binary classification task of predicting whether the user belongs to the diagnosed or the control group. We reproduce <cite>their</cite> baseline models for each disorder and compare to our deep learning-based model, explained in Section 2.3.
uses
10f17930192132077f0d4526e7d755_5
<cite>Cohan et al. (2018)</cite> select nine or more control users for each diagnosed user and run their experiments with these mappings.
background
10f17930192132077f0d4526e7d755_6
<cite>Cohan et al. (2018)</cite> select nine or more control users for each diagnosed user and run their experiments with these mappings. With this exact mapping not being available, for each of the nine conditions, we had to select the control group ourselves.
extends differences
10f17930192132077f0d4526e7d755_7
For each diagnosed user, we draw exactly nine control users from the pool of 335,952 control users present in <cite>SMHD</cite> and proceed to train and test our binary classifiers on the newly created sub-datasets.
extends uses
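Re-creating such a diagnosed-to-control mapping comes down to a simple sampling routine; this is an illustrative sketch (the seed handling and any exclusion rules are assumptions, not taken from the original paper):

```python
import random

def sample_control_mapping(diagnosed_users, control_pool, k=9, seed=0):
    """Draw exactly k control users (without replacement per diagnosed user) from
    the shared control pool. Re-running with different seeds gives the multiple
    selections used for a fairer comparison."""
    rng = random.Random(seed)
    return {user: rng.sample(control_pool, k) for user in diagnosed_users}

mapping = sample_control_mapping(["diag_1", "diag_2"],
                                 [f"ctrl_{i}" for i in range(100)])
```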
10f17930192132077f0d4526e7d755_8
In order to create a statistically-fair comparison, we run the selection process multiple times, as well as reimplement the benchmark models used in <cite>Cohan et al. (2018)</cite> .
uses
10f17930192132077f0d4526e7d755_9
We implement the baselines as in <cite>Cohan et al. (2018)</cite> .
uses
10f17930192132077f0d4526e7d755_10
In contrast to <cite>Cohan et al. (2018)</cite> , supervised FastText yields worse results than tuned linear models.
differences
10f17930192132077f0d4526e7d755_11
We examine attention weights on a word level and compare the most attended words to prior research on depression. Depression is selected as the most prevalent disorder in the <cite>SMHD</cite> dataset with a number of studies in the field (Rude et al., 2004; Chung and Pennebaker, 2007; De Choudhury et al., 2013b; Park et al., 2012) .
uses
10f17930192132077f0d4526e7d755_12
The importance of personal pronouns in distinguishing depressed authors from the control group is supported by multiple studies (Rude et al., 2004; Chung and Pennebaker, 2007; De Choudhury et al., 2013b; <cite>Cohan et al., 2018</cite>) .
background
10f17930192132077f0d4526e7d755_13
In the categories Affective processes, Social processes, and Biological processes, <cite>Cohan et al. (2018)</cite> report significant differences between depressed and control group, similar to some other disorders.
background
10f17930192132077f0d4526e7d755_14
While most studies use Twitter data (Coppersmith et al., 2015a, 2014; Benton et al., 2017; Coppersmith et al., 2015b), a recent stream turns to Reddit as a richer source of high-volume data (De Choudhury and De, 2014; Shen and Rudzicz, 2017; Gjurković and Šnajder, 2018; <cite>Cohan et al., 2018</cite>; Sekulić et al., 2018; Zirikly et al., 2019).
background
119d473a0a5a4c42de193e51564f1f_0
The example is taken from the Simple English Wikipedia corpus <cite>(Coster and Kauchak, 2011)</cite>. Connectives do not belong to any linguistic class and, except for a few discourse connectives such as oh and well, most carry meaning.
background
119d473a0a5a4c42de193e51564f1f_1
The first data set was created from the Simple English Wikipedia corpus <cite>(Coster and Kauchak, 2011)</cite> ; the other was created from the Newsela corpus (Xu et al., 2015) .
extends
119d473a0a5a4c42de193e51564f1f_2
The Simple English Wikipedia (SEW) corpus <cite>(Coster and Kauchak, 2011)</cite> contains two sections: 1) article-aligned and 2) sentence-aligned. Here, we used the sentence-aligned section, which contains 167,686 pairs of aligned sentences.
extends uses
119d473a0a5a4c42de193e51564f1f_3
We used this article-aligned corpus to align it at the sentence-level using an approach similar to <cite>(Coster and Kauchak, 2011)</cite> .
similarities
123d8e8ddef15fed120908c5c20656_0
While most of the work in this direction has been devoted to learning the acoustic model directly from sequences of phonemes or characters without an intermediate alignment step or phone-state/senone induction, the other end of the pipeline model - namely, learning directly from the waveform rather than from speech features such as mel-filterbanks or MFCC - has recently received attention [1, 2, 3, 4, 5, 6, 7,<cite> 8]</cite>, but the performance on the master task of speech recognition still seems to lag behind that of models trained on speech features [9, 10].
background
123d8e8ddef15fed120908c5c20656_1
More recently, Zeghidour et al. <cite>[8]</cite> proposed an alternative learnable architecture based on a convolutional architecture that computes a scattering transform and can be initialized as an approximation of mel-filterbanks, and obtained promising results on endto-end phone recognition on TIMIT.
background
123d8e8ddef15fed120908c5c20656_2
3. For scattering-based trainable filterbanks, keeping the lowpass filter fixed during training makes it possible to learn the filters efficiently from a random initialization, whereas the results of <cite>[8]</cite> with random initialization of both the filters and the lowpass filter showed poor performance compared to a suitable initialization;
differences
123d8e8ddef15fed120908c5c20656_3
The first architecture we consider is inspired by [3, 4] , the second one is taken from <cite>[8]</cite> .
uses
123d8e8ddef15fed120908c5c20656_4
In their work, they use a max-pooling operator for low-pass filtering. In contrast, Zeghidour et al. <cite>[8]</cite> use 40 complex-valued filters with a square modulus operator as non-linearity.
background
123d8e8ddef15fed120908c5c20656_5
For both architectures, we also propose to keep this low-pass filter fixed while learning the convolution filter weights, a setting that was not explored by Zeghidour et al. <cite>[8]</cite> , who learnt the lowpass filter weights when randomly initializing the convolutions.
differences
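Keeping the low-pass filter fixed while learning the analysis filters can be sketched in PyTorch as below: a bank of learnable convolution filters, a squared-modulus non-linearity over real/imaginary channel pairs, and a Hanning low-pass filter registered as a non-trainable buffer. This is an illustrative reconstruction, not the authors' implementation; filter lengths, stride, and the log compression are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableFilterbank(nn.Module):
    def __init__(self, n_filters=40, filter_len=400, lowpass_len=400, stride=160):
        super().__init__()
        # 40 complex filters stored as 80 real-valued channels (real/imag pairs); learnable
        self.filters = nn.Conv1d(1, 2 * n_filters, filter_len,
                                 padding=filter_len // 2, bias=False)
        # fixed Hanning low-pass filter applied per channel; buffers are never updated by the optimizer
        lowpass = torch.hann_window(lowpass_len).view(1, 1, -1).repeat(n_filters, 1, 1)
        self.register_buffer("lowpass", lowpass / lowpass.sum())
        self.stride = stride
        self.n_filters = n_filters

    def forward(self, wav):                        # wav: (batch, 1, samples)
        x = self.filters(wav)                      # (batch, 2 * n_filters, frames)
        b, _, t = x.shape
        x = x.view(b, self.n_filters, 2, t)
        x = (x ** 2).sum(dim=2)                    # squared modulus of each complex filter
        x = F.conv1d(x, self.lowpass, stride=self.stride, groups=self.n_filters)
        return torch.log1p(x)                      # log compression with a +1 offset

fb = TrainableFilterbank()
feats = fb(torch.randn(2, 1, 16000))               # roughly 1 s of 16 kHz audio
```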
123d8e8ddef15fed120908c5c20656_6
<cite>[8]</cite> use 1 to prevent log(0) and [3, 4] use 0.01. We kept the values initially used by the authors of the respective papers and did not try alternatives.
uses
123d8e8ddef15fed120908c5c20656_8
As described in Section 2.2, we evaluate the integration of instance normalization after the log-compression in the trainable filterbanks, which was not used in previous work [3, 4, 7,<cite> 8]</cite> but is used in our baseline.
differences
123d8e8ddef15fed120908c5c20656_9
More importantly, using either an Han-fixed or Han-learnt filter when learning scatteringbased filterbanks from a random initialization removes the gap in performance with the Gabor wavelet initialization that was observed in <cite>[8]</cite> where the lowpass filter was also initialized randomly.
differences
12ab280d48ef6bfae0ff27a400e2ab_0
This session focused on experimental or planned approaches to human language technology evaluation and included an overview and five papers: two papers on experimental evaluation approaches [1, 2], and three about ongoing work on new annotation and evaluation approaches for human language technology [3, <cite>4,</cite> 5].
background
12ab280d48ef6bfae0ff27a400e2ab_1
The last three papers ([3, <cite>4,</cite> 5]) take various approaches to the issue of predicate-argument structure. (Footnote 1: The Penn Treebank parse annotations provide an interesting case where annotation supported evaluation.)
background