id | x | y
ef742defff1c2bdf145f72796cf3af_16
We use the software provided by <cite>Jansen et al. (2014)</cite> to extract the discourse features described in Section 4 and referred to as x_ext in Section 3.
uses
ef742defff1c2bdf145f72796cf3af_17
Following <cite>Jansen et al. (2014)</cite>, we train them using the skip-gram model (Mikolov et al., 2013). We use the L6 Yahoo dataset to train the skip-gram model for the YA dataset and the Ask Ubuntu September 2015 data dump for the AU dataset.
uses
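The row above mentions training skip-gram embeddings; below is a minimal sketch of how that step might look with gensim. The toy corpus and all hyper-parameters are placeholders, not the settings of the cited snippet or Jansen et al. (2014).

```python
from gensim.models import Word2Vec

# Toy corpus standing in for tokenized Yahoo! Answers / Ask Ubuntu posts.
sentences = [
    ["how", "do", "i", "mount", "a", "drive"],
    ["use", "the", "disks", "utility", "to", "mount", "it"],
]

# sg=1 selects the skip-gram model (Mikolov et al., 2013).
model = Word2Vec(sentences, sg=1, vector_size=100, window=5, min_count=1, workers=4)

vec = model.wv["mount"]  # learned embedding for a token
```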
ef742defff1c2bdf145f72796cf3af_18
They also perform better than the approach of <cite>Jansen et al. (2014)</cite>, who used SVMrank with a linear kernel.
differences
ef742defff1c2bdf145f72796cf3af_19
On the YA dataset, the results are better than those of <cite>Jansen et al. (2014)</cite> and very similar to those of Bogdanova and Foster (2016).
differences
ef742defff1c2bdf145f72796cf3af_20
As external features, we evaluate the discourse features that were found useful for this task by <cite>Jansen et al. (2014)</cite>.
uses
f2925513a7cce2e80ade1f948164d0_0
In the domain of text, many modern approaches begin by embedding the input text into an embedding space that is used as the first layer of a subsequent deep network [4], [14]. These word embeddings have been shown to contain the same biases <cite>[3]</cite>, due to the source data from which they are trained. In effect, biases in the source data, such as differences in the representation of men and women that have been found in many large-scale studies [5], [10], [12], carry through to the semantic relations in the word embeddings and become baked into the learning systems built on top of them.
motivation background
f2925513a7cce2e80ade1f948164d0_1
First, we propose a new version of the Word Embedding Association Tests (WEATs) studied in <cite>[3]</cite>, designed to demonstrate and quantify bias in word embeddings, which puts them on a firm foundation by using the Linguistic Inquiry and Word Count (LIWC) lexica [17] to systematically detect and measure embedding biases. With this improved experimental setting, we find that European-American names are viewed more positively than African-American names, that male names are more associated with work while female names are more associated with family, and that the academic disciplines of science and maths are more associated with male terms than the arts, which are more associated with female terms. Using this new methodology, we then find that there is a gender bias in the way different occupations are represented by the embedding.
motivation background differences
f2925513a7cce2e80ade1f948164d0_2
We first propose a new version of the Word Embedding Association Tests studied in <cite>[3]</cite> by using the LIWC lexica to systematically detect and measure the biases within the embedding, keeping the tests comparable with the same set of target words. We further extend this work using additional sets of target words, and compare sentiment across male and female names. Furthermore, we investigate gender bias in words that represent different occupations, comparing these associations with UK national employment statistics.
extends background
f2925513a7cce2e80ade1f948164d0_3
We begin by using the target words from <cite>[3]</cite>, which were originally used in [8], allowing us to directly compare our findings with the original WEAT. Our approach differs from that of <cite>[3]</cite> in that, while we use the same set of target words in each test, we use an expanded set of attribute words, allowing us to perform a more rigorous, systematic study of the associations found within the word embeddings. For this, we use attribute words sourced from the LIWC lexica [17].
extends
f2925513a7cce2e80ade1f948164d0_4
We matched each of the original word categories used in <cite>[3]</cite> with its closest equivalent among the LIWC categories, for example matching the word lists for 'career' and 'family' with the 'work' and 'family' LIWC categories. We tested the association between each target word and the set of attribute words using the method described in Sec. II-B, focussing on the associations between sentiment terms and European- and African-American names, between subject disciplines and each gender, and between career and family terms and gendered names, as well as on the association between gender and sentiment.
extends
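The rows above describe testing the association between each target word and a set of attribute words. Below is a minimal sketch of such a WEAT-style differential association score; the word vectors are assumed to be pre-loaded numpy arrays, and this is an illustration rather than the exact statistic of [3].

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target_vec, attrs_a, attrs_b):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity
    # to attribute set B (e.g., 'work' vs. 'family' LIWC word vectors).
    return (np.mean([cos(target_vec, a) for a in attrs_a])
            - np.mean([cos(target_vec, b) for b in attrs_b]))
```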
f2925513a7cce2e80ade1f948164d0_5
Taking the list of target European-American and African-American names used in <cite>[3]</cite>, we tested each of them for its association with the positive and negative emotion concepts found in [17], using the methodology described by Eq. 3 in Sec. II-B and replacing the short lists of words originally used to represent the pleasant and unpleasant attribute sets.
uses
f2925513a7cce2e80ade1f948164d0_6
Our test found that while both European-American and African-American names are more associated with positive emotions than negative emotions, European-American names are more strongly associated with positive emotions than their African-American counterparts, as shown in Fig. 1a. This finding supports the association test in <cite>[3]</cite>, which also found that European-American names were more associated with pleasantness than African-American names.
similarities
f2925513a7cce2e80ade1f948164d0_8
3) Association of Gender with Career and Family: Taking the list of target gendered names used in <cite>[3]</cite>, we tested each of them for its association with the career and family concepts, using the 'work' and 'family' categories found in LIWC [17].
uses
f2925513a7cce2e80ade1f948164d0_9
As shown in Fig. 1c, we found that the set of male names was more associated with the concept of work, while the female names were more associated with family, mirroring the results found in <cite>[3]</cite>. Extending this test, we generated a much larger set of male and female target names from an online list of baby names. Repeating the same test on this larger set of names, we found that male and female names were much less separated than suggested by previous results, with only minor differences between the two, as shown in Fig. 1d.
extends
f2925513a7cce2e80ade1f948164d0_10
We found a strong, significant correlation (ρ = 0.57, p-value < 10⁻⁶) between the association of gender with occupation in the word embedding and the number of people of each gender in the United Kingdom working in those roles. This supports a similar finding for U.S. employment statistics, based on an independent set of occupations, in <cite>[3]</cite>.
background similarities
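The correlation reported above could be computed as sketched below; the snippet reports ρ but not the exact test, so Spearman's rank correlation is an assumption, and the input lists are placeholders.

```python
from scipy.stats import spearmanr

# Placeholder inputs: one gender-association score per occupation from the
# embedding, and the corresponding gender ratio from employment statistics.
embedding_assoc = [0.12, -0.30, 0.05, 0.22]
employment_ratio = [0.20, -0.25, 0.10, 0.18]

rho, p_value = spearmanr(embedding_assoc, employment_ratio)
```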
f2925513a7cce2e80ade1f948164d0_12
In this paper, we have introduced the LIWC-WEAT, a set of objective tests extending the association tests in <cite>[3]</cite> by using the LIWC lexica to measure bias within word embeddings.
motivation background
f2925513a7cce2e80ade1f948164d0_13
We found bias in both the associations of gender and race, as first described in <cite>[3]</cite>, while additionally finding that male names have a slightly more positive association than female names. Biases found in the embedding were also shown to reflect biases in the real world and the media: we found a correlation between the number of men and women in an occupation and its association with each set of male and female names.
differences background
f2db88c0d4e0ec4c34fc295a5d59ba_1
These constraints can be lexicalized (Collins, 1999; Charniak, 2000), unlexicalized (Johnson, 1998; Klein and Manning, 2003b), or automatically learned (Matsuzaki et al., 2005; <cite>Petrov et al., 2006</cite>).
background
f2db88c0d4e0ec4c34fc295a5d59ba_4
Computing the joint likelihood of the observed parse trees T and sentences w requires summing over all derivations t over split subcategories. Matsuzaki et al. (2005) derive an EM algorithm for maximizing the joint likelihood, and <cite>Petrov et al. (2006)</cite> extend this algorithm to use a split&merge procedure to adaptively determine the optimal number of subcategories for each observed category.
background
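The equation that followed the colon in the snippet above was lost in extraction; a plausible reconstruction of the joint-likelihood objective, based on the surrounding description, is:

```latex
% Joint likelihood of observed trees T_i and sentences w_i, summing over
% all derivations t over split subcategories that yield T_i (reconstruction).
\mathcal{L}(\theta) = \prod_{i} P(T_i, w_i)
                    = \prod_{i} \sum_{t \,:\, t \text{ yields } T_i} P(t, w_i)
```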
f2db88c0d4e0ec4c34fc295a5d59ba_5
While the split&merge procedure described above is shown in <cite>Petrov et al. (2006)</cite> to reduce the variance in final performance, we found after closer examination that there are substantial differences in the patterns learned by the grammars.
differences
f2db88c0d4e0ec4c34fc295a5d59ba_6
In previous work <cite>(Petrov et al., 2006</cite>; Petrov and Klein, 2007), the final grammar was chosen based on its performance on a held-out set (section 22), and corresponds to the second-best grammar in Figure 3 (because only 8 different grammars were trained).
background
f2db88c0d4e0ec4c34fc295a5d59ba_7
Using weights learned on a held-out set and rescoring 50-best lists from Charniak (2000) and <cite>Petrov et al. (2006)</cite>, they obtain an F1 score of 91.0 (which they further improve to 91.4 using a voting scheme).
background
f2db88c0d4e0ec4c34fc295a5d59ba_8
The parameters of each latent variable grammar are typically smoothed in a linear fashion to prevent excessive overfitting <cite>(Petrov et al., 2006)</cite>.
background
f2db88c0d4e0ec4c34fc295a5d59ba_9
It is also interesting to note that the best results in Zhang et al. (2009) are achieved by combining k-best lists from a latent variable grammar of <cite>Petrov et al. (2006)</cite> with the self-trained reranking parser of McClosky et al. (2006).
background
f2ff155003d139b3677f746baf3807_0
In this paper, we followed the line of work on predicting ICD codes from the unstructured text of the MIMIC dataset (Johnson et al. 2016), because it is widely studied and publicly available. The state-of-the-art model for this line of work is the combination of the convolutional neural network (CNN) and the attention mechanism <cite>(Mullenbach et al. 2018)</cite>. However, this model contains only one convolutional layer to build document representations for subsequent layers to predict ICD codes.
motivation background
f2ff155003d139b3677f746baf3807_1
Our MultiResCNN model is composed of five layers: the input layer leverages word embeddings pre-trained by word2vec (Mikolov et al. 2013); the multi-filter convolutional layer consists of multiple convolutional filters (Kim 2014); the residual convolutional layer contains multiple residual blocks (He et al. 2016); the attention layer keeps the model interpretable, following <cite>(Mullenbach et al. 2018)</cite>; and the output layer utilizes the sigmoid function to predict the probability of each ICD code.
similarities background
f2ff155003d139b3677f746baf3807_2
To evaluate our model, we employed the MIMIC dataset (Johnson et al. 2016), which has been widely used for automated ICD coding. Compared with five existing state-of-the-art models (Perotte et al. 2013; Prakash et al. 2017; Shi et al. 2017; Baumel et al. 2018; <cite>Mullenbach et al. 2018</cite>), our model outperformed them in nearly all the evaluation metrics (i.e., macro- and micro-AUC, macro- and micro-F1, precision at K).
background
f2ff155003d139b3677f746baf3807_3
<cite>Mullenbach et al. (2018)</cite> incorporated a convolutional neural network (CNN) with a per-label attention mechanism. Their model achieved the state-of-the-art performance among the work using only the unstructured text of the MIMIC dataset.
background
f2ff155003d139b3677f746baf3807_4
Following <cite>Mullenbach et al. (2018)</cite>, we employed the per-label attention mechanism to make each ICD code attend to different parts of the document representation H. The attention layer is formalized as follows, where U ∈ R^((m×d_p)×l) represents the parameter matrix of the attention layer, A ∈ R^(n×l) represents the attention weights for each pair of an ICD code and a word, and V ∈ R^(l×(m×d_p)) represents the output of the attention layer.
uses
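The attention equations themselves were lost in extraction; the sketch below is inferred from the stated shapes (U ∈ R^((m×d_p)×l), A ∈ R^(n×l), V ∈ R^(l×(m×d_p)), with H assumed to be n × (m×d_p)), not taken from the authors' code.

```python
import numpy as np

def per_label_attention(H, U):
    # H: (n, m*d_p) document representation (shape inferred from U, A, V).
    # U: (m*d_p, l) attention parameter matrix, one column per ICD code.
    scores = H @ U                                   # (n, l): score per word per code
    scores -= scores.max(axis=0)                     # stabilize the softmax
    A = np.exp(scores) / np.exp(scores).sum(axis=0)  # softmax over words per code
    V = A.T @ H                                      # (l, m*d_p): per-code document vectors
    return A, V
```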
f2ff155003d139b3677f746baf3807_5
For training, we treated the ICD coding task as a multi-label classification problem, following previous work (McCallum 1999; <cite>Mullenbach et al. 2018</cite>). The training objective is to minimize the binary cross-entropy loss between the prediction ỹ and the target y, where w denotes the input word sequence and θ denotes all the parameters.
uses
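The loss equation was likewise dropped in extraction; a standard reconstruction of the binary cross-entropy over the l ICD codes, matching the notation above, is:

```latex
% Binary cross-entropy between prediction \tilde{y} and target y (reconstruction).
\mathcal{L}(w, \theta) = -\sum_{i=1}^{l}
    \left[ y_i \log \tilde{y}_i + (1 - y_i) \log\left(1 - \tilde{y}_i\right) \right]
```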
f2ff155003d139b3677f746baf3807_6
Following <cite>Mullenbach et al. (2018)</cite>, we used discharge summaries, split them by patient IDs, and conducted experiments using the full codes as well as the top-50 most frequent codes.
uses
f2ff155003d139b3677f746baf3807_8
Preprocessing. Following previous work <cite>(Mullenbach et al. 2018)</cite>, the text was tokenized and each token was transformed into its lowercase form. Tokens that contain no alphabetic characters, such as numbers and punctuation, were removed. The maximum length of a token sequence is 2,500, and any sequence that exceeds this length is truncated.
uses
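A minimal sketch of the preprocessing described above; whitespace splitting stands in for whatever tokenizer the snippet assumes.

```python
MAX_LEN = 2500  # maximum token sequence length, from the snippet above

def preprocess(text: str) -> list:
    # Lowercase, drop tokens with no alphabetic characters (numbers,
    # punctuation), and truncate to MAX_LEN tokens.
    tokens = [t.lower() for t in text.split()]
    tokens = [t for t in tokens if any(c.isalpha() for c in t)]
    return tokens[:MAX_LEN]
```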
f2ff155003d139b3677f746baf3807_9
Since our model has a number of hyper-parameters, it is infeasible to search for optimal values of all of them. Therefore, some hyper-parameter values were chosen empirically or following prior work <cite>(Mullenbach et al. 2018)</cite>.
uses background
f2ff155003d139b3677f746baf3807_10
• CNN, which has only one convolutional filter and is equivalent to the CAML model <cite>(Mullenbach et al. 2018)</cite>.
uses
f2ff155003d139b3677f746baf3807_11
CAML & DR-CAML: The Convolutional Attention network for Multi-Label classification (CAML) was proposed by <cite>Mullenbach et al. (2018)</cite>. It has achieved the state-of-the-art results on the MIMIC-III and MIMIC-II datasets among the models using unstructured text. It consists of one convolutional layer and one attention layer to generate label-aware features for multi-label classification (McCallum 1999). The Description Regularized CAML (DR-CAML) is an extension of CAML that incorporates the text description of each code to regularize the model.
background
f2ff155003d139b3677f746baf3807_13
For CAML, we used the optimal hyper-parameter setting reported in their paper <cite>(Mullenbach et al. 2018)</cite>.
uses
f3012301e42a4075ed6d4d2b39b528_0
Past work in sarcasm detection involves rule-based and statistical approaches using: (a) unigrams and pragmatic features such as emoticons (Gonzalez-Ibanez et al., 2011; Carvalho et al., 2009; Barbieri et al., 2014), (b) extraction of common patterns, such as hashtag-based sentiment (Maynard and Greenwood, 2014; Liebrecht et al., 2013), a positive verb being followed by a negative situation <cite>(Riloff et al., 2013)</cite>, or discriminative n-grams (Tsur et al., 2010a; Davidov et al., 2010).
background
f3012301e42a4075ed6d4d2b39b528_1
• Our sarcasm detection system outperforms two state-of-the-art sarcasm detection systems <cite>(Riloff et al., 2013</cite>; Maynard and Greenwood, 2014).
differences
f3012301e42a4075ed6d4d2b39b528_2
Our feature engineering is based on <cite>Riloff et al. (2013)</cite> and Ramteke et al. (2013).
similarities uses
f3012301e42a4075ed6d4d2b39b528_3
For this, we modify the algorithm given in <cite>Riloff et al. (2013)</cite> in two ways: (a) they extract only positive verbs and negative noun situation phrases.
extends differences
f3012301e42a4075ed6d4d2b39b528_4
2. Tweet-B (2,278 tweets, 506 sarcastic): This dataset was manually labeled for <cite>Riloff et al. (2013)</cite>. To extract the implicit incongruity features, we run the iterative algorithm described in Section 4.2 on a dataset of 4,000 tweets (50% sarcastic), also created using hashtag-based supervision.
uses similarities
f3012301e42a4075ed6d4d2b39b528_5
Table 2 (comparative results for Tweet-A using the rule-based algorithms and statistical classifiers with our feature combinations) shows the performance of our classifiers in terms of Precision (P), Recall (R), and F-score, alongside <cite>Riloff et al. (2013)</cite>'s two rule-based algorithms: the ordered version predicts a tweet as sarcastic if it has a positive verb phrase followed by a negative situation/noun phrase, while the unordered version does so if the two are present in any order. We see that all statistical classifiers surpass the rule-based algorithms.
differences
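A toy sketch of the "ordered" rule described above: predict sarcastic when a positive verb phrase precedes a negative situation phrase. The phrase lists are invented placeholders; Riloff et al. (2013) learn theirs with a bootstrapping algorithm.

```python
# Placeholder phrase lists, not the learned lexicons of Riloff et al. (2013).
POSITIVE_VERBS = ["love", "enjoy", "adore"]
NEGATIVE_SITUATIONS = ["being ignored", "waiting forever", "getting stuck"]

def ordered_rule(tweet: str) -> bool:
    low = tweet.lower()
    verb_hits = [low.find(v) for v in POSITIVE_VERBS if v in low]
    sit_hits = [low.find(s) for s in NEGATIVE_SITUATIONS if s in low]
    # Sarcastic if some positive verb occurs before some negative situation.
    return bool(verb_hits) and bool(sit_hits) and min(verb_hits) < min(sit_hits)

print(ordered_rule("I just love waiting forever at the clinic"))  # True
```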
f3012301e42a4075ed6d4d2b39b528_6
This is an improvement of about 5% over the baseline, and 40% over the algorithm by <cite>Riloff et al. (2013)</cite>.
extends differences
f3012301e42a4075ed6d4d2b39b528_7
Table 4 shows that we achieve a 10% higher F-score than the best reported F-score of <cite>Riloff et al. (2013)</cite>.
differences
f3012301e42a4075ed6d4d2b39b528_8
Our system also outperforms two past works <cite>(Riloff et al., 2013</cite>; Maynard and Greenwood, 2014), with a 10-20% improvement in F-score.
differences
f3282df3adadf78320e99c09d8384f_0
Following <cite>Gong et al. (2018)</cite>, we consider two document collections heterogeneous if <cite>their</cite> documents differ systematically with respect to vocabulary and/or level of abstraction. With these defining differences, there often also comes a difference in length, which, however, by itself does not make document collections heterogeneous.
background motivation
f3282df3adadf78320e99c09d8384f_1
We demonstrate our method with the Concept-Project matching task (<cite>Gong et al. (2018)</cite>), which is described in the next section.
uses
f3282df3adadf78320e99c09d8384f_2
The annotation was done by undergraduate engineering students. <cite>Gong et al. (2018)</cite> do not provide any specification or annotation guidelines for the semantics of the 'matches' relation to be annotated. Instead, <cite>they</cite> create gold-standard annotations based on a majority vote over three manual annotations.
differences
f3282df3adadf78320e99c09d8384f_3
The extent to which this information is used by <cite>Gong et al. (2018)</cite> is not entirely clear, so we experiment with several setups (cf. Section 4).
extends motivation
f3282df3adadf78320e99c09d8384f_4
**<cite>GONG ET AL. (2018)</cite>'S APPROACH** The approach by <cite>Gong et al. (2018)</cite> is based on the idea that the longer document in the pair is reduced to a set of topics which capture the essence of the document in a way that eliminates the effect of a potential length difference.
background
f3282df3adadf78320e99c09d8384f_5
<cite>Gong et al. (2018)</cite> motivate <cite>their</cite> approach mainly with the length mismatch argument, which <cite>they</cite> claim makes approaches relying on document representations (incl. vector averaging) unsuitable.
background
f3282df3adadf78320e99c09d8384f_6
Accordingly, <cite>they</cite> use Doc2Vec (Le and Mikolov (2014)) as one of their baselines, and show that its performance is inferior to <cite>their</cite> method.
background
f3282df3adadf78320e99c09d8384f_7
<cite>They</cite> do not, however, provide a much simpler averaging-based baseline.
background
f3282df3adadf78320e99c09d8384f_8
As a second baseline, <cite>they</cite> use Word Mover's Distance (Kusner et al. (2015)), which is based on word-level distances rather than the distance of global document representations, but which also fails to be competitive with <cite>their</cite> topic-based method.
background
f3282df3adadf78320e99c09d8384f_9
<cite>Gong et al. (2018)</cite> use two different sets of word embeddings: One (topic wiki) was trained on a full English Wikipedia dump, the other (wiki science) on a smaller subset of the former dump which only contained science articles.
background
f3282df3adadf78320e99c09d8384f_10
We implement this standard measure (AVG COS SIM) as a baseline for both our method and the method by <cite>Gong et al. (2018)</cite>.
extends
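A minimal sketch of such an AVG COS SIM baseline: each document is the (optionally weighted) average of its word vectors, and a pair is scored by cosine similarity. The embedding lookup and weighting scheme are placeholders.

```python
import numpy as np

def doc_vector(tokens, emb, weights=None):
    # Average the embeddings of in-vocabulary tokens, optionally weighted
    # (e.g., by inverse document frequency).
    vecs = [emb[t] * (weights.get(t, 1.0) if weights else 1.0)
            for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def avg_cos_sim(tokens_a, tokens_b, emb, weights=None):
    u = doc_vector(tokens_a, emb, weights)
    v = doc_vector(tokens_b, emb, weights)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```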
f3282df3adadf78320e99c09d8384f_11
Parameter tuning experiments were performed on a random subset of 20% of our data set (54% positive). Note that <cite>Gong et al. (2018)</cite> used only 10% of <cite>their</cite> 537-instance data set as tuning data.
differences
f3282df3adadf78320e99c09d8384f_12
Since the original data split used by <cite>Gong et al. (2018)</cite> is unknown, we cannot exactly replicate <cite>their</cite> settings, but we also perform ten runs using a randomly selected 10% of our 408-instance test data set, and report average P, R, F, and standard deviation.
differences
f3282df3adadf78320e99c09d8384f_14
Note that our Both setting is probably the one most similar to the concept input used by <cite>Gong et al. (2018)</cite> .
similarities
f3282df3adadf78320e99c09d8384f_15
This result corroborates our findings on the tuning data, and clearly contradicts the (implicit) claim made by <cite>Gong et al. (2018)</cite> regarding the infeasibility of document-level matching for documents of different lengths.
differences
f3282df3adadf78320e99c09d8384f_16
The second, more important finding is that our proposed TOP n COS SIM AVG measure is also very competitive, as it outperforms both systems by <cite>Gong et al. (2018)</cite> in two out of three settings.
similarities
f3282df3adadf78320e99c09d8384f_17
This is all the more important as we exclusively employ off-the-shelf, general-purpose embeddings, while <cite>Gong et al. (2018)</cite> reach <cite>their</cite> best results with a much more sophisticated system and with embeddings that were custom-trained for the science domain.
differences
f3282df3adadf78320e99c09d8384f_18
Thus, while the performance of our proposed TOP n COS SIM AVG method is superior to the approach by <cite>Gong et al. (2018)</cite> , it is itself outperformed by the 'baseline' AVG COS SIM method with appropriate weighting.
differences
f3282df3adadf78320e99c09d8384f_19
We presented a simple method for semantic matching of documents from heterogeneous collections as a solution to the Concept-Project matching task by <cite>Gong et al. (2018)</cite> .
motivation
f3282df3adadf78320e99c09d8384f_20
Another result is that, contrary to the claim made by <cite>Gong et al. (2018)</cite> , the standard averaging approach does indeed work very well even for heterogeneous document collections, if appropriate weighting is applied.
differences
f3f61d50929f862e263e3f658852bc_0
Section 3 then empirically analyzes correlations in two recent argument corpora, one annotated for 15 well-defined quality dimensions taken from theory (Wachsmuth et al., 2017a) and one with 17 reasons for quality differences phrased spontaneously in practice <cite>(Habernal and Gurevych, 2016a)</cite>.
background
f3f61d50929f862e263e3f658852bc_1
Conv: A is more convincing than B. (Table 2: the 17+1 practical reason labels given in the corpus of <cite>Habernal and Gurevych (2016a)</cite>.)
background
f3f61d50929f862e263e3f658852bc_2
Without giving any guidelines, the authors also asked for reasons as to why A is more convincing than B. In a follow-up study <cite>(Habernal and Gurevych, 2016a)</cite>, these reasons were used to derive a hierarchical annotation scheme.
background
f3f61d50929f862e263e3f658852bc_3
9,111 argument pairs were then labeled with one or more of the 17 reason labels in Table 2. (Table 3: the quality dimensions of Wachsmuth et al. (2017a) given for each of the 17+1 reason labels of <cite>Habernal and Gurevych (2016a)</cite>, grouped into negative properties of argument B and positive properties of argument A.)
background
f3f61d50929f862e263e3f658852bc_4
For Hypotheses 1 and 2, we consider all 736 pairs of arguments from <cite>Habernal and Gurevych (2016a)</cite> where both have been annotated by Wachsmuth et al. (2017a) .
similarities uses
f3f61d50929f862e263e3f658852bc_5
Besides, the descriptions of 6-2 and 6-3 sound like local but cor-… (Table 4: the mean rating for each quality dimension of those arguments from Wachsmuth et al. (2017a), given for each reason label <cite>(Habernal and Gurevych, 2016a)</cite>.)
background
f3f61d50929f862e263e3f658852bc_6
For explicitness, we computed the mean rating for each quality dimension of all arguments from Wachsmuth et al. (2017a) with a particular reason label from <cite>Habernal and Gurevych (2016a)</cite> .
similarities uses
f3f61d50929f862e263e3f658852bc_7
Also, Table 4 reveals which reasons predict absolute differences most: the mean ratings of 7-3 (off-topic) are very low, indicating a strong negative impact, while 6-3 (irrelevant reasons) still shows rather… (Footnote 3: While the differences seem not very large, this is expected, as in many argument pairs from <cite>Habernal and Gurevych (2016a)</cite> both arguments are strong or weak, respectively.)
background
f3f61d50929f862e263e3f658852bc_8
Regarding simplification, the most common practical reasons of <cite>Habernal and Gurevych (2016a)</cite> imply what to focus on.
background
f4becae9cd7eeaa7fd3085ff904aaa_0
Recently, there has been much interest in applying neural network models to solve the problem, where little or no linguistic analysis is performed except for tokenization <cite>(Filippova et al., 2015</cite>; Rush et al., 2015; Chopra et al., 2016).
background
f4becae9cd7eeaa7fd3085ff904aaa_1
For example, <cite>Filippova et al. (2015)</cite> used close to two million sentence pairs for training. (Figure 1: examples of in-domain and out-of-domain results by a standard abstractive sequence-to-sequence model trained on the Gigaword corpus.)
background
f4becae9cd7eeaa7fd3085ff904aaa_2
Although neural network-based models have achieved good performance on this task recently, they tend to suffer from two problems: (1) they require a large amount of data for training; for example, <cite>Filippova et al. (2015)</cite> used close to two million sentence pairs. (Figure 1: examples of in-domain and out-of-domain results by a standard abstractive sequence-to-sequence model trained on the Gigaword corpus.)
background motivation
f4becae9cd7eeaa7fd3085ff904aaa_3
To this end, we extend the deletion-based LSTM model for sentence compression by <cite>Filippova et al. (2015)</cite>.
extends
f4becae9cd7eeaa7fd3085ff904aaa_4
Specifically, we propose two major changes to the model by <cite>Filippova et al. (2015)</cite>: (1) we explicitly introduce POS embeddings and dependency relation embeddings into the neural network model.
extends
f4becae9cd7eeaa7fd3085ff904aaa_5
We evaluate our method using around 10,000 sentence pairs released by <cite>Filippova et al. (2015)</cite> and two other data sets representing out-of-domain data.
uses
f4becae9cd7eeaa7fd3085ff904aaa_6
Our problem setup is the same as that of <cite>Filippova et al. (2015)</cite>.
uses
f4becae9cd7eeaa7fd3085ff904aaa_7
This base model is largely based on the model by <cite>Filippova et al. (2015)</cite>, with some differences, which will be explained below.
uses differences
f4becae9cd7eeaa7fd3085ff904aaa_8
Following <cite>Filippova et al. (2015)</cite>, our bi-LSTM has three layers, as shown in Figure 2.
uses
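An illustrative PyTorch sketch of a three-layer bi-LSTM tagger for deletion-based compression, in the spirit of the base model described above; all dimensions are invented, and this is not the configuration of Filippova et al. (2015).

```python
import torch.nn as nn

class BiLSTMDeleter(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Three stacked bi-LSTM layers over the token embeddings.
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)  # keep/delete logits per token

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                    # (batch, seq_len, 2)
```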
f4becae9cd7eeaa7fd3085ff904aaa_9
There are some differences between our base model and the LSTM model by <cite>Filippova et al. (2015)</cite>.
differences
f4becae9cd7eeaa7fd3085ff904aaa_10
(1) <cite>Filippova et al. (2015)</cite> first encoded the input sentence in its reverse order using the same LSTM before processing the sentence for sequence labeling.
background
f4becae9cd7eeaa7fd3085ff904aaa_11
There are some differences between our base model and the LSTM model by <cite>Filippova et al. (2015)</cite>. (1) <cite>Filippova et al. (2015)</cite> first encoded the input sentence in its reverse order using the same LSTM before processing the sentence for sequence labeling.
differences
f4becae9cd7eeaa7fd3085ff904aaa_12
(2) <cite>Filippova et al. (2015)</cite> used only a single-directional LSTM, while we use a bi-LSTM to capture contextual information from both directions.
differences
f4becae9cd7eeaa7fd3085ff904aaa_13
(3) Although <cite>Filippova et al. (2015)</cite> did not use any syntactic information in their basic model, they introduced some features based on dependency parse trees in their advanced models.
background
f4becae9cd7eeaa7fd3085ff904aaa_14
There are some differences between our base model and the LSTM model by <cite>Filippova et al. (2015)</cite>. (3) Although <cite>Filippova et al. (2015)</cite> did not use any syntactic information in their basic model, they introduced some features based on dependency parse trees in their advanced models.
differences
f4becae9cd7eeaa7fd3085ff904aaa_15
(4) <cite>Filippova et al. (2015)</cite> combined the predicted y_{i-1} with w_i to help predict y_i.
background
f4becae9cd7eeaa7fd3085ff904aaa_16
(4) <cite>Filippova et al. (2015)</cite> combined the predicted y_{i-1} with w_i to help predict y_i. This adds some dependency between consecutive labels. We do not do this, because we will later introduce an ILP layer that models dependencies among labels.
differences
f4becae9cd7eeaa7fd3085ff904aaa_17
For example, the method above, as well as the original method by <cite>Filippova et al. (2015)</cite>, cannot impose any length constraint on the compressed sentences.
motivation
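To make the motivation concrete, here is a sketch of how an ILP layer can impose a hard length constraint that the LSTM tagger alone cannot: keep the tokens with the highest keep-probabilities subject to a budget. It uses the pulp solver and placeholder probabilities; it is not the paper's full ILP formulation.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

def compress(keep_probs, max_tokens):
    n = len(keep_probs)
    prob = LpProblem("compression", LpMaximize)
    x = [LpVariable(f"x{i}", cat="Binary") for i in range(n)]  # keep token i?
    prob += lpSum(keep_probs[i] * x[i] for i in range(n))      # total keep score
    prob += lpSum(x) <= max_tokens                             # hard length budget
    prob.solve()
    return [i for i in range(n) if x[i].value() == 1]

print(compress([0.9, 0.2, 0.8, 0.4], max_tokens=2))  # -> [0, 2]
```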
f4becae9cd7eeaa7fd3085ff904aaa_18
Google News: The first dataset contains 10,000 sentence pairs collected and released by <cite>Filippova et al. (2015)</cite>.
uses
f4becae9cd7eeaa7fd3085ff904aaa_19
We compare our methods with a few baselines. LSTM: This is the basic LSTM-based deletion method proposed by <cite>Filippova et al. (2015)</cite>.
uses
f4becae9cd7eeaa7fd3085ff904aaa_20
LSTM+: This is the advanced version of the model proposed by <cite>Filippova et al. (2015)</cite>, where the authors incorporated some dependency parse tree information into the LSTM model and used the prediction on the previous word to help the prediction on the current word.
background
f4becae9cd7eeaa7fd3085ff904aaa_21
We compare our methods with a few baselines. LSTM: This is the basic LSTM-based deletion method proposed by <cite>Filippova et al. (2015)</cite>. LSTM+: This is the advanced version of the model proposed by <cite>Filippova et al. (2015)</cite>, where the authors incorporated some dependency parse tree information into the LSTM model and used the prediction on the previous word to help the prediction on the current word.
uses
f4becae9cd7eeaa7fd3085ff904aaa_22
We took the first 1,000 sentence pairs from Google News as the test set, following the same practice as <cite>Filippova et al. (2015)</cite>.
uses
f4becae9cd7eeaa7fd3085ff904aaa_23
(2) In the in-domain setting, with the same amount of training data (8,000 pairs), our BiLSTM method with syntactic features (BiLSTM+SynFeat and BiLSTM+SynFeat+ILP) performs similarly to or better than the LSTM+ method proposed by <cite>Filippova et al. (2015)</cite>, in terms of both F1 and accuracy.
differences
f4becae9cd7eeaa7fd3085ff904aaa_24
In order to evaluate whether the sentences generated by our method are readable, we adopt the manual evaluation procedure of <cite>Filippova et al. (2015)</cite> to compare our method with LSTM+ and Traditional ILP in terms of readability and informativeness.
uses
f4becae9cd7eeaa7fd3085ff904aaa_25
Our work is based on the deletion-based LSTM model for sentence compression by <cite>Filippova et al. (2015)</cite>.
uses