Columns: id (string, lengths 32–33) · x (string, lengths 41–1.75k) · y (string, lengths 4–39)
fb87be2081ce1515dd8dbda46b4f3f_17
The results of the baselines on LexMTurk are from<cite> [Paetzold and Specia, 2017b]</cite> and the results on BenchLS and NNSeval are from<cite> [Paetzold and Specia, 2017a]</cite>.
uses
fb87be2081ce1515dd8dbda46b4f3f_18
The results of the baselines on LexMTurk are from<cite> [Paetzold and Specia, 2017b]</cite> and the results on BenchLS and NNSeval are from<cite> [Paetzold and Specia, 2017a]</cite>. As can be seen, despite being entirely unsupervised, our model BERT-LS obtains the highest F1 scores on the three datasets (NNSeval: http://ghpaetzold.github.io/data/NNSeval.zip), largely outperforming the previous best baselines.
differences
fb87be2081ce1515dd8dbda46b4f3f_19
We choose the two state-of-the-art baselines based on word embeddings (Glavaš [Glavaš and Štajner, 2015] and Paetzold-NE<cite> [Paetzold and Specia, 2017a]</cite>) for comparison.
uses
fb87be2081ce1515dd8dbda46b4f3f_20
We choose the two state-of-the-art baselines based on word embeddings (Glavaš [Glavaš and Štajner, 2015] and Paetzold-NE<cite> [Paetzold and Specia, 2017a]</cite>) for comparison. From Table 2, we observe that BERT-LS produces the best simplification candidates for complex words compared with the two baselines based on word embeddings.
differences
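The two rows above describe BERT-LS generating simplification candidates with a masked language model. As a rough, hedged sketch of that idea (not the exact BERT-LS pipeline, which pairs the original sentence with its masked copy and adds ranking features; the model choice and the `candidates` helper here are illustrative):

```python
# Sketch: propose substitution candidates for a complex word with a masked LM.
# This only approximates the idea behind BERT-LS.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def candidates(sentence: str, complex_word: str, k: int = 10):
    masked = sentence.replace(complex_word, fill_mask.tokenizer.mask_token, 1)
    preds = fill_mask(masked, top_k=k)
    # Drop the complex word itself; the rest are candidate substitutes.
    return [p["token_str"] for p in preds if p["token_str"] != complex_word]

print(candidates("The cat perched on the mat.", "perched"))
```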
fb87be2081ce1515dd8dbda46b4f3f_21
We adopt the following two well-known metrics used by these works [Horn et al., 2014;<cite> Paetzold and Specia, 2017b]</cite>.
uses
fb87be2081ce1515dd8dbda46b4f3f_22
The results of the baselines on LexMTurk are from<cite> [Paetzold and Specia, 2017b]</cite> and the results on BenchLS and NNSeval are from<cite> [Paetzold and Specia, 2017a]</cite>.
uses
fb87be2081ce1515dd8dbda46b4f3f_23
The results of the baselines on LexMTurk are from<cite> [Paetzold and Specia, 2017b]</cite> and the results on BenchLS and NNSeval are from<cite> [Paetzold and Specia, 2017a]</cite>. We can see that our method BERT-LS attains the highest Accuracy on all three datasets, with an average increase of 11.6% over the former state-of-the-art baseline (Paetzold-NE).
differences
fbd028e073459b1b4c2d8d99173e15_0
On the FrameNet 1.5 data, additional semi-supervised experiments using gold targets were presented; these were recently outperformed by an approach presented by <cite>Hermann et al. (2014)</cite> that made use of distributed word representations.
differences background
fbd028e073459b1b4c2d8d99173e15_1
On the FrameNet 1.5 data, additional semi-supervised experiments using gold targets were presented; these were recently outperformed by an approach presented by <cite>Hermann et al. (2014)</cite> that made use of distributed word representations.
background differences
fbd028e073459b1b4c2d8d99173e15_2
Subsequently, <cite>Hermann et al. (2014)</cite> used a very similar framework but presented a novel method using distributed word representations for better frame identification, outperforming the aforementioned update to SEMAFOR.
differences
fbd028e073459b1b4c2d8d99173e15_3
For example, the training dataset used for the state-of-the-art system of <cite>Hermann et al. (2014)</cite> contains only 4,458 labeled targets, which is approximately 40 times fewer than the number of annotated targets in Ontonotes 4.0 (Hovy et al., 2006), a standard NLP dataset containing PropBank-style verb annotations.
background
fbd028e073459b1b4c2d8d99173e15_4
Given the wide body of work in frame-semantic analysis of text, and recent interest in using frame-semantic parsers in NLP applications, the future directions of research look exciting. First and foremost, to improve the quality of automatic frame-semantic parsers, the coverage of the FrameNet lexicon on free English text and the number of annotated targets need to increase. For example, the training dataset used for the state-of-the-art system of <cite>Hermann et al. (2014)</cite> contains only 4,458 labeled targets, which is approximately 40 times fewer than the number of annotated targets in Ontonotes 4.0 (Hovy et al., 2006), a standard NLP dataset containing PropBank-style verb annotations.
background future_work
fc3775c0d23292160f5c5eb86861be_0
The dataset we used is a Romanian language resource containing a total of 480,722 inflected forms of Romanian nouns and adjectives. It was extracted from the text form of the morphological dictionary RoMorphoDict<cite> (Barbu, 2008)</cite>, which was also used by Nastase and Popescu (2009) for their Romanian classifier, where every entry has the following structure:
uses
fc4b56c865c8a9d0f6a7f5ae37ba96_0
Table 1 presents the statistics of the available training and LM corpora for the constrained (C) systems in WMT15 <cite>(Bojar et al., 2015)</cite> as well as the statistics of the ParFDA selected training and LM data.
uses
fc4b56c865c8a9d0f6a7f5ae37ba96_1
We run ParFDA SMT experiments using Moses (Koehn et al., 2007) in all language pairs in WMT15 <cite>(Bojar et al., 2015)</cite> and obtain SMT performance close to the top constrained Moses systems.
uses similarities
fc4b56c865c8a9d0f6a7f5ae37ba96_2
We run ParFDA SMT experiments for all language pairs in both directions in the WMT15 translation task <cite>(Bojar et al., 2015)</cite>, which include English-Czech (en-cs), English-German (en-de), English-Finnish (en-fi), English-French (en-fr), and English-Russian (en-ru).
uses
fc4b56c865c8a9d0f6a7f5ae37ba96_3
Table 1 presents the statistics of the available training and LM corpora for the constrained (C) systems in WMT15 <cite>(Bojar et al., 2015)</cite> as well as the statistics of the ParFDA selected training and LM data.
uses
fc4b56c865c8a9d0f6a7f5ae37ba96_4
We run ParFDA SMT experiments using Moses (Koehn et al., 2007) in all language pairs in WMT15 <cite>(Bojar et al., 2015)</cite> and obtain SMT performance close to the top constrained Moses systems.
uses
fc4b56c865c8a9d0f6a7f5ae37ba96_5
We run ParFDA SMT experiments for all language pairs in both directions in the WMT15 translation task <cite>(Bojar et al., 2015)</cite>, which include English-Czech (en-cs), English-German (en-de), English-Finnish (en-fi), English-French (en-fr), and English-Russian (en-ru).
uses
fc5de471ba4cc82a2156ed25d2c78b_0
An end-to-end approach [1] [2] [3] [4] <cite>[5]</cite> [6] [7] is particularly appealing for source languages with no written form, or for endangered languages where translations into a high-resource language may be easier to collect than transcriptions [8].
background
fc5de471ba4cc82a2156ed25d2c78b_1
An end-to-end approach [1] [2] [3] [4] <cite>[5]</cite> [6] [7] is particularly appealing for source languages with no written form, or for endangered languages where translations into a high-resource language may be easier to collect than transcriptions [8]. However, building high-quality end-to-end AST with little parallel data is challenging, and has led researchers to explore how other sources of data could be used to help.
background motivation
fc5de471ba4cc82a2156ed25d2c78b_2
For example, Bansal et al. <cite>[5]</cite> showed that pre-training on either English or French ASR improved their Spanish-English AST system (trained on 20 hours of parallel data), and Tian [10] obtained improvements on an 8-hour Swahili-English AST dataset using English ASR pretraining.
background
fc5de471ba4cc82a2156ed25d2c78b_3
To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. <cite>[5]</cite>, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data.
extends uses
fc5de471ba4cc82a2156ed25d2c78b_4
For both ASR and AST tasks we use the same end-to-end system architecture shown in Figure 1: the encoder-decoder model from <cite>[5]</cite>, which itself is adapted from [2], [4] and [3].
uses
fc5de471ba4cc82a2156ed25d2c78b_5
Previous experiments <cite>[5]</cite> showed that the encoder accounts for most of the benefits of transferring the parameters.
background
fc5de471ba4cc82a2156ed25d2c78b_6
Finally, to reproduce one of the experiments from <cite>[5]</cite>, we pretrained one model using 300 hours of Switchboard English [18].
uses
fc5de471ba4cc82a2156ed25d2c78b_7
However, as noted by <cite>[5]</cite>, the Fisher Spanish speech contains many words that are actually in English (code-switching), so pretraining on English may provide an unfair advantage relative to other languages.
background motivation
fc5de471ba4cc82a2156ed25d2c78b_8
Following the architecture and training procedure described in <cite>[5]</cite>, input speech features are fed into a stack of two CNN layers.
uses
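As an illustration of the encoder front end described in the row above, a minimal sketch in PyTorch (an assumed framework; channel counts, kernel sizes, and strides are invented for illustration, not taken from [5]):

```python
import torch
import torch.nn as nn

# Illustrative only: two convolutional layers over speech features
# shaped (batch, channels=1, time, freq); sizes are made up.
encoder_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)

feats = torch.randn(4, 1, 200, 40)  # 4 utterances, 200 frames, 40 filterbanks
out = encoder_cnn(feats)            # time and frequency downsampled by ~4x
print(out.shape)
```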
fc5de471ba4cc82a2156ed25d2c78b_9
We use code and hyperparameter settings from <cite>[5]</cite>: the Adam optimizer [25] with an initial learning rate of 0.001, which we decay by a factor of 0.5 based on the dev set BLEU score.
uses
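A sketch of the stated optimizer setup, with the decay trigger spelled out (exactly when the decay fires is our assumption; the row above only gives the initial learning rate and the 0.5 factor):

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the AST model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

best_bleu = float("-inf")

def maybe_decay_lr(dev_bleu: float) -> None:
    # Assumed policy: halve the lr whenever dev BLEU fails to improve.
    global best_bleu
    if dev_bleu > best_bleu:
        best_bleu = dev_bleu
        return
    for group in optimizer.param_groups:
        group["lr"] *= 0.5
```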
fc5de471ba4cc82a2156ed25d2c78b_10
Our baseline 20-hour AST system obtains a BLEU score of 10.3 (Table 1, first row), 0.5 BLEU points lower than that reported by <cite>[5]</cite>.
differences
fc5de471ba4cc82a2156ed25d2c78b_11
Moreover, pretraining on the large Chinese dataset yields a bigger improvement than either of these: 4.3 BLEU points. This is nearly as much as the 6-point improvement reported by <cite>[5]</cite> when pretraining on 100 hours of English data, which is especially surprising given not only that Chinese is very different from Spanish, but also that the Spanish data contains some English words.
similarities
fc5de471ba4cc82a2156ed25d2c78b_12
This is nearly as much as the 6-point improvement reported by <cite>[5]</cite> when pretraining on 100 hours of English data, which is especially surprising given not only that Chinese is very different from Spanish, but also that the Spanish data contains some English words.
similarities
fca75d394e9f7007e1f674c7b99794_0
Litman et al. <cite>(2016)</cite> found significant group-level differences in pitch, jitter, and shimmer between the first and second halves of conversations.
background
fca75d394e9f7007e1f674c7b99794_1
Finally, to support our studies, we have developed an innovative representation of multi-party entrainment by extending the measurement from Litman et al. <cite>(2016)</cite> and adapting it to study the linguistic style features of Pennebaker and King (1999).
extends
fca75d394e9f7007e1f674c7b99794_2
The freely available Teams Corpus<cite> (Litman et al. 2016)</cite> is the dataset used here. The corpus also includes survey data.
uses
fca75d394e9f7007e1f674c7b99794_3
Recently, Litman et al. <cite>(2016)</cite> proposed a method to compute multi-party entrainment on acoustic-prosodic features based on the same Teams Corpus as used here.
similarities
fca75d394e9f7007e1f674c7b99794_4
More specifically, TDiff_unw (unweighted team difference) converts the team difference of Litman et al. <cite>(2016)</cite> to deal with multiple feature categories.
uses
fca75d394e9f7007e1f674c7b99794_5
Litman et al. <cite>(2016)</cite> then define convergence, a type of entrainment measuring the increase in feature similarity, by comparing the TDiff of two non-overlapping temporal intervals of a game, as in Equation 4.
similarities
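A worked sketch of this measure under one plausible reading (both the TDiff formulation, each speaker's distance from the rest of the team, and the sign convention of Equation 4 are assumptions, not quotes of Litman et al. (2016)):

```python
from statistics import mean

def tdiff(values):
    # Team difference: average distance of each speaker's feature value
    # from the mean of the other team members. Assumed formulation.
    return mean(
        abs(v - mean(values[:i] + values[i + 1:])) for i, v in enumerate(values)
    )

def convergence(early, late):
    # Positive when speakers grew more similar (TDiff decreased over time).
    return tdiff(early) - tdiff(late)

print(convergence(early=[180.0, 210.0, 240.0], late=[200.0, 205.0, 212.0]))
```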
fca75d394e9f7007e1f674c7b99794_6
Since Litman et al. <cite>(2016)</cite> previously found that in the Teams corpus the highest acoustic-prosodic convergence occurred within the first and last three minutes, we used this finding to define our n. We evenly divided each game, which was limited to 30 minutes, into ten intervals, so each interval is less than three minutes.
uses
fdfb8fbdb8544dca17b1aeba768124_0
In this paper, we compare <cite>CFG filtering techniques for LTAG</cite> (Harbusch, 1990; <cite>Poller and Becker, 1998</cite>) and HPSG (Torisawa et al., 2000; Kiefer and Krieger, 2000), following an approach to parsing comparison among different grammar formalisms.
uses
fdfb8fbdb8544dca17b1aeba768124_1
An empirical comparison of <cite>CFG filtering techniques for LTAG</cite> and HPSG is presented.
uses
fdfb8fbdb8544dca17b1aeba768124_2
We performed a comparison between the existing <cite>CFG filtering techniques for LTAG</cite> (<cite>Poller and Becker, 1998</cite>) and HPSG (Torisawa et al., 2000), using strongly equivalent grammars obtained by converting LTAGs extracted from the Penn Treebank (Marcus et al., 1993) into HPSG-style.
uses
fdfb8fbdb8544dca17b1aeba768124_3
Investigating the difference between the ways of context-free (CF) approximation of LTAG and HPSG will thereby illuminate a way of further optimization for both techniques. We performed a comparison between the existing <cite>CFG filtering techniques for LTAG</cite> (<cite>Poller and Becker, 1998</cite>) and HPSG (Torisawa et al., 2000), using strongly equivalent grammars obtained by converting LTAGs extracted from the Penn Treebank (Marcus et al., 1993) into HPSG-style.
motivation
fdfb8fbdb8544dca17b1aeba768124_4
In this section, we introduce a grammar conversion and <cite>CFG filtering</cite> (Harbusch, 1990; <cite>Poller and Becker, 1998</cite>; Torisawa et al., 2000; Kiefer and Krieger, 2000).
uses background
fdfb8fbdb8544dca17b1aeba768124_5
**<cite>CFG FILTERING</cite> TECHNIQUES**
background
fdfb8fbdb8544dca17b1aeba768124_6
An initial offline step of <cite>CFG filtering</cite> is performed to approximate a given grammar with a CFG.
background
fdfb8fbdb8544dca17b1aeba768124_7
The <cite>CFG filtering</cite> generally consists of two steps.
background
fdfb8fbdb8544dca17b1aeba768124_8
The parsers with <cite>CFG filtering</cite> used in our experiments follow the above parsing strategy, but are different in the way the CF approximation and the elimination of impossible parse trees in phase 2 are performed.
extends uses differences
fdfb8fbdb8544dca17b1aeba768124_9
In <cite>CFG filtering techniques for LTAG</cite> (Harbusch, 1990; <cite>Poller and Becker, 1998</cite>), every branching of elementary trees in a given grammar is extracted as a CFG rule, as shown in Figure 1.
uses background
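To make the extraction step concrete, a small illustrative sketch (the tree encoding and rule format are our assumptions) that emits one CFG rule per branching:

```python
# Each tree node is (label, [children]); leaves are plain strings.
# Every branching parent -> children becomes one CFG rule.
def extract_cfg_rules(node, rules=None):
    if rules is None:
        rules = set()
    label, children = node
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    rules.add((label, rhs))
    for c in children:
        if not isinstance(c, str):
            extract_cfg_rules(c, rules)
    return rules

tree = ("S", [("NP", ["we"]), ("VP", [("V", ["run"]), ("NP", ["tests"])])])
for lhs, rhs in sorted(extract_cfg_rules(tree)):
    print(f"{lhs} -> {' '.join(rhs)}")
```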
fdfb8fbdb8544dca17b1aeba768124_10
In this section, we compare a pair of <cite>CFG filtering techniques for LTAG</cite> (<cite>Poller and Becker, 1998</cite>) and HPSG (Torisawa et al., 2000) described in Sections 2.2.1 and 2.2.2.
uses
fdfb8fbdb8544dca17b1aeba768124_11
We can thereby construct another <cite>CFG filtering for LTAG</cite> by combining this CFG filter with an existing LTAG parsing algorithm (van Noord, 1994).
uses
fdfb8fbdb8544dca17b1aeba768124_12
Because the processed portions of generated tree structures are no longer used later, we regard the unprocessed portions of the tree structures as nonterminals of the CFG. We can thereby construct another <cite>CFG filtering for LTAG</cite> by combining this CFG filter with an existing LTAG parsing algorithm (van Noord, 1994).
uses
fdfb8fbdb8544dca17b1aeba768124_13
Experimental results showed that the existing CF approximation of HPSG (Torisawa et al., 2000) produced a more effective filter than that of LTAG (<cite>Poller and Becker, 1998</cite>).
differences
fdfb8fbdb8544dca17b1aeba768124_14
**CONCLUSION AND FUTURE DIRECTION** We are going to integrate the advantage of the CF approximation of HPSG into that of LTAG in order to establish another <cite>CFG filtering for LTAG</cite>.
future_work
fdfb8fbdb8544dca17b1aeba768124_15
We are going to integrate the advantage of the CF approximation of HPSG into that of LTAG in order to establish another <cite>CFG filtering for LTAG</cite>.
uses future_work
fe3e71020dfb32927f5c348a6fdcfc_0
We bring together two strands of research: one strand uses Reinforcement Learning to automatically optimise dialogue strategies, e.g. (Singh et al., 2002), (Henderson et al., 2008), (Rieser and Lemon, 2008a;<cite> Rieser and Lemon, 2008b)</cite>; the other focuses on automatic evaluation of dialogue strategies, e.g. the PARADISE framework (Walker et al., 1997), and meta-evaluation of dialogue metrics, e.g. (Engelbrecht and Möller, 2007; Paek, 2007).
uses
fe3e71020dfb32927f5c348a6fdcfc_1
In the following we evaluate different aspects of an objective function obtained from Wizard-of-Oz (WOZ) data<cite> (Rieser and Lemon, 2008b)</cite>.
uses
fe3e71020dfb32927f5c348a6fdcfc_2
We therefore formulate dialogue learning as a hierarchical optimisation problem<cite> (Rieser and Lemon, 2008b)</cite>.
uses
fe3e71020dfb32927f5c348a6fdcfc_3
In the following, the overall method is briefly summarised. Please see <cite>(Rieser and Lemon, 2008b</cite>; Rieser, 2008) for details.
background
fe3e71020dfb32927f5c348a6fdcfc_4
The PARADISE regression model is constructed from 3 different corpora: the SAMMIE WOZ experiment (Rieser et al., 2005) , and the iTalk system used for the user tests<cite> (Rieser and Lemon, 2008b)</cite> running the supervised baseline policy and the RL-based policy.
uses
fe3e71020dfb32927f5c348a6fdcfc_5
In previous work we showed that the RL-based policy significantly outperforms the supervised policy in terms of improved user ratings and dialogue performance measures<cite> (Rieser and Lemon, 2008b)</cite>.
background
fe3e71020dfb32927f5c348a6fdcfc_6
The SL policy, in contrast, did not learn an upper boundary for when to show items on the screen, since the wizards did not follow a specific pattern<cite> (Rieser and Lemon, 2008b)</cite>.
differences
fe443d5e13b525cbdfa58dafb83162_0
Recently, empty-element recovery for Chinese has begun to receive attention: <cite>Yang and Xue (2010)</cite> treat it as a classification problem, while Chung and Gildea (2010) pursue several approaches for both Korean and Chinese, and explore applications to machine translation.
background
fe443d5e13b525cbdfa58dafb83162_1
The method is language-independent and performs very well on both languages we tested it on: for English, it outperforms the best published method we are aware of (Schmid, 2006), and for Chinese, it outperforms the method of <cite>Yang and Xue (2010)</cite>.
differences
fe443d5e13b525cbdfa58dafb83162_2
<cite>Yang and Xue (2010)</cite> simply count unlabeled empty elements: items are (i, i) for each empty element, where i is its position.
background
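Under this item definition, scoring reduces to matching position pairs; a minimal sketch (our illustration, with hypothetical inputs):

```python
from collections import Counter

def prf(gold_positions, pred_positions):
    # Items are (i, i) pairs, i.e. bare positions of unlabeled empty elements,
    # so precision/recall are computed over the two multisets of positions.
    gold, pred = Counter(gold_positions), Counter(pred_positions)
    matched = sum((gold & pred).values())
    p = matched / sum(pred.values()) if pred else 0.0
    r = matched / sum(gold.values()) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(prf(gold_positions=[0, 3, 7], pred_positions=[0, 3, 5]))
```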
fe443d5e13b525cbdfa58dafb83162_4
The unlabeled empty elements column shows that our system outperforms the baseline system of <cite>Yang and Xue (2010)</cite> .
differences
fe443d5e13b525cbdfa58dafb83162_5
Our system outperformed that of <cite>Yang and Xue (2010)</cite> especially on *pro*, used for dropped arguments, and *T*, used for relative clauses and topicalization.
differences
fe8d369d4a6f940a1eb25aa7c9b4fe_0
It is common practice to normalize the model score by translation length (length normalization) to eliminate this system bias<cite> (Wu et al., 2016)</cite>.
background
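As a worked example of length normalization, a sketch using the penalty of Wu et al. (2016), where alpha is a tuned constant (0.6 here is illustrative):

```python
# Length normalization: divide the model's log probability by a length
# penalty so that short hypotheses stop winning by default.
def length_normalized_score(logprob: float, length: int, alpha: float = 0.6) -> float:
    lp = ((5 + length) ** alpha) / ((5 + 1) ** alpha)
    return logprob / lp

# A short and a long hypothesis with the same per-word log probability:
print(length_normalized_score(-4.0, 4))    # short
print(length_normalized_score(-10.0, 10))  # long; less heavily penalized now
```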
fe8d369d4a6f940a1eb25aa7c9b4fe_1
Alternatively, one can rerank the n-best outputs with coverage-sensitive models, but this method only affects the final output list, which has a very limited scope<cite> (Wu et al., 2016)</cite>.
background
fe8d369d4a6f940a1eb25aa7c9b4fe_2
Given a source position $i$, we define its coverage as the sum of the past attention probabilities, $c_i = \sum_{j=1}^{|y|} a_{ij}$ <cite>(Wu et al., 2016</cite>; Tu et al., 2016).
similarities
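In code, each source position's coverage is simply a column sum of the attention matrix; a minimal sketch, assuming attention is stored as a (target length × source length) array:

```python
import numpy as np

# a[j, i] = attention weight on source position i when emitting target word j.
a = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7],
              [0.1, 0.4, 0.5]])

coverage = a.sum(axis=0)  # c_i = sum over target steps j of a[j, i]
print(coverage)           # one value per source position; can exceed 1.0
```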
fe8d369d4a6f940a1eb25aa7c9b4fe_3
Note that our way of truncation is different from<cite> Wu et al. (2016)</cite>'s, where they clip the coverage into [0, 1] and ignore the fact that a source word may be translated into multiple target words, in which case its coverage should be a value larger than 1.
differences
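The contrast being drawn can be shown directly; a sketch in which the larger ceiling (2.0) is an assumed, illustrative value:

```python
import numpy as np

coverage = np.array([0.3, 1.6, 2.4])  # source words may be covered > 1 time

clipped = np.minimum(coverage, 1.0)    # Wu et al. (2016): clip into [0, 1]
truncated = np.minimum(coverage, 2.0)  # allow values above 1, truncating
                                       # only at a larger (assumed) ceiling
print(clipped, truncated)
```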
fe8d369d4a6f940a1eb25aa7c9b4fe_4
For comparison, we re-implemented the length normalization (LN) and coverage penalty (CP) methods<cite> (Wu et al., 2016)</cite>.
similarities uses
fe8d369d4a6f940a1eb25aa7c9b4fe_5
We used grid search to tune all hyperparameters on the development set, as<cite> Wu et al. (2016)</cite> did.
similarities uses
fe8d369d4a6f940a1eb25aa7c9b4fe_6
The simplest of these is length normalization, which penalizes short translations in decoding<cite> (Wu et al., 2016)</cite>.
background
fe8d369d4a6f940a1eb25aa7c9b4fe_7
Perhaps the work most closely related to this paper is<cite> Wu et al. (2016)</cite>.
background
fe8d369d4a6f940a1eb25aa7c9b4fe_8
Another difference lies in that our coverage model is applied at every beam search step, while<cite> Wu et al. (2016)</cite>'s model affects only a small number of translation outputs.
differences
febb64368c09d03932742fc557f3d3_0
We applied the monolingual sentence alignment algorithm of <cite>Barzilay and Elhadad (2003)</cite>.
uses
febb64368c09d03932742fc557f3d3_1
<cite>Barzilay and Elhadad (2003)</cite> additionally considered every word starting with a capital letter inside a sentence to be a proper name. In German, all nouns (i.e., regular nouns as well as proper names) are capitalized; thus, this approach does not work. We used a list of 61,228 first names to remove at least part of the proper names.
extends background
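A tiny sketch of the described workaround (illustrative; the name list here is a stand-in for the real list of 61,228 first names):

```python
# German capitalizes all nouns, so "capitalized == proper name" fails.
# Instead, filter tokens against a list of known first names.
FIRST_NAMES = {"Anna", "Hans", "Maria", "Peter"}  # stand-in for the full list

def remove_first_names(tokens):
    return [t for t in tokens if t not in FIRST_NAMES]

print(remove_first_names(["Anna", "kauft", "Brot", "in", "Berlin"]))
```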
febb64368c09d03932742fc557f3d3_2
We adapted the hierarchical complete-link clustering method of <cite>Barzilay and Elhadad (2003)</cite>: While the authors claimed to have set a specific number of clusters, we believe this is not generally possible in hierarchical agglomerative clustering. Therefore, we used the largest number of clusters in which all paragraph pairs had a cosine similarity strictly greater than zero. Following the formation of the clusters, lexical similarity between all paragraphs of corresponding AS and LS texts was computed to establish probable mappings between the two sets of clusters.
extends background
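A hedged sketch of the adapted criterion (TF-IDF paragraph vectors, SciPy complete-linkage, and the exact cut rule are our assumptions; cutting just below cosine distance 1 keeps only clusters whose pairs have similarity strictly greater than zero):

```python
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.feature_extraction.text import TfidfVectorizer

paragraphs = [
    "housing support for people with disabilities",
    "finding accessible housing",
    "upcoming cultural events",
    "events calendar for the city",
]

X = TfidfVectorizer().fit_transform(paragraphs).toarray()
d = pdist(X, metric="cosine")      # cosine distance = 1 - cosine similarity
Z = linkage(d, method="complete")  # complete-link agglomerative clustering
# Merge only while every pair in a cluster has similarity > 0 (distance < 1).
labels = fcluster(Z, t=1.0 - 1e-9, criterion="distance")
print(labels)
```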
febb64368c09d03932742fc557f3d3_3
<cite>Barzilay and Elhadad (2003)</cite> used the boosting tool Boostexter (Schapire and Singer, 2000) . All possible cross-combinations of paragraphs from the parallel training data served as training instances. An instance consisted of the cosine similarity of the two paragraphs and a string combining the two cluster IDs. The classification result was extracted from the manual alignments. In order for an AS and an LS paragraph to be aligned, at least one sentence from the LS paragraph had to be aligned to one sentence in the AS paragraph.
background
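A sketch of how such training instances could be assembled from the description above (the arguments `cos_sim`, `cluster_id`, and `aligned` are hypothetical stand-ins for the real similarity function, cluster assignment, and manual alignments):

```python
from itertools import product

def make_instances(as_paras, ls_paras, cos_sim, cluster_id, aligned):
    # One instance per AS x LS paragraph pair: a real-valued cosine
    # similarity plus a string combining the two cluster IDs.
    instances = []
    for a, l in product(as_paras, ls_paras):
        features = (cos_sim(a, l), f"{cluster_id(a)}_{cluster_id(l)}")
        label = (a, l) in aligned  # True if >= 1 sentence pair is aligned
        instances.append((features, label))
    return instances

print(make_instances(
    ["a1", "a2"], ["l1"],
    cos_sim=lambda a, l: 0.5,
    cluster_id=lambda p: "c1",
    aligned={("a1", "l1")},
))
```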
febb64368c09d03932742fc557f3d3_4
Like <cite>Barzilay and Elhadad (2003)</cite>, we performed 200 iterations in Boostexter.
similarities uses
febb64368c09d03932742fc557f3d3_5
We set the skip penalty to 0.001 conforming to the value of <cite>Barzilay and Elhadad (2003)</cite> .
similarities
febb64368c09d03932742fc557f3d3_6
| Approach | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Adapted algorithm of <cite>Barzilay and Elhadad (2003)</cite> | 27.7% | 5.0% | 8.5% |
| Baseline I: First sentence | 88.1% | 4.8% | 9.3% |
| Baseline II: Word in common | 2.2% | 8.2% | 3.5% |

Table 2: Alignment results on test set. 1. Aligning only the first sentence of each text ("First sentence") 2.
extends
febb64368c09d03932742fc557f3d3_7
As can be seen from Table 2, by applying the sentence alignment algorithm of <cite>Barzilay and Elhadad (2003)</cite> we were able to extract only 5% of all reference alignments, while precision was below 30%.
uses
febb64368c09d03932742fc557f3d3_8
In conclusion, none of the three approaches (the adapted algorithm of <cite>Barzilay and Elhadad (2003)</cite> and the two baselines "First sentence" and "Word in common") performed well on our test set.
differences background
febb64368c09d03932742fc557f3d3_9
Compared with the results of <cite>Barzilay and Elhadad (2003)</cite>, who achieved 77% precision at 55.8% recall for their data, our alignment scores were considerably lower (27.7% precision, 5% recall).
differences
febb64368c09d03932742fc557f3d3_10
While <cite>Barzilay and Elhadad (2003)</cite> aligned English/Simple English texts, we dealt with German/Simple German data.
differences
febb64368c09d03932742fc557f3d3_11
In terms of domain, <cite>Barzilay and Elhadad (2003)</cite> used city descriptions from an encyclopedia for their experiments. For these descriptions clustering worked well because all articles had the same structure (paragraphs about culture, sports, etc.). The domain of our corpus was broader: It included information about housing, work, and events for people with disabilities as well as information about the organizations behind the respective websites. Apart from language and domain challenges we observed heavy transformations from AS to LS in our data (Figure 1 shows a sample article in AS and LS). As a result, LS paragraphs were typically very short and the clustering process returned many singleton clusters.
differences background
febb64368c09d03932742fc557f3d3_12
Since all of our data was from the same language, we applied the monolingual sentence alignment approach of <cite>Barzilay and Elhadad (2003)</cite> .
similarities
febb64368c09d03932742fc557f3d3_13
For example, named entity recognition, a preprocessing step to clustering, is harder for German than for English, the language <cite>Barzilay and Elhadad (2003)</cite> worked with. Moreover, German features richer morphology than English, which leads to less lexical overlap when working on the word form level.
background
febb64368c09d03932742fc557f3d3_14
The domain of our corpus was also broader than that of <cite>Barzilay and Elhadad (2003)</cite> , who used city descriptions from an encyclopedia for their experiments. This made it harder to identify common article structures that could be exploited in clustering.
background differences