source_text: string (lengths 27–368)
label: int64 (values 0 or 1)
target_text: string (lengths 1–5.38k)
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
For example, the passive voice pattern “<subject> were kidnapped” and the active voice pattern “kidnapped <direct object>” are merged into a single normalized pattern “kidnapped <patient>”.2 For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes.3 These caseframes c...
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
43 80.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
So, this was a surprise element due to practical reasons, not malice.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We maintain a separate array for each length n containing all n-gram entries sorted in suffix order.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
1 55.8 38.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
At each point during the derivation, the parser has a choice between pushing the next input token onto the stack – with or without adding an arc from the token on top of the stack to the token pushed – and popping a token from the stack – with or without adding an arc from the next input token to the token popped.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best previously published results.
0
When a comparison against previous results requires additional pre-processing, we state it explicitly to allow the reader to replicate the reported results.
Because many systems performed similarly, they were not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Pairwise comparison is done using the sign test.
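To make the pairwise procedure concrete, here is a minimal sketch (with made-up judgement scores, not the evaluation's actual data) of a two-sided sign test over per-sentence comparisons: ties are discarded and the remaining wins are tested against a fair coin.

```python
from math import comb

def sign_test(scores_a, scores_b):
    """Two-sided sign test over paired per-sentence judgements (toy sketch)."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n = wins + losses                        # ties are discarded
    if n == 0:
        return 1.0
    k = min(wins, losses)
    p_one_sided = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)         # two-sided p-value under Binomial(n, 0.5)

# Hypothetical adequacy judgements for two systems on five sentences.
print(sign_test([4, 3, 5, 2, 4], [3, 3, 4, 1, 2]))
```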
It is probably the first analysis of Arabic parsing of this kind.
0
A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set.8 But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
They have made use of local and global features to deal with the instances of the same token in a document.
0
The zone to which a token belongs is used as a feature.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
This assumption, however, is not inherent to type-based tagging models.
0
The table shows that the lexicon tag frequencies predicted by our full model are the closest to the gold standard.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
This style of naming is never required and seems to be losing currency.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The ability to redistribute belief values across sets rather than individual hypotheses is key.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure a...
0
Semilinearity and the closely related constant growth property (a consequence of semilinearity) have been discussed in the context of grammars for natural languages by Joshi (1983/85) and Berwick and Weinberg (1984).
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
(2009).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.
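A tiny sketch of that one-hot encoding (the function is illustrative, not the authors' code; only the feature names come from the text):

```python
ZONES = ("TXT", "HL", "DATELINE", "DD")

def zone_features(token_zone):
    """Exactly one of the four zone indicators fires for a token."""
    return {f"zone-{z}": int(z == token_zone) for z in ZONES}

print(zone_features("HL"))
# {'zone-TXT': 0, 'zone-HL': 1, 'zone-DATELINE': 0, 'zone-DD': 0}
```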
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Evalb is a Java re-implementation of the standard labeled precision/recall metric.12 The ATB gives all punctuation a single tag.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In many cases these failures in recall would be fixed by having better estimates of the actual prob­ abilities of single-hanzi words, since our estimates are often inflated.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
It is probably the first analysis of Arabic parsing of this kind.
0
All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
(2010) reports the best unsupervised results for English.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ.
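A minimal sketch of that generative step, using toy transition and emission tables rather than the paper's learned parameters:

```python
import random

def generate(transitions, emissions, length=5, start="<s>"):
    """Draw a tag from the previous tag's transition distribution, then a word
    from that tag's emission distribution, in the standard HMM fashion."""
    tags, words, prev = [], [], start
    for _ in range(length):
        tag = random.choices(*zip(*transitions[prev].items()))[0]
        word = random.choices(*zip(*emissions[tag].items()))[0]
        tags.append(tag)
        words.append(word)
        prev = tag
    return tags, words

transitions = {"<s>": {"DT": 0.7, "NN": 0.3}, "DT": {"NN": 1.0}, "NN": {"DT": 0.6, "NN": 0.4}}
emissions = {"DT": {"the": 0.8, "a": 0.2}, "NN": {"dog": 0.5, "park": 0.5}}
print(generate(transitions, emissions))
```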
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.
A beam search concept is applied as in speech recognition.
0
In.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding arc aj leaving Si, the cost on aj is the bigram cost of WiWj- (Costs for unseen...
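A small sketch of that construction (the bigram table below is hypothetical): one state per word, and every outgoing arc carries the negative log probability of the corresponding bigram as its cost.

```python
from math import log

def build_bigram_arcs(bigram_probs):
    """bigram_probs[wi][wj] = P(wj | wi); returns arcs[wi] = [(wj, cost)]."""
    return {wi: [(wj, -log(p)) for wj, p in nexts.items()]
            for wi, nexts in bigram_probs.items()}

arcs = build_bigram_arcs({"the": {"cat": 0.6, "dog": 0.4}})
print(arcs["the"])   # [('cat', 0.51...), ('dog', 0.91...)]
```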
The texts were annotated with the RSTtool.
0
Not all the layers have been produced for all the texts yet.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
         Adequacy (rank)  Fluency (rank)  BLEU (rank)
upc-jmc  (1-7)            (1-8)           (1-6)
lcc      (1-6)            (1-7)           (1-4)
utd      (1-7)            (1-6)           (2-7)
upc-mr   (1-8)            (1-6)           (1-7)
nrc      (1-7)            (2-6)           (8)
ntt      (1-8)            (2-8)           (1-7)
cmu      (3-7)            (4-8)           (2-7)
rali     (5-8)            (3-9)           (3-7)
systran  (9)              (8-9)           (10)
upv      (10)             (10)            (9)
Spanish-English (In Domain)  Adequacy (rank)  Fluency (r...
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names.17 Wang, Li, and Chang also compare their performance with Chang et al.'s system.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes.
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Since each composition operation is linear and nonerasing, a bounded sequence of substrings associated with the resulting structure is obtained by combining the substrings in each of its arguments using only the concatenation operation, including each substring exactly once.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For example, in predicting if a word belongs to a word class, the outcome y is either true or false, and the history x refers to the surrounding context; a feature might be f(x, y) = 1 if y = true and the previous word is “the”, and 0 otherwise. The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).
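The following is a toy sketch of GIS for a conditional maximum-entropy model. It assumes, for simplicity, that every (history, outcome) pair activates exactly C features (in general a slack feature is added to enforce this); the data and feature functions below are illustrative only.

```python
from math import exp, log

def gis(data, feats, labels, C, iters=50):
    """data: list of (x, y) pairs; feats(x, y): list of active feature names."""
    lam, emp = {}, {}
    for x, y in data:                       # empirical feature expectations
        for f in feats(x, y):
            emp[f] = emp.get(f, 0.0) + 1.0
    for _ in range(iters):
        model = dict.fromkeys(emp, 1e-10)   # model expectations (smoothed)
        for x, _ in data:
            scores = {y: exp(sum(lam.get(f, 0.0) for f in feats(x, y))) for y in labels}
            z = sum(scores.values())
            for y in labels:
                for f in feats(x, y):
                    if f in model:
                        model[f] += scores[y] / z
        for f in emp:                       # GIS update
            lam[f] = lam.get(f, 0.0) + (1.0 / C) * log(emp[f] / model[f])
    return lam

# Toy usage: two active features per instance, so C = 2.
labels = (True, False)
feats = lambda x, y: [f"{x}&{y}", f"bias&{y}"]
weights = gis([("prev=the", True), ("prev=the", True), ("prev=of", False)], feats, labels, C=2)
```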
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
We therefore also normalized judgements on a per-sentence basis.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
30 16.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Gather phrases using keywords: the keyword with the top TF/ITF score is selected for each phrase.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Other good classes include JADE and GOLD; other bad classes are DEATH and RAT.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
In general, different modalities (“planned to buy”, “agreed to buy”, “bought”) were considered to express the same relationship within an extraction setting.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
This PP modifies another NP, whose head is a singular noun.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
We have tested the translation system on the Verbmobil task (Wahlster 1993).
They found replacing it with a ranked evaluation to be more suitable.
0
The other half was replaced by other participants, so we ended up with roughly the same number.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
First, the training data for the parser is projectivized by applying a minimal number of lifting operations (Kahane et al., 1998) and encoding information about these lifts in arc labels.
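A simplified sketch of that idea (not the authors' exact minimal-lift procedure): non-projective dependents are repeatedly lifted to their head's head, and each lift is recorded in the arc label so it can be undone after parsing.

```python
def is_projective(heads, dep):
    """An arc (head, dep) is projective if every token strictly between them
    is a descendant of the head.  heads[i] is the head of token i (0 = root)."""
    h = heads[dep]
    for k in range(min(h, dep) + 1, max(h, dep)):
        a = k
        while a not in (0, h):
            a = heads[a]
        if a != h:
            return False
    return True

def projectivize(heads, labels):
    heads, labels = list(heads), list(labels)
    changed = True
    while changed:
        changed = False
        for dep in range(1, len(heads)):
            if heads[dep] != 0 and not is_projective(heads, dep):
                labels[dep] += "|lifted"          # record the lift in the arc label
                heads[dep] = heads[heads[dep]]    # reattach to the grandparent
                changed = True
    return heads, labels

# Tokens 1-4 with a crossing arc (4 -> 2): lifting reattaches token 2 to token 3.
print(projectivize([0, 3, 4, 0, 3], ["_", "A", "B", "ROOT", "C"]))
# ([0, 3, 3, 0, 3], ['_', 'A', 'B|lifted', 'ROOT', 'C'])
```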
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
We train and test on the CoNLL-X training set.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Though we are not able to calculate their memory usage on our model, results reported in their paper suggest lower memory consumption than TRIE on large-scale models, at the expense of CPU time.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
This paper is based on work supported in part by DARPA through IBM.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Table 6: Example Translations for the Verbmobil task.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which bear on syntactic disambiguation.
0
(2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
We then gather all phrases with the same keyword.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The number of paths that can be dependent is bounded by the grammar (in fact the maximum cardinality of a tree set determines this bound).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The highestorder N-gram array omits backoff and the index, since these are not applicable.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Each unlabeled pair (x1,i, x2,i) is represented as an edge between nodes corresponding to x1,i and X2,i in the graph.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
ni2ya3 and @5:2 xilya3, respectively.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We also collapse unary chains withidentical basic categories like NP → NP.
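A small sketch of that transformation on trees written as (label, children) tuples; stripping functional tags to obtain the basic category is an assumption made here for illustration.

```python
def basic(label):
    return label.split("-")[0]               # NP-SBJ -> NP (illustrative)

def collapse_unary(tree):
    """Collapse unary chains whose basic categories are identical, e.g. NP -> NP."""
    label, children = tree
    if isinstance(children, str):             # preterminal: (POS, word)
        return tree
    while (len(children) == 1 and not isinstance(children[0][1], str)
           and basic(children[0][0]) == basic(label)):
        children = children[0][1]
    return (label, [collapse_unary(c) for c in children])

print(collapse_unary(("NP", [("NP", [("NN", "trade")])])))
# ('NP', [('NN', 'trade')])
```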
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
As noted in Section 1, our code finds the longest matching entry w_f^n for the query p(w_n | s(w_f^{n-1})). The probability p(w_n | w_f^{n-1}) is stored with w_f^n, and the backoffs are immediately accessible in the provided state s(w_f^{n-1}). When our code walks the data structure to find w_f^n, it visits w_n^n, w_{n-1}^n, ..., w_f^n.
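A minimal sketch of such a backoff query (plain Python dictionaries rather than KenLM's actual data structures): walk from the full context toward shorter suffixes, and charge the backoff weights of every context that failed to match.

```python
def query(probs, backoffs, context, word):
    """probs/backoffs map n-gram tuples to log10 values; context is a word tuple."""
    for start in range(len(context) + 1):
        ngram = tuple(context[start:]) + (word,)
        if ngram in probs:
            # add backoff weights of the longer contexts that had no entry
            penalty = sum(backoffs.get(tuple(context[s:]), 0.0) for s in range(start))
            return probs[ngram] + penalty
    return float("-inf")                      # no unigram entry for the word

probs = {("cat",): -2.0, ("the", "cat"): -0.5}
backoffs = {("the",): -0.3}                   # unseen contexts back off with weight 0
print(query(probs, backoffs, ("big", "the"), "cat"))   # -0.5
```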
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.
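A tiny sketch of a linear probing table (illustrative, not KenLM's C++ implementation): because the table is sized with more buckets than entries, a lookup that misses always terminates at an empty bucket.

```python
class ProbingTable:
    def __init__(self, n_entries, load_factor=0.67):
        self.size = int(n_entries / load_factor) + 1   # more buckets than entries
        self.keys = [None] * self.size
        self.vals = [None] * self.size

    def _slot(self, key):
        i = hash(key) % self.size
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.size        # walk forward until the key or an empty bucket
        return i

    def put(self, key, value):
        i = self._slot(key)
        self.keys[i], self.vals[i] = key, value

    def get(self, key):
        return self.vals[self._slot(key)]  # None if the probe ends on an empty bucket

t = ProbingTable(3)
t.put(("the", "cat"), -0.5)
print(t.get(("the", "cat")))               # -0.5
```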
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that “report” things also “add” and “state” things; crimes that are “perpetrated” are often later “condemned”.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This intuition is borne out by the experimental results.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.
There are clustering approaches that assign a single POS tag to each word type.
0
While Berg-Kirkpatrick et al.
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be and whether they belong to general language or not.
0
In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.
Two general approaches are presented and two combination techniques are described for each approach.
0
There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
02 99.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation.
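A small sketch of that normalization step (the feature vectors below are hypothetical): each feature is z-scored using the mean and standard deviation computed over the data.

```python
def standardize(vectors):
    """Subtract each feature's mean and divide by its standard deviation."""
    n, dims = len(vectors), len(vectors[0])
    means = [sum(v[d] for v in vectors) / n for d in range(dims)]
    stds = [((sum((v[d] - means[d]) ** 2 for v in vectors) / n) ** 0.5) or 1.0  # guard zero variance
            for d in range(dims)]
    return [[(v[d] - means[d]) / stds[d] for d in range(dims)] for v in vectors]

print(standardize([[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]]))
```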
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We consider two variants of Berg-Kirkpatrick et al.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
word => name 2.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
3 60.7 50.
They plan on extending instance-weighting to other standard SMT components and on capturing the degree of generality of phrase pairs.
0
An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Given counts c_1^n, where e.g. c_1 is the vocabulary size, total memory consumption, in bits, is … Our PROBING data structure places all n-grams of the same order into a single giant hash table.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between ε and part-of-speech labels.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For each case- frame, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
({1, …, m} \ {l1, l2}, l) → ({1, …, m − 1} \ {l1, l2, l3}, l′)
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Model Hyperparam. English 1 1 m-1 Danish 1 1 m-1 Dutch 1 1 m-1 German 1 1 m-1 Portuguese 1 1 m-1 Spanish 1 1 m-1 Swedish 1 1 m-1 1TW best median 45.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We start with noun features since written Arabic contains a very high proportion of NPs.
Two general approaches are presented and two combination techniques are described for each approach.
0
This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2.
This corpus has several advantages: it is annotated at different levels.
0
Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Finally, a Dempster-Shafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.
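A minimal sketch of Dempster's rule of combination over sets of candidate antecedents (the masses below are invented): evidence assigned to incompatible intersections is discarded and the remainder renormalized, which is how belief is redistributed across sets rather than individual hypotheses.

```python
def combine(m1, m2):
    """m1, m2: dicts mapping frozensets of candidates to mass (each summing to 1)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb           # mass assigned to contradictory evidence
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

m1 = {frozenset({"men", "victims"}): 0.6, frozenset({"victims"}): 0.4}
m2 = {frozenset({"victims"}): 0.7, frozenset({"men"}): 0.3}
print(combine(m1, m2))
```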
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.
They have made use of local and global features to deal with the instances of same token in a document.
0
Each feature group can be made up of many binary features.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.
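As a simplified stand-in for that naïve Bayes formulation, here is a sketch of parse hybridization by constituent voting: a constituent enters the hypothesized set only if more than half of the parsers propose it (the toy parses below are invented).

```python
from collections import Counter

def hybridize(parses):
    """parses: list of sets of constituents, each a (label, start, end) triple."""
    votes = Counter(c for parse in parses for c in parse)
    return {c for c, v in votes.items() if v > len(parses) / 2.0}

p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 1), ("VP", 2, 5), ("S", 0, 5)}
print(hybridize([p1, p2, p3]))   # keeps ('NP', 0, 2), ('VP', 2, 5), ('S', 0, 5)
```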
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
0750271 and by the DARPA GALE program.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
In (b) “they” refers to the kidnapping victims, but in (c) “they” refers to the armed men.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
By considering derivation trees, and thus abstracting away from the details of the composition operation and the structures being manipulated, we are able to state the similarities and differences between the ...