id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
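The fields above describe a sentence-pair dataset: each record pairs two statements extracted from scientific papers and labels the relation between them (the rows shown here all carry the label "contrasting"). As a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face datasets library, assuming the rows are published under some repository id (the id "example/contrasting-pairs" below is a placeholder, not the actual location of this data):

    # Minimal sketch: load a sentence-pair dataset with the schema above.
    # "example/contrasting-pairs" is a hypothetical repository id.
    from collections import Counter
    from datasets import load_dataset

    ds = load_dataset("example/contrasting-pairs", split="train")

    # Each record has the fields: id, sentence1, sentence2, label.
    for record in ds.select(range(3)):
        print(record["id"], "-", record["label"])
        print("  sentence1:", record["sentence1"][:80])
        print("  sentence2:", record["sentence2"][:80])

    # Check the class balance across the label values.
    print(Counter(ds["label"]))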
train_0
They also adopted sophisticated plan-based dialogue models at the initial stage of the project.
the trend changed rather drastically in the early 90s and most research groups with practical applications in mind gave up such strategies and switched to more corpus-oriented and statistical methods.
contrasting
train_1
Since the grammar overgenerates, we have to choose single parse results among a combinatorially large number of possible parses.
an experiment shows that a statistical method using ME (we use the program for ME developed by NYU) can select around 88.6% of correct analyses in terms of dependency relationships.
contrasting
train_2
XHPSG fails to produce parses covering the whole sentence for about half of the sentences.
in applications such as IE, a system need not have parses covering the whole sentence.
contrasting
train_3
This apparent progress in spoken language technology has been fuelled by a number of developments: the relentless increase in desktop computing power, the introduction of statistical modelling techniques, the availability of vast quantities of recorded speech material, and the institution of public system evaluations.
our understanding of the fundamental patterning in speech has progressed at a much slower pace, not least in the area of its high-level linguistic properties.
contrasting
train_4
The print data was much cleaner than the transcribed broadcast data in the sense that there were very few typographical errors, and spelling and grammar were good.
the print data also had longer, more complex sentences with somewhat greater variety in the words used to represent dates.
contrasting
train_5
They obtained roughly .91 Precision and .80 Recall on one test set, and .87 Precision and .68 Recall on another.
they adjust the reference time during processing, which is something that we have not yet addressed.
contrasting
train_6
(The MUC task required recognizing a wider variety of TIMEXs, including event-dependent ones.
at least 30% of the dates and times in the MUC test were fixed-format ones occurring in document headers, trailers, and copyright notices. )
contrasting
train_7
To take an example from the BNC, the adjectives large and green never occur together in the training data, and so would be assigned a random order by the direct evidence method.
the pairs large, new and new, green occur fairly frequently.
contrasting
train_8
The two highest scoring methods, using memorybased learning and positional probability, perform similarly, and from the point of view of accuracy there is little to recommend one method over the other.
it is interesting to note that the errors made by the two methods do not completely overlap: while either of the methods gives the right answer for about 89% of the test data, one of the two is right 95.00% of the time.
contrasting
train_9
Spoken dialogue managers have benefited from using stochastic planners such as Markov Decision Processes (MDPs).
so far, MDPs do not handle well noisy and ambiguous speech utterances.
contrasting
train_10
There are several POMDP algorithms that may be the natural choice for policy generation (Sondik, 1971;Monahan, 1982;Parr and Russell, 1995;Cassandra et al., 1997;Kaelbling et al., 1998;Thrun, 1999).
solving real world dialogue scenarios is computationally intractable for full-blown POMDP solvers, as the complexity is doubly exponential in the number of states.
contrasting
train_11
Most other actions are penalised with an equivalent negative amount.
the confirmation/clarification actions are penalised lightly (values close to 0), and the motion commands are penalised heavily if taken from the wrong state, to illustrate the difference between an undesirable action that is merely irritating (i.e., giving an inappropriate response) and an action that can be much more costly (e.g., having the robot leave the room at the wrong time, or travel to the wrong destination).
contrasting
train_12
Consequently, these algorithms collect training data for the and performance is not affected.
unsupervised methods break down on such examples.
contrasting
train_13
#NAME?
evaluations based on judgements along these dimensions are clearly weaker than evaluations measuring actual behavioural and attitudinal changes (Olso and Zanna 1991).
contrasting
train_14
The experiment results show that arguments generated at the more concise level are significantly better than arguments generated at the more verbose level.
further experiments are needed to determine what is the optimal level of conciseness.
contrasting
train_15
Electronically readable Arabic text has only recently become available on a useful scale, hence our experiments were run on short texts.
the coverage of the data sets allows us to verify our experiments on demanding samples, and their size lets us verify correct clustering manually.
contrasting
train_16
Finer grained evaluation of cluster quality would be needed in an IR context.
our main concern is comparing algorithms.
contrasting
train_17
Unfortunately, this approach is not applicable to multivariate data with more than two dimensions.
we consider syllables to consist of at least three dimensions corresponding to parts of the internal syllable structure: onset, nucleus and coda.
contrasting
train_18
For many applications, people have devoted considerable energy to improving both components, with resulting improvements in overall system accuracy.
relatively little research has gone into improving the channel model for spelling correction.
contrasting
train_19
For instance, P(e | a) does not vary greatly between the three positions mentioned above.
p(ent | ant) is highly dependent upon position.
contrasting
train_20
Problems with evaluation: Some of the statistics that we presented in the previous discussion suggest that this relatively simple statistical summarization system is not very good compared to some of the extraction based summarization systems that have been presented elsewhere (e.g., (Radev and Mani, 1997)).
it is worth emphasizing that many of the headlines generated by the system were quite good, but were penalized because our evaluation metric was based on the word-error rate and the generated headline terms did not exactly match the original ones.
contrasting
train_21
Pereira's (Pereira, 1985) algorithm also stores changes to nodes separate from the graph.
pereira's mechanism incurs a log(n) overhead for accessing the changes (where n is the number of nodes in a graph), resulting in an O(n log n) time algorithm.
contrasting
train_22
Every solution of a constraint is a solution of one of its irredundant solved forms.
the number of irredundant solved forms is always finite, whereas the number of solutions typically is not: X:a ∧ Y :b is in solved form, but each solution must contain an additional node with arbitrary label that combines X and Y into a tree (e.g.
contrasting
train_23
Melamed (1999) aligns texts using correspondence points taken either from orthographic cognates (Michel Simard et al., 1992) or from a seed translation lexicon.
although the heuristics both approaches use to filter noisy points may be intuitively quite acceptable, they are not theoretically supported by Statistics.
contrasting
train_24
In cases where TAG models dependencies correctly, the use of R-MCTAG is straightforward: when an auxiliary tree adjoins at a site pair which is just a single node, it looks just like conventional adjunction.
in problematic cases we can use the extra expressive power of R-MCTAG to model dependencies correctly.
contrasting
train_25
DSG can generate the language count-k for some arbitrary k, that is, {a_1^n a_2^n ... a_k^n}, which makes it extremely powerful, whereas R-MCTAG can only generate count-4.
dSG cannot generate the copy language, that is, {ww | w ∈ Σ*} with Σ some terminal alphabet, whereas R-MCTAG can; this may be problematic for a formalism modeling natural language, given the key role of the copy language in demonstrating that natural language is not context-free (Shieber, 1985).
contrasting
train_26
Effectively, a dependency structure is made parasitic on the phrase structure so that they can be generated together by a context-free model.
this solution is not ideal.
contrasting
train_27
Clearly, a direct comparison of these results to state-of-the-art statistical parsers cannot be made because of different training and test data and other evaluation measures.
we would like to draw the following conclusions from our experiments: The problem of chaotic convergence behaviour of EM estimation can be solved for log-linear models.
contrasting
train_28
For data consisting of unannotated sentences, so-called incomplete data, the iterative method of the EM algorithm (Dempster et al., 1977) has to be employed.
since even complete-data estimation for log-linear models requires iterative methods, an application of EM to log-linear models results in an algorithm which is expensive since it is doubly-iterative.
contrasting
train_29
Thus a precise indication of correct c/f-structure pairs was possible.
the average ambiguity of this corpus is only 5.4 parses per sentence, for sentences with on average 7.5 words.
contrasting
train_30
In recent years, statistical approaches on ATR (Automatic Term Recognition) (Bourigault, 1992;Dagan et al, 1994;Justeson and Katz, 1995;Frantzi, 1999) have achieved good results.
there is scope to improve the performance of term extraction still further.
contrasting
train_31
Accuracy is somewhat similar to the familiar metric of precision in that it is calculated over cases for which a decision is made, and performance is similar to recall in that it is calculated over all true frame elements.
unlike a traditional precision recall trade-off, these results have no threshold to adjust, and the task is a multi-way classification rather than a binary decision.
contrasting
train_32
It would seem reasonable to think that resolution of pronominal anaphora would only be accomplished when the ratio of pronominal occurrence exceeds a minimum level.
we have to take into account that the cost of solving these references is proportional to the number of pronouns analysed and consequently, proportional to the amount of information a system will ignore if these references are not solved.
contrasting
train_33
Using this approach, we have developed a grammar that understands declarative sentences (Shavitri, 1999).
our experience shows that we need more detailed word categories than are currently available in the standard Indonesian word dictionary (KBBI) before the grammar can be used effectively.
contrasting
train_34
Practical research on continuous speech recognition is at its initial stage, with at least one published paper [15].
to western speech recognition, topics specifying tonal languages or tone recognition have been deeply researched as seen in many papers e.g., [16].
contrasting
train_35
Thus, SCFGs have been successfully used on limited-domain tasks of low perplexity.
sCFGs work poorly for large vocabulary, general-purpose tasks, because the parameter learning and the computation of word transition probabilities present serious problems for complex real tasks.
contrasting
train_36
The initial probabilities were randomly generated and three different seeds were tested.
only one of them is reported here, given that the results were very similar.
contrasting
train_37
At the endpoint of each training run in the graph, the same number of samples has been annotated for training.
we see that the larger the pool of candidate instances for annotation is, the better the resulting accuracy.
contrasting
train_38
Variable occurrences or more generally strings in £ i 7 x p x @ r © 8 s can be instantiated to ranges.
an occurrence of the terminal y can be instantiated to the range .
contrasting
train_39
With a wide coverage English TAG, on a small sample set of short sentences, a guided parser is on the average three times faster than its non-guided counterpart, while, for longer sentences, more than one order of magnitude may be expected.
the guided parser speed is very sensitive to the level of the guide, which must be chosen very carefully since potential benefits may be overcome by the time taken by the guiding structure book-keeping procedures.
contrasting
train_40
The global parse time for TAGs might also be further improved using the transformation described in (Boullier, 1999) which, starting from any TAG, constructs an equivalent RCG that can be parsed in .
this improvement is not definite, since, on typical input sentences, the increase in size of the resulting grammar may well ruin the expected practical benefits, as in the case of the ¥ » -guiding parser processing short sentences.
contrasting
train_41
A novel feature of our approach is the ability to extract multiple kinds of paraphrases: Identification of lexical paraphrases.
to earlier work on similarity, our approach allows identification of multi-word paraphrases, in addition to single words, a challenging issue for corpus-based techniques.
contrasting
train_42
Statistical techniques were also successfully used by (Lapata, 2001) to identify paraphrases of adjective-noun phrases.
our method is not limited to a particular paraphrase type.
contrasting
train_43
This is, of course, not exactly what the original query meant.
it is superior to queries like ''browsers'' AND NOT ''netscape'' which rejects all pages containing Netscape, even if they also contain other browsers.
contrasting
train_44
I then show that alternative phrases appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing alternative phrases.
by approximating my semantic analysis into a form understood by a natural language search engine, I showed that the performance of that search engine improved dramatically.
contrasting
train_45
The only other model that uses frontier lexicalization and that was tested on the standard WSJ split is Chiang (2000) who extracts a stochastic tree-insertion grammar or STIG (Schabes & Waters 1996) from the WSJ, obtaining 86.6% LP and 86.9% LR for sentences ≤ 40 words.
chiang's approach is limited in at least two respects.
contrasting
train_46
They are based on dominance constraints (Marcus et al., 1983;Rambow et al., 1995) and extend them with parallelism (Erk and Niehren, 2000) and binding constraints.
lifting β-reduction to an operation on underspecified descriptions is not trivial, and to our knowledge it is not known how this can be done.
contrasting
train_47
The feature independence assumption of memory-based learning appears to be the harming cause: by its definition, IB1-IG does not give extra weight to apparently relevant interactions of feature values from different sources.
in nine out of the twelve rules that RIPPER produces, word graph features and system questions type features are explicitly integrated as joint left-hand side conditions.
contrasting
train_48
Specifically, we do not make use of the possibilities offered by the interleaving of the RE and LC, as the examples we cover are too simple.
this setup enables RE, in principle, to make use of information about precisely how a previous reference to an entity has been realised.
contrasting
train_49
At the start of turns, posture shift duration is approximately the same whether a new topic is introduced or not (2.5 seconds).
when ending a turn, speakers move significantly longer (7.0 seconds) when finishing a topic than when the topic is continued by the other interlocutor (2.7 seconds) (F(1,148)=17.9; p<0.001).
contrasting
train_50
In our framework, however, every is the following (using qeq conditions, as in the LinGO ERG): So these compose via op spec to yield every dog: A slight complication is that the determiner is also syntactically selected by the N via the SPR slot (following Pollard and Sag (1994)).
from the standpoint of the compositional semantics, the determiner is the semantic head, and it is only its SPEC hole which is involved: the N must be treated as having an empty SPR hole.
contrasting
train_51
While these transforms do reduce the size of the grammar, and modestly reduce the level of ambiguity from 1.96 to 1.92, they did not initially appear to improve recognition performance.
that was with the nuance parameter -node array optimization level set to the default value FULL.
contrasting
train_52
Their precision values are not significantly different from the baseline obtained by random selection.
to our expectation stated at the beginning of this section, the performance of ¢ ¡ and £ ¥ ¤ relative to the other AMs is not better for high-frequency data than for low-frequency data.
contrasting
train_53
On the one hand, their method is expected to enhance existing encyclopedias, where vocabulary size is relatively limited, and therefore the quantity problem has been resolved.
encyclopedias extracted from the Web are not comparable with existing ones in terms of quality.
contrasting
train_54
In hand-crafted encyclopedias, term descriptions are carefully organized based on domains and word senses, which are especially effective for human usage.
the output of Fujii's method is simply a set of unorganized term descriptions.
contrasting
train_55
For this purpose, the use of large-scale corpora annotated with domains is desirable.
since those resources are prohibitively expensive, we used the "Nova" dictionary for Japanese/English machine translation systems, which includes approximately one million entries related to 19 technical fields as listed below: aeronautics, biotechnology, business, chemistry, computers, construction, defense, ecology, electricity, energy, finance, law, mathematics, mechanics, medicine, metals, oceanography, plants, trade.
contrasting
train_56
For this purpose, a number of quality rating methods for Web pages (Amento et al., 2000;Zhu and Gauch, 2000) can be used.
since Google (i.e., the search engine used in our system) rates the quality of pages based on hyperlink information, and selectively retrieves those with higher quality (Brin and Page, 1998), we tentatively regarded P Q (d) as a constant.
contrasting
train_57
Second, P (d), which is a product of probabilities for N -grams in d, is quite sensitive to the length of d. In the cases of machine translation and speech recognition, this problem is less crucial because multiple candidates compared based on the language model are almost equivalent in terms of length.
since in our case the lengths of descriptions differ significantly, shorter descriptions are more likely to be selected, regardless of the quality.
contrasting
train_58
• Some non-verbal dependents, such as separable verbal prefixes (for example the an of anfangen 'begin'), predicative adjectives, and nouns governed by a copular verb or a support verb, can go into the right bracket (the prefix even forms one word with its following governor).
to verbs, these elements do not usually open up a new position for their dependents, which consequently have to be placed somewhere else.
contrasting
train_59
It would also be interesting to attempt to describe other languages in this formalism, configurational languages such as English or French, as well as languages such as Russian where the surface order is mainly determined by the communicative structure.
german is an especially interesting case because surface order depends strongly on both the syntactic position (e.g.
contrasting
train_60
Even choosing among the responses in (5) might be a pretty knowledge intensive business.
there are some clear strategies that might be pursued.
contrasting
train_61
For example, Levin+ class 9.4 has three possible Word-Net senses for drop.
the WordNet sense 8 is not associated with any of the other classes; thus, it is considered to have a higher "information content" than the others.
contrasting
train_62
This does not mean, however, that it must be wired into the computational model.
a computational model based on a small set of primitives that combine via simple composition rules will be more flexible in practice and easier to implement.
contrasting
train_63
On the contrary, a computational model based on a small set of primitives that combine via simple composition rules will be more flexible in practice and easier to implement.
in the type-logical approach, the syntactic contents of a lexical entry is outlined by the following pattern: <atom> : <syntactic category> the semantic contents obeys the following scheme: <λ-term> : <semantic type> This asymmetry may be broken by: 1. allowing λ-terms on the syntactic side (atomic expressions being, after all, particular cases of λ-terms), 2. using the same type theory for expressing both the syntactic categories and the semantic types.
contrasting
train_64
Any ACG generates two languages, an abstract language and an object language.
the abstract language generated by G (A(G )) is defined as follows: In words, the abstract language generated by G is the set of closed linear λ-terms, built upon the abstract vocabulary Σ 1 , whose type is the distinguished type s. The object language generated by G (O(G )) is defined to be the image of the abstract language by the term homomorphism induced by the lexicon L : It may be useful to think of the abstract language as a set of abstract grammatical structures, and of the object language as the set of concrete forms generated from these abstract structures.
contrasting
train_65
The object language of this second ACG is defined as follows: Then, a lexicon from Σ 1 to Σ 3 is defined: This allows the ACG G 13 to be defined as The abstract language shared by G 12 and G 13 contains the two following terms: The syntactic lexicon L 12 applied to each of these terms yields the same image.
it β-reduces to the following object term: the semantic lexicon L 13 yields the de re reading when applied to (2): and it yields the de dicto reading when applied to (3): Our handling of the two possible readings of (1) differs from the type-logical account of Morrill (1994) and Carpenter (1996).
contrasting
train_66
In particular, they all satisfy the first requirement.
the satisfaction of the second requirement is, in most of the cases, an open problem.
contrasting
train_67
Most of the current work on corpus annotation is concentrated on morphemics, lexical semantics and sentence structure.
it becomes more and more obvious that attention should and can be also paid to phenomena that reflect the links between a sentence and its context, i.e.
contrasting
train_68
Loop 1 was generated more often than any other loop.
the small overall average number of feedback loops that were carried out indicates that they add little overhead to the Q&A system.
contrasting
train_69
Features related to argument structure are not significantly correlated with VPE.
whether the two argument structures are identical is a factor approaching significance: in the two cases where they differ, no VPE happens (Q ).
contrasting
train_70
For this reason we use the machine learning system Ripper (Cohen, 1996).
before we can use Ripper, we must discuss the issue of how our new trainable VPE module fits into the architecture of generation.
contrasting
train_71
Finally, we extend this process so that the training procedure acts hierarchically on different portions of the messages at different times.
to the baseline flex system, the transducers that we induce are nondeterministic and stochastic -a given word sequence may align to multiple paths through the transducer.
contrasting
train_72
In the above examples, the last words were common nouns.
the last word can also be a proper noun.
contrasting
train_73
Thus, the two methods returned similar results.
we cannot expect good performance for other documents because CRL NE is limited to January, 1995.
contrasting
train_74
*:all-katakana:misc-proper-noun -> PERSON,0,0.
they are easy to understand as follows.
contrasting
train_75
Many statistical NLP tagging and parsing models are estimated by maximizing the (joint) likelihood of the fully-observed training data.
since these applications only require the conditional probability distributions, these distributions can in principle be learnt by maximizing the conditional likelihood of the training data.
contrasting
train_76
Applications such as language modelling for speech recognition and EM procedures for estimating from hidden data either explicitly or implicitly require marginal distributions over the visible data (i.e., word strings), so it is not statistically sound to use MCLEs for such applications.
applications which involve predicting the value of the hidden variable from the visible variable (such as tagging or parsing) usually only involve the conditional distribution, which the MCLE estimates directly.
contrasting
train_77
Thus one might expect the MLE to converge faster than the MCLE in situations where training data is not over-abundant, which is often the case in computational linguistics.
since the intended application requires a conditional distribution, it seems reasonable to directly estimate this conditional distribution from the training data as the MCLE does.
contrasting
train_78
It seems to be difficult to find model classes for which the MLE and MCLE are both easy to compute.
often it is possible to find two closely related model classes, one of which has an easily computed MLE and the other which has an easily computed MCLE.
contrasting
train_79
None of the models investigated here are state-of-the-art; the goal here is to compare two different estimation procedures, and for that reason this paper concentrated on simple, easily implemented models.
it would also be interesting to compare the performance of joint and conditional estimators on more sophisticated models.
contrasting
train_80
Finally, a shallow parser developed using these techniques will have to mirror the information contained in the training data.
for instance, if one trains such a tool on data where only non-recursive NP chunks are marked, then one will not be able to obtain richer information such as chunks of other categories, embeddings, syntactic functions... finite-state techniques rely on the development of a large set of rules (often based on regular expressions) to capture all the ways a constituent can expand.
contrasting
train_81
Since the parser is running in a garbage-collected environment, it is hard to distinguish required memory from utilized memory.
unlike time and traversals which in practice can diverge, memory requirements match the number of edges in the chart almost exactly, since the large data structures are all proportional in size to the number of edges .
contrasting
train_82
Given the highly unrestrictive nature of the treebank grammar, it is not very surprising that top-down filtering provides such little benefit.
this is a useful observation about real world parsing performance.
contrasting
train_83
is slightly larger for the NOTRANSFORM grammar, since the empty-reachable set is nonempty.
note that even for NOTRANSFORM, the largest SCC is smaller than the empty-reachable set, since empties provide direct entry into some of the lower SCCs, in particular because of WH-gaps.
contrasting
train_84
To reach a new category always requires the use of at least one overt word.
for spans of size 6 or so, enough words exist that the same high saturation effect will still be observed.
contrasting
train_85
Without unaries, the more gradual saturation growth increases the total exponent, more so for NOUNARIESLOW than NOUNARIESHIGH.
note that for spans around 8 and onward, the saturation curves are essentially constant for all settings.
contrasting
train_86
Given the large SCCs seen in section 4.1, phrasal categories, to a first approximation, might as well be wildcards, able to match any span, especially if empties are present.
the tags are, in comparison, very restricted.
contrasting
train_87
In the case of 'about', the tags 'in' (for: preposition) or 'rb' (for: adverb) would be appropriate.
since the POS tagger cannot resolve this ambiguity from local context, the underspecified tag 'about' is assigned, instead.
contrasting
train_88
However, since the POS tagger cannot resolve this ambiguity from local context, the underspecified tag 'about' is assigned, instead.
this can in turn lead to misclassification in the chunker.
contrasting
train_89
Postprocessing mainly consists of shortening the tree from the instance base so that it covers only those parts of the chunk that could be matched.
if the match is done on the lexical level, a correction of tagging errors is possible if there is enough evidence in the instance base.
contrasting
train_90
In the current version of the algorithm, generalization heavily relies on lexical and part-of-speech information.
a richer set of backing-off strategies that rely on larger domains of structure are easy to envisage and are likely to significantly improve recall performance.
contrasting
train_91
The correlation was significant both for frequencies recreated by smoothing over adjectives (r = .214, p < .05) and over nouns (r = .232, p < .05).
co-occurrence frequency recreated using the Jensen-Shannon divergence was not reliably correlated with plausibility.
contrasting
train_92
If W ij > 0, it means that if document x is related to T j , it may also have some contribution ( W ij ) to topic T i .
if W ij < 0, it means the two topics are negatively correlated, and a document x will not be related to both T j and T i .
contrasting
train_93
The Probability-based Translation MEMory (PTMEM) was created by associating with each French phrase the English equivalent that corresponded to the alignment of highest probability.
to other TMEMs, our TMEMs explicitly encode not only the mutual translation pairs but also their corresponding word-level alignments, which are derived according to a certain translation model (in our case, IBM model 4).
contrasting
train_94
Because of this, a statistical-based MT system will have trouble producing a translation that uses the phrase "kick the bucket", no matter what decoding technique it employs.
if the two phrases are stored in the TMEM, producing such a translation becomes feasible.
contrasting
train_95
Tagging all sentences for the correct structures, however, is an intractable task for a human coder.
while it is feasible to have this information collected computationally through our parser, we are still faced with the problem of competing parses for many sentences.
contrasting
train_96
The probability of a rule is inferred by an iterative training procedure with an extended version of the inside-outside algorithm.
only those analyses are considered that meet the tagged brackets (here syllable brackets).
contrasting
train_97
The next group of systems were the two rulebased systems, ICF and RBS, which were not statistically different from one another.
sPOT was statistically better than both of these systems (p < .01).
contrasting
train_98
This is also somewhat to be expected, since the baseline systems were intended to be the simplest systems constructable.
it would have been a possible outcome for SPOT to not be different than either system, e.g.
contrasting
train_99
There is increasing interest in techniques for evaluating Natural Language Generation (NLG) systems.
we are not aware of any previously reported evaluations of NLG systems which have rigorously compared the task effectiveness of an NLG system to a non-NLG alternative.
contrasting