Columns (viewer summary): Unnamed: 0 (int64, 0 to 3.22k; identical to id in every row shown), text (string, lengths 49 to 577 characters, with [[ ]] and << >> bracketing the two marked entity spans), id (int64, 0 to 3.22k), label (int64, 0 to 6).
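As a minimal sketch of how one might read a row of this listing, assuming the integer label (0 to 6) is the class assigned to the two bracketed spans, the Python below pulls the [[ ]] and << >> arguments out of the text field. The parse_row helper and the arg1/arg2 keys are illustrative names, not part of the dataset.

    import re

    # Hypothetical helper: extract the two marked spans from a row's text.
    # [[ ... ]] and << ... >> bracket the two entities; the label-name
    # mapping for the 0-6 classes is not given in this listing.
    def parse_row(text: str, label: int) -> dict:
        bracket = re.search(r"\[\[(.+?)\]\]", text)
        angle = re.search(r"<<(.+?)>>", text)
        return {
            "arg1": bracket.group(1).strip() if bracket else None,
            "arg2": angle.group(1).strip() if angle else None,
            "label": label,
        }

    # Example with row 400 from the listing below.
    row_text = ("We present the computational model for POS learning , and "
                "present results for applying << it >> to [[ Bulgarian ]] , a "
                "Slavic language with relatively free word order and rich "
                "morphology .")
    print(parse_row(row_text, 3))
    # {'arg1': 'Bulgarian', 'arg2': 'it', 'label': 3}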
id | label | text
400 | 3 | We present the computational model for POS learning , and present results for applying << it >> to [[ Bulgarian ]] , a Slavic language with relatively free word order and rich morphology .
401 | 2 | We present the computational model for POS learning , and present results for applying it to [[ Bulgarian ]] , a << Slavic language >> with relatively free word order and rich morphology .
402 | 1 | We present the computational model for POS learning , and present results for applying it to << Bulgarian >> , a Slavic language with relatively [[ free word order ]] and rich morphology .
403 | 0 | We present the computational model for POS learning , and present results for applying it to Bulgarian , a Slavic language with relatively [[ free word order ]] and << rich morphology >> .
404 | 1 | We present the computational model for POS learning , and present results for applying it to << Bulgarian >> , a Slavic language with relatively free word order and [[ rich morphology ]] .
405 | 3 | In << MT >> , the widely used approach is to apply a [[ Chinese word segmenter ]] trained from manually annotated data , using a fixed lexicon .
406 | 3 | In MT , the widely used approach is to apply a << Chinese word segmenter >> trained from [[ manually annotated data ]] , using a fixed lexicon .
407 | 3 | Such [[ word segmentation ]] is not necessarily optimal for << translation >> .
408 | 3 | We propose a [[ Bayesian semi-supervised Chinese word segmentation model ]] which uses both monolingual and bilingual information to derive a << segmentation >> suitable for MT .
409 | 3 | We propose a << Bayesian semi-supervised Chinese word segmentation model >> which uses both [[ monolingual and bilingual information ]] to derive a segmentation suitable for MT .
410 | 3 | We propose a Bayesian semi-supervised Chinese word segmentation model which uses both monolingual and bilingual information to derive a [[ segmentation ]] suitable for << MT >> .
411 | 5 | Experiments show that our [[ method ]] improves a state-of-the-art << MT system >> in a small and a large data environment .
412 | 3 | In this paper we compare two competing [[ approaches ]] to << part-of-speech tagging >> , statistical and constraint-based disambiguation , using French as our test language .
413 | 3 | In this paper we compare two competing << approaches >> to part-of-speech tagging , statistical and constraint-based disambiguation , using [[ French ]] as our test language .
414 | 5 | We imposed a time limit on our experiment : the amount of time spent on the design of our [[ constraint system ]] was about the same as the time we used to train and test the easy-to-implement << statistical model >> .
415 | 6 | The [[ accuracy ]] of the << statistical method >> is reasonably good , comparable to taggers for English .
416 | 6 | The [[ accuracy ]] of the statistical method is reasonably good , comparable to << taggers >> for English .
417 | 5 | The accuracy of the [[ statistical method ]] is reasonably good , comparable to << taggers >> for English .
418 | 3 | The accuracy of the statistical method is reasonably good , comparable to [[ taggers ]] for << English >> .
419 | 3 | [[ Structured-light methods ]] actively generate << geometric correspondence data >> between projectors and cameras in order to facilitate robust 3D reconstruction .
420 | 3 | Structured-light methods actively generate [[ geometric correspondence data ]] between projectors and cameras in order to facilitate << robust 3D reconstruction >> .
421 | 4 | In this paper , we present << Photogeometric Structured Light >> whereby a standard [[ structured light method ]] is extended to include photometric methods .
422 | 4 | In this paper , we present << Photogeometric Structured Light >> whereby a standard structured light method is extended to include [[ photometric methods ]] .
423 | 3 | [[ Photometric processing ]] serves the double purpose of increasing the amount of << recovered surface detail >> and of enabling the structured-light setup to be robustly self-calibrated .
424 | 3 | [[ Photometric processing ]] serves the double purpose of increasing the amount of recovered surface detail and of enabling the << structured-light setup >> to be robustly self-calibrated .
425 | 3 | Further , our << framework >> uses a [[ photogeometric optimization ]] that supports the simultaneous use of multiple cameras and projectors and yields a single and accurate multi-view 3D model which best complies with photometric and geometric data .
426 | 3 | Further , our framework uses a photogeometric optimization that supports the simultaneous use of multiple cameras and projectors and yields a single and accurate << multi-view 3D model >> which best complies with [[ photometric and geometric data ]] .
427 | 3 | In this paper , a discrimination and robustness oriented [[ adaptive learning procedure ]] is proposed to deal with the task of << syntactic ambiguity resolution >> .
428 | 0 | Owing to the problem of [[ insufficient training data ]] and << approximation error >> introduced by the language model , traditional statistical approaches , which resolve ambiguities by indirectly and implicitly using maximum likelihood method , fail to achieve high performance in real applications .
429 | 3 | Owing to the problem of insufficient training data and approximation error introduced by the language model , traditional [[ statistical approaches ]] , which resolve << ambiguities >> by indirectly and implicitly using maximum likelihood method , fail to achieve high performance in real applications .
430 | 3 | Owing to the problem of insufficient training data and approximation error introduced by the language model , traditional << statistical approaches >> , which resolve ambiguities by indirectly and implicitly using [[ maximum likelihood method ]] , fail to achieve high performance in real applications .
431 | 6 | The [[ accuracy rate ]] of << syntactic disambiguation >> is raised from 46.0 % to 60.62 % by using this novel approach .
432 | 6 | The accuracy rate of [[ syntactic disambiguation ]] is raised from 46.0 % to 60.62 % by using this novel << approach >> .
433 | 3 | This paper presents a new [[ approach ]] to << statistical sentence generation >> in which alternative phrases are represented as packed sets of trees , or forests , and then ranked statistically to choose the best one .
434 | 3 | [[ It ]] also facilitates more efficient << statistical ranking >> than a previous approach to statistical generation .
435 | 5 | [[ It ]] also facilitates more efficient statistical ranking than a previous << approach >> to statistical generation .
436 | 3 | It also facilitates more efficient statistical ranking than a previous [[ approach ]] to << statistical generation >> .
437 | 5 | An efficient [[ ranking algorithm ]] is described , together with experimental results showing significant improvements over simple << enumeration >> or a lattice-based approach .
438 | 5 | An efficient [[ ranking algorithm ]] is described , together with experimental results showing significant improvements over simple enumeration or a << lattice-based approach >> .
439 | 0 | An efficient ranking algorithm is described , together with experimental results showing significant improvements over simple [[ enumeration ]] or a << lattice-based approach >> .
440 | 3 | This article deals with the interpretation of conceptual operations underlying the communicative use of [[ natural language -LRB- NL -RRB- ]] within the << Structured Inheritance Network -LRB- SI-Nets -RRB- paradigm >> .
441 | 3 | The operations are reduced to functions of a formal language , thus changing the level of abstraction of the [[ operations ]] to be performed on << SI-Nets >> .
442 | 3 | In this sense , [[ operations ]] on << SI-Nets >> are not merely isomorphic to single epistemological objects , but can be viewed as a simulation of processes on a different level , that pertaining to the conceptual system of NL .
443 | 3 | In this sense , operations on SI-Nets are not merely isomorphic to single epistemological objects , but can be viewed as a simulation of processes on a different level , that pertaining to the << conceptual system >> of [[ NL ]] .
444 | 5 | For this purpose , we have designed a version of [[ KL-ONE ]] which represents the epistemological level , while the new experimental language , << KL-Conc >> , represents the conceptual level .
445 | 1 | For this purpose , we have designed a version of << KL-ONE >> which represents the [[ epistemological level ]] , while the new experimental language , KL-Conc , represents the conceptual level .
446 | 1 | For this purpose , we have designed a version of KL-ONE which represents the epistemological level , while the new experimental language , << KL-Conc >> , represents the [[ conceptual level ]] .
447 | 3 | We present an [[ algorithm ]] for << calibrated camera relative pose estimation >> from lines .
448 | 3 | We evaluate the performance of the << algorithm >> using [[ synthetic and real data ]] .
449 | 0 | The intended use of the [[ algorithm ]] is with robust << hypothesize-and-test frameworks >> such as RANSAC .
450 | 2 | The intended use of the algorithm is with robust << hypothesize-and-test frameworks >> such as [[ RANSAC ]] .
451 | 3 | Our [[ approach ]] is suitable for << urban and indoor environments >> where most lines are either parallel or orthogonal to each other .
452 | 3 | In this paper , we present a [[ fully automated extraction system ]] , named IntEx , to identify << gene and protein interactions >> in biomedical text .
453 | 2 | In this paper , we present a << fully automated extraction system >> , named [[ IntEx ]] , to identify gene and protein interactions in biomedical text .
454 | 3 | In this paper , we present a fully automated extraction system , named IntEx , to identify << gene and protein interactions >> in [[ biomedical text ]] .
455 | 3 | Then , tagging << biological entities >> with the help of [[ biomedical and linguistic ontologies ]] .
456 | 3 | Our [[ extraction system ]] handles complex sentences and extracts << multiple and nested interactions >> specified in a sentence .
457 | 5 | Experimental evaluations with two other state of the art << extraction systems >> indicate that the [[ IntEx system ]] achieves better performance without the labor intensive pattern engineering requirement .
458 | 3 | This paper introduces a [[ method ]] for << computational analysis of move structures >> in abstracts of research articles .
459 | 3 | This paper introduces a method for << computational analysis of move structures >> in [[ abstracts of research articles ]] .
460 | 3 | The method involves automatically gathering a large number of << abstracts >> from the [[ Web ]] and building a language model of abstract moves .
461 | 3 | The method involves automatically gathering a large number of abstracts from the Web and building a << language model >> of [[ abstract moves ]] .
462 | 2 | We also present a << prototype concordancer >> , [[ CARE ]] , which exploits the move-tagged abstracts for digital learning .
463 | 3 | We also present a prototype concordancer , [[ CARE ]] , which exploits the << move-tagged abstracts >> for digital learning .
464 | 3 | We also present a prototype concordancer , CARE , which exploits the [[ move-tagged abstracts ]] for << digital learning >> .
465 | 3 | This [[ system ]] provides a promising << approach >> to Web-based computer-assisted academic writing .
466 | 3 | This system provides a promising [[ approach ]] to << Web-based computer-assisted academic writing >> .
467 | 3 | This work presents a [[ real-time system ]] for << multiple object tracking in dynamic scenes >> .
468 | 3 | A unique characteristic of the [[ system ]] is its ability to cope with << long-duration and complete occlusion >> without a prior knowledge about the shape or motion of objects .
469 | 1 | A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a [[ prior knowledge ]] about the << shape >> or motion of objects .
470 | 1 | A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a [[ prior knowledge ]] about the shape or << motion of objects >> .
471 | 0 | A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a prior knowledge about the [[ shape ]] or << motion of objects >> .
472 | 6 | The << system >> produces good segment and [[ tracking ]] results at a frame rate of 15-20 fps for image size of 320x240 , as demonstrated by extensive experiments performed using video sequences under different conditions indoor and outdoor with long-duration and complete occlusions in changing background .
473 | 3 | We propose a [[ method ]] of << organizing reading materials >> for vocabulary learning .
474 | 3 | We propose a method of [[ organizing reading materials ]] for << vocabulary learning >> .
475 | 2 | We used a specialized vocabulary for an English certification test as the target vocabulary and used [[ English Wikipedia ]] , a << free-content encyclopedia >> , as the target corpus .
476 | 3 | A novel [[ bootstrapping approach ]] to << Named Entity -LRB- NE -RRB- tagging >> using concept-based seeds and successive learners is presented .
477 | 3 | A novel << bootstrapping approach >> to Named Entity -LRB- NE -RRB- tagging using [[ concept-based seeds ]] and successive learners is presented .
478 | 0 | A novel bootstrapping approach to Named Entity -LRB- NE -RRB- tagging using [[ concept-based seeds ]] and << successive learners >> is presented .
479 | 3 | A novel << bootstrapping approach >> to Named Entity -LRB- NE -RRB- tagging using concept-based seeds and [[ successive learners ]] is presented .
480 | 2 | This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted << NE >> , e.g. he/she/man / woman for [[ PERSON NE ]] .
481 | 3 | The << bootstrapping procedure >> is implemented as training two [[ successive learners ]] .
482 | 3 | First , [[ decision list ]] is used to learn the << parsing-based NE rules >> .
483 | 3 | The resulting [[ NE system ]] approaches << supervised NE >> performance for some NE types .
484 | 6 | We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on [[ word sense disambiguation ]] performance , using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task .
485 | 6 | We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on word sense disambiguation performance , using standard [[ WSD evaluation methodology ]] and datasets from the Senseval-3 Chinese lexical sample task .
486 | 6 | We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on word sense disambiguation performance , using standard WSD evaluation methodology and datasets from the [[ Senseval-3 Chinese lexical sample task ]] .
487 | 6 | Much effort has been put in designing and evaluating << dedicated word sense disambiguation -LRB- WSD -RRB- models >> , in particular with the [[ Senseval series of workshops ]] .
488 | 6 | At the same time , the recent improvements in the [[ BLEU scores ]] of << statistical machine translation -LRB- SMT -RRB- >> suggests that SMT models are good at predicting the right translation of the words in source language sentences .
489 | 3 | At the same time , the recent improvements in the BLEU scores of statistical machine translation -LRB- SMT -RRB- suggests that [[ SMT models ]] are good at predicting the right << translation >> of the words in source language sentences .
490 | 6 | Surprisingly however , the [[ WSD accuracy ]] of << SMT models >> has never been evaluated and compared with that of the dedicated WSD models .
491 | 5 | Surprisingly however , the << WSD accuracy >> of SMT models has never been evaluated and compared with [[ that ]] of the dedicated WSD models .
492 | 6 | We present controlled experiments showing the [[ WSD accuracy ]] of current typical << SMT models >> to be significantly lower than that of all the dedicated WSD models considered .
493 | 5 | We present controlled experiments showing the << WSD accuracy >> of current typical SMT models to be significantly lower than [[ that ]] of all the dedicated WSD models considered .
494 | 5 | This tends to support the view that despite recent speculative claims to the contrary , current [[ SMT models ]] do have limitations in comparison with << dedicated WSD models >> , and that SMT should benefit from the better predictions made by the WSD models .
495 | 3 | This tends to support the view that despite recent speculative claims to the contrary , current SMT models do have limitations in comparison with dedicated WSD models , and that << SMT >> should benefit from the better predictions made by the [[ WSD models ]] .
496 | 3 | In this paper we present a novel , customizable : << IE paradigm >> that takes advantage of [[ predicate-argument structures ]] .
497 | 3 | << It >> is based on : -LRB- 1 -RRB- an extended set of [[ features ]] ; and -LRB- 2 -RRB- inductive decision tree learning .
498 | 0 | It is based on : -LRB- 1 -RRB- an extended set of [[ features ]] ; and -LRB- 2 -RRB- << inductive decision tree learning >> .
499 | 3 | << It >> is based on : -LRB- 1 -RRB- an extended set of features ; and -LRB- 2 -RRB- [[ inductive decision tree learning ]] .