Column schema of the preview (rows 300–399 are shown below):

| column | dtype | min | max |
| --- | --- | --- | --- |
| Unnamed: 0 | int64 | 0 | 3.22k |
| text | string (length) | 49 | 577 |
| id | int64 | 0 | 3.22k |
| label | int64 | 0 | 6 |
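A minimal sketch of how such a dump could be loaded and checked against the schema above, assuming the data sits in a CSV file; the file name `relations.csv` is a placeholder, since the preview does not name its source:

```python
# Minimal sketch, assuming the dump comes from a CSV file;
# "relations.csv" is a placeholder name, not given by the preview.
import pandas as pd

df = pd.read_csv("relations.csv")  # columns: "Unnamed: 0", "text", "id", "label"

# Sanity checks mirroring the schema summary above:
print(df.shape)                                  # ~3.22k rows expected
print(df["text"].str.len().agg(["min", "max"]))  # string lengths 49..577
print(df["label"].value_counts().sort_index())   # integer labels 0..6

# "Unnamed: 0" duplicates "id" in every row visible in this preview.
assert (df["Unnamed: 0"] == df["id"]).all()
```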
| Unnamed: 0 | text | id | label |
| --- | --- | --- | --- |
| 300 | For << categorization task >> , positive feature vectors and [[ negative feature vectors ]] are used cooperatively to construct generic , indicative summaries . | 300 | 3 |
| 301 | For categorization task , positive feature vectors and [[ negative feature vectors ]] are used cooperatively to construct << generic , indicative summaries >> . | 301 | 3 |
| 302 | For << adhoc task >> , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant discourse segment , to rank relevant sentences , and to generate the user-directed summaries . | 302 | 3 |
| 303 | For adhoc task , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant << discourse segment >> , to rank relevant sentences , and to generate the user-directed summaries . | 303 | 3 |
| 304 | For adhoc task , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant discourse segment , to rank relevant sentences , and to generate the << user-directed summaries >> . | 304 | 3 |
| 305 | The result shows that the [[ NormF ]] of the best summary and that of the fixed summary for << adhoc tasks >> are 0.456 and 0 . | 305 | 6 |
| 306 | The [[ NormF ]] of the best summary and that of the fixed summary for << categorization task >> are 0.4090 and 0.4023 . | 306 | 6 |
| 307 | Our [[ system ]] outperforms the average << system >> in categorization task but does a common job in adhoc task . | 307 | 5 |
| 308 | Our << system >> outperforms the average system in [[ categorization task ]] but does a common job in adhoc task . | 308 | 6 |
| 309 | Our system outperforms the average << system >> in [[ categorization task ]] but does a common job in adhoc task . | 309 | 6 |
| 310 | Our << system >> outperforms the average system in categorization task but does a common job in [[ adhoc task ]] . | 310 | 6 |
| 311 | Our system outperforms the average system in << categorization task >> but does a common job in [[ adhoc task ]] . | 311 | 6 |
| 312 | In real-world action recognition problems , low-level features can not adequately characterize the [[ rich spatial-temporal structures ]] in << action videos >> . | 312 | 1 |
| 313 | The second type is << data-driven attributes >> , which are learned from data using [[ dictionary learning methods ]] . | 313 | 3 |
| 314 | We propose a << discriminative and compact attribute-based representation >> by selecting a subset of [[ discriminative attributes ]] from a large attribute set . | 314 | 3 |
| 315 | Three << attribute selection criteria >> are proposed and formulated as a [[ submodular optimization problem ]] . | 315 | 3 |
| 316 | Experimental results on the [[ Olympic Sports and UCF101 datasets ]] demonstrate that the proposed << attribute-based representation >> can significantly boost the performance of action recognition algorithms and outperform most recently proposed recognition approaches . | 316 | 6 |
| 317 | Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed [[ attribute-based representation ]] can significantly boost the performance of << action recognition algorithms >> and outperform most recently proposed recognition approaches . | 317 | 3 |
| 318 | Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed attribute-based representation can significantly boost the performance of [[ action recognition algorithms ]] and outperform most recently proposed << recognition approaches >> . | 318 | 5 |
| 319 | Landsbergen 's advocacy of [[ analytical inverses ]] for << compositional syntax rules >> encourages the application of Definite Clause Grammar techniques to the construction of a parser returning Montague analysis trees . | 319 | 3 |
| 320 | Landsbergen 's advocacy of [[ analytical inverses ]] for compositional syntax rules encourages the application of << Definite Clause Grammar techniques >> to the construction of a parser returning Montague analysis trees . | 320 | 3 |
| 321 | Landsbergen 's advocacy of analytical inverses for compositional syntax rules encourages the application of [[ Definite Clause Grammar techniques ]] to the construction of a << parser returning Montague analysis trees >> . | 321 | 3 |
| 322 | A << parser MDCC >> is presented which implements an [[ augmented Friedman - Warren algorithm ]] permitting post referencing * and interfaces with a language of intenslonal logic translator LILT so as to display the derivational history of corresponding reduced IL formulae . | 322 | 3 |
| 323 | A parser MDCC is presented which implements an << augmented Friedman - Warren algorithm >> permitting [[ post referencing ]] * and interfaces with a language of intenslonal logic translator LILT so as to display the derivational history of corresponding reduced IL formulae . | 323 | 1 |
| 324 | A parser MDCC is presented which implements an augmented Friedman - Warren algorithm permitting post referencing * and interfaces with a language of << intenslonal logic translator LILT >> so as to display the [[ derivational history ]] of corresponding reduced IL formulae . | 324 | 3 |
| 325 | A parser MDCC is presented which implements an augmented Friedman - Warren algorithm permitting post referencing * and interfaces with a language of intenslonal logic translator LILT so as to display the << derivational history >> of corresponding [[ reduced IL formulae ]] . | 325 | 1 |
| 326 | Some familiarity with [[ Montague 's PTQ ]] and the << basic DCG mechanism >> is assumed . | 326 | 0 |
| 327 | << Stochastic attention-based models >> have been shown to improve [[ computational efficiency ]] at test time , but they remain difficult to train because of intractable posterior inference and high variance in the stochastic gradient estimates . | 327 | 6 |
| 328 | Stochastic attention-based models have been shown to improve computational efficiency at test time , but they remain difficult to train because of [[ intractable posterior inference ]] and high variance in the << stochastic gradient estimates >> . | 328 | 0 |
| 329 | [[ Borrowing techniques ]] from the literature on training << deep generative models >> , we present the Wake-Sleep Recurrent Attention Model , a method for training stochastic attention networks which improves posterior inference and which reduces the variability in the stochastic gradients . | 329 | 3 |
| 330 | Borrowing techniques from the literature on training deep generative models , we present the Wake-Sleep Recurrent Attention Model , a [[ method ]] for training << stochastic attention networks >> which improves posterior inference and which reduces the variability in the stochastic gradients . | 330 | 3 |
| 331 | Borrowing techniques from the literature on training deep generative models , we present the Wake-Sleep Recurrent Attention Model , a method for training [[ stochastic attention networks ]] which improves << posterior inference >> and which reduces the variability in the stochastic gradients . | 331 | 3 |
| 332 | We show that our << method >> can greatly speed up the [[ training time ]] for stochastic attention networks in the domains of image classification and caption generation . | 332 | 6 |
| 333 | We show that our method can greatly speed up the [[ training time ]] for << stochastic attention networks >> in the domains of image classification and caption generation . | 333 | 1 |
| 334 | We show that our << method >> can greatly speed up the training time for stochastic attention networks in the domains of [[ image classification ]] and caption generation . | 334 | 6 |
| 335 | We show that our method can greatly speed up the training time for stochastic attention networks in the domains of [[ image classification ]] and << caption generation >> . | 335 | 0 |
| 336 | We show that our << method >> can greatly speed up the training time for stochastic attention networks in the domains of image classification and [[ caption generation ]] . | 336 | 6 |
| 337 | A new [[ exemplar-based framework ]] unifying << image completion >> , texture synthesis and image inpainting is presented in this work . | 337 | 3 |
| 338 | A new [[ exemplar-based framework ]] unifying image completion , << texture synthesis >> and image inpainting is presented in this work . | 338 | 3 |
| 339 | A new [[ exemplar-based framework ]] unifying image completion , texture synthesis and << image inpainting >> is presented in this work . | 339 | 3 |
| 340 | A new exemplar-based framework unifying [[ image completion ]] , << texture synthesis >> and image inpainting is presented in this work . | 340 | 0 |
| 341 | A new exemplar-based framework unifying image completion , [[ texture synthesis ]] and << image inpainting >> is presented in this work . | 341 | 0 |
| 342 | Contrary to existing [[ greedy techniques ]] , these << tasks >> are posed in the form of a discrete global optimization problem with a well defined objective function . | 342 | 5 |
| 343 | Contrary to existing greedy techniques , these << tasks >> are posed in the form of a [[ discrete global optimization problem ]] with a well defined objective function . | 343 | 1 |
| 344 | Contrary to existing greedy techniques , these tasks are posed in the form of a << discrete global optimization problem >> with a [[ well defined objective function ]] . | 344 | 1 |
| 345 | For solving this << problem >> a novel [[ optimization scheme ]] , called Priority-BP , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' . | 345 | 3 |
| 346 | For solving this problem a novel << optimization scheme >> , called [[ Priority-BP ]] , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' . | 346 | 2 |
| 347 | For solving this problem a novel << optimization scheme >> , called Priority-BP , is proposed which carries two very important [[ extensions ]] over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' . | 347 | 4 |
| 348 | For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard [[ belief propagation -LRB- BP -RRB- ]] : '' priority-based message scheduling '' and '' dynamic label pruning '' . | 348 | 3 |
| 349 | For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard belief propagation -LRB- BP -RRB- : '' [[ priority-based message scheduling ]] '' and '' dynamic label pruning '' . | 349 | 2 |
| 350 | For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' [[ priority-based message scheduling ]] '' and '' << dynamic label pruning >> '' . | 350 | 0 |
| 351 | For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' [[ dynamic label pruning ]] '' . | 351 | 2 |
| 352 | These two [[ extensions ]] work in cooperation to deal with the << intolerable computational cost of BP >> caused by the huge number of existing labels . | 352 | 3 |
| 353 | Moreover , both [[ extensions ]] are generic and can therefore be applied to any << MRF energy function >> as well . | 353 | 3 |
| 354 | The effectiveness of our << method >> is demonstrated on a wide variety of [[ image completion examples ]] . | 354 | 3 |
| 355 | In this paper , we compare the relative effects of [[ segment order ]] , << segmentation >> and segment contiguity on the retrieval performance of a translation memory system . | 355 | 0 |
| 356 | In this paper , we compare the relative effects of [[ segment order ]] , segmentation and segment contiguity on the retrieval performance of a << translation memory system >> . | 356 | 3 |
| 357 | In this paper , we compare the relative effects of segment order , [[ segmentation ]] and << segment contiguity >> on the retrieval performance of a translation memory system . | 357 | 0 |
| 358 | In this paper , we compare the relative effects of segment order , [[ segmentation ]] and segment contiguity on the retrieval performance of a << translation memory system >> . | 358 | 3 |
| 359 | In this paper , we compare the relative effects of segment order , segmentation and [[ segment contiguity ]] on the retrieval performance of a << translation memory system >> . | 359 | 3 |
| 360 | In this paper , we compare the relative effects of segment order , segmentation and segment contiguity on the [[ retrieval ]] performance of a << translation memory system >> . | 360 | 6 |
| 361 | We take a selection of both << bag-of-words and segment order-sensitive string comparison methods >> , and run each over both [[ character - and word-segmented data ]] , in combination with a range of local segment contiguity models -LRB- in the form of N-grams -RRB- . | 361 | 3 |
| 362 | We take a selection of both << bag-of-words and segment order-sensitive string comparison methods >> , and run each over both character - and word-segmented data , in combination with a range of [[ local segment contiguity models ]] -LRB- in the form of N-grams -RRB- . | 362 | 0 |
| 363 | We take a selection of both bag-of-words and segment order-sensitive string comparison methods , and run each over both character - and word-segmented data , in combination with a range of << local segment contiguity models >> -LRB- in the form of [[ N-grams ]] -RRB- . | 363 | 1 |
| 364 | Over two distinct datasets , we find that << indexing >> according to simple [[ character bigrams ]] produces a retrieval accuracy superior to any of the tested word N-gram models . | 364 | 3 |
| 365 | Over two distinct datasets , we find that indexing according to simple [[ character bigrams ]] produces a retrieval accuracy superior to any of the tested << word N-gram models >> . | 365 | 5 |
| 366 | Over two distinct datasets , we find that indexing according to simple << character bigrams >> produces a [[ retrieval accuracy ]] superior to any of the tested word N-gram models . | 366 | 6 |
| 367 | Over two distinct datasets , we find that indexing according to simple character bigrams produces a [[ retrieval accuracy ]] superior to any of the tested << word N-gram models >> . | 367 | 6 |
| 368 | Further , in their optimum configuration , [[ bag-of-words methods ]] are shown to be equivalent to << segment order-sensitive methods >> in terms of retrieval accuracy , but much faster . | 368 | 5 |
| 369 | Further , in their optimum configuration , << bag-of-words methods >> are shown to be equivalent to segment order-sensitive methods in terms of [[ retrieval accuracy ]] , but much faster . | 369 | 6 |
| 370 | Further , in their optimum configuration , bag-of-words methods are shown to be equivalent to << segment order-sensitive methods >> in terms of [[ retrieval accuracy ]] , but much faster . | 370 | 6 |
| 371 | In this paper we show how two standard [[ outputs ]] from information extraction -LRB- IE -RRB- systems - named entity annotations and scenario templates - can be used to enhance access to << text collections >> via a standard text browser . | 371 | 3 |
| 372 | In this paper we show how two standard << outputs >> from information extraction -LRB- IE -RRB- systems - [[ named entity annotations ]] and scenario templates - can be used to enhance access to text collections via a standard text browser . | 372 | 2 |
| 373 | In this paper we show how two standard outputs from information extraction -LRB- IE -RRB- systems - [[ named entity annotations ]] and << scenario templates >> - can be used to enhance access to text collections via a standard text browser . | 373 | 0 |
| 374 | In this paper we show how two standard << outputs >> from information extraction -LRB- IE -RRB- systems - named entity annotations and [[ scenario templates ]] - can be used to enhance access to text collections via a standard text browser . | 374 | 2 |
| 375 | In this paper we show how two standard outputs from information extraction -LRB- IE -RRB- systems - named entity annotations and scenario templates - can be used to enhance access to << text collections >> via a standard [[ text browser ]] . | 375 | 3 |
| 376 | We describe how this information is used in a [[ prototype system ]] designed to support information workers ' access to a << pharmaceutical news archive >> as part of their industry watch function . | 376 | 3 |
| 377 | We also report results of a preliminary , [[ qualitative user evaluation ]] of the << system >> , which while broadly positive indicates further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers . | 377 | 6 |
| 378 | We present a new [[ model-based bundle adjustment algorithm ]] to recover the << 3D model >> of a scene/object from a sequence of images with unknown motions . | 378 | 3 |
| 379 | We present a new model-based bundle adjustment algorithm to recover the << 3D model >> of a scene/object from a sequence of [[ images ]] with unknown motions . | 379 | 3 |
| 380 | We present a new model-based bundle adjustment algorithm to recover the 3D model of a scene/object from a sequence of << images >> with [[ unknown motions ]] . | 380 | 4 |
| 381 | Instead of representing scene/object by a collection of isolated 3D features -LRB- usually points -RRB- , our << algorithm >> uses a [[ surface ]] controlled by a small set of parameters . | 381 | 3 |
| 382 | Compared with previous [[ model-based approaches ]] , our << approach >> has the following advantages . | 382 | 5 |
| 383 | First , instead of using the [[ model space ]] as a << regular-izer >> , we directly use it as our search space , thus resulting in a more elegant formulation with fewer unknowns and fewer equations . | 383 | 3 |
| 384 | First , instead of using the model space as a [[ regular-izer ]] , we directly use it as our << search space >> , thus resulting in a more elegant formulation with fewer unknowns and fewer equations . | 384 | 5 |
| 385 | First , instead of using the model space as a regular-izer , we directly use [[ it ]] as our << search space >> , thus resulting in a more elegant formulation with fewer unknowns and fewer equations . | 385 | 3 |
| 386 | Third , regarding << face modeling >> , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller search space and a better posed system . | 386 | 3 |
| 387 | Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the << face geometry >> , resulting in a smaller search space and a better posed system . | 387 | 3 |
| 388 | Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller << search space >> and a better posed system . | 388 | 3 |
| 389 | Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller search space and a better << posed system >> . | 389 | 3 |
| 390 | Experiments with both [[ synthetic and real data ]] show that this new << algorithm >> is faster , more accurate and more stable than existing ones . | 390 | 6 |
| 391 | Experiments with both [[ synthetic and real data ]] show that this new algorithm is faster , more accurate and more stable than existing << ones >> . | 391 | 6 |
| 392 | Experiments with both synthetic and real data show that this new [[ algorithm ]] is faster , more accurate and more stable than existing << ones >> . | 392 | 5 |
| 393 | This paper presents an [[ approach ]] to the << unsupervised learning of parts of speech >> which uses both morphological and syntactic information . | 393 | 3 |
| 394 | This paper presents an << approach >> to the unsupervised learning of parts of speech which uses both [[ morphological and syntactic information ]] . | 394 | 3 |
| 395 | While the [[ model ]] is more complex than << those >> which have been employed for unsupervised learning of POS tags in English , which use only syntactic information , the variety of languages in the world requires that we consider morphology as well . | 395 | 5 |
| 396 | While the model is more complex than [[ those ]] which have been employed for << unsupervised learning of POS tags in English >> , which use only syntactic information , the variety of languages in the world requires that we consider morphology as well . | 396 | 3 |
| 397 | While the model is more complex than << those >> which have been employed for unsupervised learning of POS tags in English , which use only [[ syntactic information ]] , the variety of languages in the world requires that we consider morphology as well . | 397 | 3 |
| 398 | In many languages , [[ morphology ]] provides better clues to a word 's category than << word order >> . | 398 | 5 |
| 399 | We present the [[ computational model ]] for << POS learning >> , and present results for applying it to Bulgarian , a Slavic language with relatively free word order and rich morphology . | 399 | 3 |
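In every row shown, the `text` cell marks exactly two entity mentions, one in `[[ ... ]]` and one in `<< ... >>`, and `label` gives the relation class as an integer in 0–6 (the class-name mapping is not part of this dump). A minimal sketch of how the two spans could be pulled out of a row:

```python
# Minimal sketch: extract the two marked entity mentions from a "text" cell.
# Assumes one "[[ ... ]]" span and one "<< ... >>" span per row, which holds
# for every row visible in this preview.
import re

BRACKET = re.compile(r"\[\[ (.+?) \]\]")
ANGLE = re.compile(r"<< (.+?) >>")

def extract_pair(text: str) -> tuple[str, str]:
    """Return the ([[ ]]-marked, << >>-marked) entity mentions."""
    m1 = BRACKET.search(text)
    m2 = ANGLE.search(text)
    if m1 is None or m2 is None:
        raise ValueError("expected one span of each kind")
    return m1.group(1), m2.group(1)

row = ("For << categorization task >> , positive feature vectors and "
       "[[ negative feature vectors ]] are used cooperatively to construct "
       "generic , indicative summaries .")
print(extract_pair(row))  # ('negative feature vectors', 'categorization task')
```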