Rows 600–699 of a relation-classification dataset. Column schema, as reported by the dataset viewer:

- Unnamed: 0: int64, 0 to 3.22k
- text: string, 49 to 577 characters
- id: int64, 0 to 3.22k
- label: int64, 0 to 6 (seven classes)

In every row shown here, Unnamed: 0 duplicates id, so each row below is rendered as: id | label | text.
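The text field is Penn Treebank-tokenized, so brackets appear as -LRB- / -RRB- / -LSB- / -RSB-, and each sentence marks one entity pair with [[ ... ]] and << ... >>; the integer label presumably encodes the relation class for that pair. A minimal parsing sketch under those assumptions (the function name and detokenization map are illustrative, not part of the dataset):

```python
import re

# PTB-style bracket tokens used inside the text field.
PTB_BRACKETS = {"-LRB-": "(", "-RRB-": ")", "-LSB-": "[", "-RSB-": "]"}

def extract_entity_pair(text):
    """Pull out the [[ ... ]] and << ... >> spans and restore bracket tokens."""
    detok = lambda s: " ".join(PTB_BRACKETS.get(tok, tok) for tok in s.split())
    arg1 = re.search(r"\[\[(.+?)\]\]", text)
    arg2 = re.search(r"<<(.+?)>>", text)
    return detok(arg1.group(1)), detok(arg2.group(1))

# Row 600 from the table below:
row_600 = ("We show that the newly proposed concept-distance measures outperform "
           "traditional [[ distributional word-distance measures ]] in the "
           "<< tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic "
           "distance , and -LRB- 2 -RRB- correcting real-word spelling errors .")
print(extract_entity_pair(row_600))
# -> ('distributional word-distance measures', 'tasks')
```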
id | label | text
600 | 3 | We show that the newly proposed concept-distance measures outperform traditional [[ distributional word-distance measures ]] in the << tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- correcting real-word spelling errors .
601 | 2 | We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the << tasks >> of -LRB- 1 -RRB- [[ ranking word pairs in order of semantic distance ]] , and -LRB- 2 -RRB- correcting real-word spelling errors .
602 | 2 | We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the << tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- [[ correcting real-word spelling errors ]] .
603 | 0 | We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the tasks of -LRB- 1 -RRB- << ranking word pairs in order of semantic distance >> , and -LRB- 2 -RRB- [[ correcting real-word spelling errors ]] .
604 | 6 | In the latter [[ task ]] , of all the << WordNet-based measures >> , only that proposed by Jiang and Conrath outperforms the best distributional concept-distance measures .
605 | 6 | In the latter [[ task ]] , of all the WordNet-based measures , only that proposed by Jiang and Conrath outperforms the best << distributional concept-distance measures >> .
606 | 5 | In the latter task , of all the << WordNet-based measures >> , only that proposed by Jiang and Conrath outperforms the best [[ distributional concept-distance measures ]] .
607 | 0 | One of the main results of this work is the definition of a relation between [[ broad semantic classes ]] and << LCS meaning components >> .
608 | 3 | Our [[ acquisition program - LEXICALL - ]] takes , as input , the result of previous work on verb classification and thematic grid tagging , and outputs << LCS representations >> for different languages .
609 | 3 | Our << acquisition program - LEXICALL - >> takes , as input , the result of previous work on [[ verb classification ]] and thematic grid tagging , and outputs LCS representations for different languages .
610 | 0 | Our acquisition program - LEXICALL - takes , as input , the result of previous work on [[ verb classification ]] and << thematic grid tagging >> , and outputs LCS representations for different languages .
611 | 3 | Our << acquisition program - LEXICALL - >> takes , as input , the result of previous work on verb classification and [[ thematic grid tagging ]] , and outputs LCS representations for different languages .
612 | 3 | These [[ representations ]] have been ported into << English , Arabic and Spanish lexicons >> , each containing approximately 9000 verbs .
613 | 3 | We are currently using these [[ lexicons ]] in an << operational foreign language tutoring >> and machine translation .
614 | 3 | We are currently using these [[ lexicons ]] in an operational foreign language tutoring and << machine translation >> .
615 | 0 | We are currently using these lexicons in an [[ operational foreign language tutoring ]] and << machine translation >> .
616 | 3 | The theoretical study of the [[ range concatenation grammar -LSB- RCG -RSB- formalism ]] has revealed many attractive properties which may be used in << NLP >> .
617 | 1 | In particular , << range concatenation languages -LSB- RCL -RSB- >> can be parsed in [[ polynomial time ]] and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity .
618 | 6 | In particular , range concatenation languages -LSB- RCL -RSB- can be parsed in polynomial time and many classical << grammatical formalisms >> can be translated into equivalent RCGs without increasing their [[ worst-case parsing time complexity ]] .
619 | 1 | For example , after translation into an equivalent RCG , any << tree adjoining grammar >> can be parsed in [[ O -LRB- n6 -RRB- time ]] .
620 | 3 | In this paper , we study a [[ parsing technique ]] whose purpose is to improve the practical efficiency of << RCL parsers >> .
621 | 3 | The non-deterministic parsing choices of the [[ main parser ]] for a << language L >> are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L .
622 | 3 | The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the << shared derivation forest >> output by a prior [[ RCL parser ]] for a suitable superset of L .
623 | 6 | The results of a practical evaluation of this << method >> on a [[ wide coverage English grammar ]] are given .
624 | 3 | In this paper we introduce [[ Ant-Q ]] , a family of algorithms which present many similarities with Q-learning -LRB- Watkins , 1989 -RRB- , and which we apply to the solution of << symmetric and asym-metric instances of the traveling salesman problem -LRB- TSP -RRB- >> .
625 | 3 | << Ant-Q algorithms >> were inspired by work on the [[ ant system -LRB- AS -RRB- ]] , a distributed algorithm for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .
626 | 2 | Ant-Q algorithms were inspired by work on the [[ ant system -LRB- AS -RRB- ]] , a << distributed algorithm >> for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .
627 | 3 | Ant-Q algorithms were inspired by work on the ant system -LRB- AS -RRB- , a [[ distributed algorithm ]] for << combinatorial optimization >> based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .
628 | 2 | We show that [[ AS ]] is a particular instance of the << Ant-Q family >> , and that there are instances of this family which perform better than AS .
629 | 4 | We show that AS is a particular instance of the Ant-Q family , and that there are [[ instances ]] of this << family >> which perform better than AS .
630 | 5 | We show that AS is a particular instance of the Ant-Q family , and that there are [[ instances ]] of this family which perform better than << AS >> .
631 | 3 | We experimentally investigate the functioning of Ant-Q and we show that the results obtained by [[ Ant-Q ]] on << symmetric TSP >> 's are competitive with those obtained by other heuristic approaches based on neural networks or local search .
632 | 5 | We experimentally investigate the functioning of Ant-Q and we show that the results obtained by [[ Ant-Q ]] on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on neural networks or local search .
633 | 3 | We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on [[ neural networks ]] or local search .
634 | 0 | We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other heuristic approaches based on [[ neural networks ]] or << local search >> .
635 | 3 | We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on neural networks or [[ local search ]] .
636 | 3 | Finally , we apply [[ Ant-Q ]] to some difficult << asymmetric TSP >> 's obtaining very good results : Ant-Q was able to find solutions of a quality which usually can be found only by very specialized algorithms .
637 | 3 | In this paper , we develop a [[ geometric framework ]] for << linear or nonlinear discriminant subspace learning and classification >> .
638 | 3 | In our framework , the << structures of classes >> are conceptualized as a [[ semi-Riemannian manifold ]] which is considered as a submanifold embedded in an ambient semi-Riemannian space .
639 | 4 | In our framework , the structures of classes are conceptualized as a semi-Riemannian manifold which is considered as a [[ submanifold ]] embedded in an << ambient semi-Riemannian space >> .
640 | 3 | The << class structures >> of original samples can be characterized and deformed by [[ local metrics of the semi-Riemannian space ]] .
641 | 3 | << Semi-Riemannian metrics >> are uniquely determined by the [[ smoothing of discrete functions ]] and the nullity of the semi-Riemannian space .
642 | 0 | Semi-Riemannian metrics are uniquely determined by the [[ smoothing of discrete functions ]] and the << nullity of the semi-Riemannian space >> .
643 | 3 | << Semi-Riemannian metrics >> are uniquely determined by the smoothing of discrete functions and the [[ nullity of the semi-Riemannian space ]] .
644 | 1 | Based on the geometrization of class structures , optimizing << class structures >> in the [[ feature space ]] is equivalent to maximizing the quadratic quantities of metric tensors in the semi-Riemannian space .
645 | 1 | Based on the geometrization of class structures , optimizing class structures in the feature space is equivalent to maximizing the << quadratic quantities of metric tensors >> in the [[ semi-Riemannian space ]] .
646 | 3 | Based on the proposed [[ framework ]] , a novel << algorithm >> , dubbed as Semi-Riemannian Discriminant Analysis -LRB- SRDA -RRB- , is presented for subspace-based classification .
647 | 3 | Based on the proposed framework , a novel [[ algorithm ]] , dubbed as Semi-Riemannian Discriminant Analysis -LRB- SRDA -RRB- , is presented for << subspace-based classification >> .
648 | 5 | The performance of [[ SRDA ]] is tested on face recognition -LRB- singular case -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing << algorithms >> .
649 | 6 | The performance of << SRDA >> is tested on [[ face recognition -LRB- singular case ]] -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing algorithms .
650 | 0 | The performance of SRDA is tested on [[ face recognition -LRB- singular case ]] -RRB- and << handwritten capital letter classification -LRB- nonsingular case -RRB- >> against existing algorithms .
651 | 6 | The performance of SRDA is tested on [[ face recognition -LRB- singular case ]] -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing << algorithms >> .
652 | 6 | The performance of << SRDA >> is tested on face recognition -LRB- singular case -RRB- and [[ handwritten capital letter classification -LRB- nonsingular case -RRB- ]] against existing algorithms .
653 | 6 | The performance of SRDA is tested on face recognition -LRB- singular case -RRB- and [[ handwritten capital letter classification -LRB- nonsingular case -RRB- ]] against existing << algorithms >> .
654 | 3 | The experimental results show that [[ SRDA ]] works well on << recognition >> and classification , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .
655 | 3 | The experimental results show that [[ SRDA ]] works well on recognition and << classification >> , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .
656 | 0 | The experimental results show that SRDA works well on [[ recognition ]] and << classification >> , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .
657 | 3 | The experimental results show that SRDA works well on recognition and classification , implying that [[ semi-Riemannian geometry ]] is a promising new tool for << pattern recognition >> and machine learning .
658 | 3 | The experimental results show that SRDA works well on recognition and classification , implying that [[ semi-Riemannian geometry ]] is a promising new tool for pattern recognition and << machine learning >> .
659 | 0 | The experimental results show that SRDA works well on recognition and classification , implying that semi-Riemannian geometry is a promising new tool for [[ pattern recognition ]] and << machine learning >> .
660 | 5 | A [[ deterministic parser ]] is under development which represents a departure from traditional << deterministic parsers >> in that it combines both symbolic and connectionist components .
661 | 4 | A deterministic parser is under development which represents a departure from traditional deterministic parsers in that << it >> combines both [[ symbolic and connectionist components ]] .
662 | 3 | The << connectionist component >> is trained either from [[ patterns ]] derived from the rules of a deterministic grammar .
663 | 3 | The connectionist component is trained either from << patterns >> derived from the [[ rules of a deterministic grammar ]] .
664 | 3 | The development and evolution of such a [[ hybrid architecture ]] has lead to a << parser >> which is superior to any known deterministic parser .
665 | 5 | The development and evolution of such a hybrid architecture has lead to a [[ parser ]] which is superior to any known << deterministic parser >> .
666 | 3 | Experiments are described and powerful [[ training techniques ]] are demonstrated that permit << decision-making >> by the connectionist component in the parsing process .
667 | 3 | Experiments are described and powerful training techniques are demonstrated that permit << decision-making >> by the [[ connectionist component ]] in the parsing process .
668 | 4 | Experiments are described and powerful training techniques are demonstrated that permit decision-making by the [[ connectionist component ]] in the << parsing process >> .
669 | 3 | Data are presented which show how a [[ connectionist -LRB- neural -RRB- network ]] trained with linguistic rules can parse both << expected -LRB- grammatical -RRB- sentences >> as well as some novel -LRB- ungrammatical or lexically ambiguous -RRB- sentences .
670 | 3 | Data are presented which show how a [[ connectionist -LRB- neural -RRB- network ]] trained with linguistic rules can parse both expected -LRB- grammatical -RRB- sentences as well as some novel << -LRB- ungrammatical or lexically ambiguous -RRB- sentences >> .
671 | 3 | Data are presented which show how a << connectionist -LRB- neural -RRB- network >> trained with [[ linguistic rules ]] can parse both expected -LRB- grammatical -RRB- sentences as well as some novel -LRB- ungrammatical or lexically ambiguous -RRB- sentences .
672 | 0 | Data are presented which show how a connectionist -LRB- neural -RRB- network trained with linguistic rules can parse both [[ expected -LRB- grammatical -RRB- sentences ]] as well as some novel << -LRB- ungrammatical or lexically ambiguous -RRB- sentences >> .
673 | 3 | Robust << natural language interpretation >> requires strong [[ semantic domain models ]] , fail-soft recovery heuristics , and very flexible control structures .
674 | 0 | Robust natural language interpretation requires strong [[ semantic domain models ]] , << fail-soft recovery heuristics >> , and very flexible control structures .
675 | 3 | Robust << natural language interpretation >> requires strong semantic domain models , [[ fail-soft recovery heuristics ]] , and very flexible control structures .
676 | 0 | Robust natural language interpretation requires strong semantic domain models , [[ fail-soft recovery heuristics ]] , and very flexible << control structures >> .
677 | 3 | Robust << natural language interpretation >> requires strong semantic domain models , fail-soft recovery heuristics , and very flexible [[ control structures ]] .
678 | 5 | Although [[ single-strategy parsers ]] have met with a measure of success , a << multi-strategy approach >> is shown to provide a much higher degree of flexibility , redundancy , and ability to bring task-specific domain knowledge -LRB- in addition to general linguistic knowledge -RRB- to bear on both grammatical and ungrammatical input .
679 | 0 | Although single-strategy parsers have met with a measure of success , a multi-strategy approach is shown to provide a much higher degree of flexibility , redundancy , and ability to bring [[ task-specific domain knowledge ]] -LRB- in addition to << general linguistic knowledge >> -RRB- to bear on both grammatical and ungrammatical input .
680 | 4 | A << parsing algorithm >> is presented that integrates several different [[ parsing strategies ]] , with case-frame instantiation dominating .
681 | 2 | A parsing algorithm is presented that integrates several different << parsing strategies >> , with [[ case-frame instantiation ]] dominating .
682 | 3 | Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process << conjunctions >> , fragmentary input , and ungrammatical structures , as well as less exotic , grammatically correct input .
683 | 3 | Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , << fragmentary input >> , and ungrammatical structures , as well as less exotic , grammatically correct input .
684 | 3 | Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and << ungrammatical structures >> , as well as less exotic , grammatically correct input .
685 | 3 | Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and ungrammatical structures , as well as less << exotic , grammatically correct input >> .
686 | 0 | Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process [[ conjunctions ]] , << fragmentary input >> , and ungrammatical structures , as well as less exotic , grammatically correct input .
687 | 0 | Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , [[ fragmentary input ]] , and << ungrammatical structures >> , as well as less exotic , grammatically correct input .
688 | 0 | Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and [[ ungrammatical structures ]] , as well as less << exotic , grammatically correct input >> .
689 | 3 | Several [[ specific heuristics ]] for handling << ungrammatical input >> are presented within this multi-strategy framework .
690 | 4 | Several [[ specific heuristics ]] for handling ungrammatical input are presented within this << multi-strategy framework >> .
691 | 3 | Recently , [[ Stacked Auto-Encoders -LRB- SAE -RRB- ]] have been successfully used for << learning imbalanced datasets >> .
692 | 3 | In this paper , for the first time , we propose to use a [[ Neural Network classifier ]] furnished by an SAE structure for detecting the errors made by a strong << Automatic Speech Recognition -LRB- ASR -RRB- system >> .
693 | 3 | In this paper , for the first time , we propose to use a << Neural Network classifier >> furnished by an [[ SAE structure ]] for detecting the errors made by a strong Automatic Speech Recognition -LRB- ASR -RRB- system .
694 | 3 | [[ Error detection ]] on an << automatic transcription >> provided by a '' strong '' ASR system , i.e. exhibiting a small word error rate , is difficult due to the limited number of '' positive '' examples -LRB- i.e. words erroneously recognized -RRB- available for training a binary classi-fier .
695 | 3 | In this paper we investigate and compare different types of [[ classifiers ]] for << automatically detecting ASR errors >> , including the one based on a stacked auto-encoder architecture .
696 | 2 | In this paper we investigate and compare different types of << classifiers >> for automatically detecting ASR errors , including the [[ one ]] based on a stacked auto-encoder architecture .
697 | 3 | In this paper we investigate and compare different types of classifiers for automatically detecting ASR errors , including the << one >> based on a [[ stacked auto-encoder architecture ]] .
698 | 1 | We show the effectiveness of the latter by measuring and comparing performance on the << automatic transcriptions >> of an [[ English corpus ]] collected from TED talks .
699 | 3 | We show the effectiveness of the latter by measuring and comparing performance on the automatic transcriptions of an << English corpus >> collected from [[ TED talks ]] .