,text,id,label
0,"This paper presents an [[ algorithm ]] for << computing optical flow , shape , motion , lighting , and albedo >> from an image sequence of a rigidly-moving Lambertian object under distant illumination .",0,3
1,"This paper presents an << algorithm >> for computing optical flow , shape , motion , lighting , and albedo from an [[ image sequence ]] of a rigidly-moving Lambertian object under distant illumination .",1,3
2,"This paper presents an algorithm for computing optical flow , shape , motion , lighting , and albedo from an << image sequence >> of a [[ rigidly-moving Lambertian object ]] under distant illumination .",2,1
3,"This paper presents an algorithm for computing optical flow , shape , motion , lighting , and albedo from an image sequence of a << rigidly-moving Lambertian object >> under [[ distant illumination ]] .",3,1
4,"The problem is formulated in a manner that subsumes structure from [[ motion ]] , << multi-view stereo >> , and photo-metric stereo as special cases .",4,0
5,"The problem is formulated in a manner that subsumes structure from motion , [[ multi-view stereo ]] , and << photo-metric stereo >> as special cases .",5,0
6,The << algorithm >> utilizes both [[ spatial and temporal intensity variation ]] as cues : the former constrains flow and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,6,3
7,The algorithm utilizes both spatial and temporal intensity variation as << cues >> : the [[ former ]] constrains flow and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,7,2
8,The algorithm utilizes both spatial and temporal intensity variation as cues : the [[ former ]] constrains << flow >> and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,8,3
9,The algorithm utilizes both spatial and temporal intensity variation as cues : the [[ former ]] constrains flow and the << latter >> constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,9,0
10,The algorithm utilizes both spatial and temporal intensity variation as << cues >> : the former constrains flow and the [[ latter ]] constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,10,2
11,The algorithm utilizes both spatial and temporal intensity variation as cues : the former constrains flow and the [[ latter ]] constrains << surface orientation >> ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .,11,3
12,The algorithm utilizes both spatial and temporal intensity variation as cues : the former constrains flow and the latter constrains surface orientation ; combining both [[ cues ]] enables << dense reconstruction of both textured and texture-less surfaces >> .,12,3
13,"The << algorithm >> works by iteratively [[ estimating affine camera parameters , illumination , shape , and albedo ]] in an alternating fashion .",13,3
14,An [[ entity-oriented approach ]] to << restricted-domain parsing >> is proposed .,14,3
15,"Like semantic grammar , [[ this ]] allows easy exploitation of << limited domain semantics >> .",15,3
16,"In addition , [[ it ]] facilitates << fragmentary recognition >> and the use of multiple parsing strategies , and so is particularly useful for robust recognition of extra-grammatical input .",16,3
17,"In addition , [[ it ]] facilitates fragmentary recognition and the use of << multiple parsing strategies >> , and so is particularly useful for robust recognition of extra-grammatical input .",17,3
18,"In addition , it facilitates fragmentary recognition and the use of [[ multiple parsing strategies ]] , and so is particularly useful for robust << recognition of extra-grammatical input >> .",18,3
19,"Representative samples from an entity-oriented language definition are presented , along with a [[ control structure ]] for an << entity-oriented parser >> , some parsing strategies that use the control structure , and worked examples of parses .",19,3
20,"Representative samples from an entity-oriented language definition are presented , along with a control structure for an entity-oriented parser , some << parsing strategies >> that use the [[ control structure ]] , and worked examples of parses .",20,3
21,A << parser >> incorporating the [[ control structure ]] and the parsing strategies is currently under implementation .,21,4
22,This paper summarizes the formalism of Category Cooccurrence Restrictions -LRB- CCRs -RRB- and describes two [[ parsing algorithms ]] that interpret << it >> .,22,3
23,The use of CCRs leads to << syntactic descriptions >> formulated entirely with [[ restrictive statements ]] .,23,1
24,The paper shows how conventional [[ algorithms ]] for the analysis of context free languages can be adapted to the << CCR formalism >> .,24,3
25,The paper shows how conventional << algorithms >> for the analysis of [[ context free languages ]] can be adapted to the CCR formalism .,25,3
26,Special attention is given to the part of the parser that checks the fulfillment of [[ logical well-formedness conditions ]] on << trees >> .,26,1
27,We present a [[ text mining method ]] for finding << synonymous expressions >> based on the distributional hypothesis in a set of coherent corpora .,27,3
28,We present a << text mining method >> for finding synonymous expressions based on the [[ distributional hypothesis ]] in a set of coherent corpora .,28,3
29,This paper proposes a new methodology to improve the [[ accuracy ]] of a << term aggregation system >> using each author 's text as a coherent corpus .,29,6
30,This paper proposes a new << methodology >> to improve the accuracy of a [[ term aggregation system ]] using each author 's text as a coherent corpus .,30,6
31,"Our proposed method improves the [[ accuracy ]] of our << term aggregation system >> , showing that our approach is successful .",31,6
32,"Our proposed << method >> improves the accuracy of our [[ term aggregation system ]] , showing that our approach is successful .",32,6
33,"In this work , we present a [[ technique ]] for << robust estimation >> , which by explicitly incorporating the inherent uncertainty of the estimation procedure , results in a more efficient robust estimation algorithm .",33,3
34,"In this work , we present a [[ technique ]] for robust estimation , which by explicitly incorporating the inherent uncertainty of the estimation procedure , results in a more << efficient robust estimation algorithm >> .",34,3
35,"In this work , we present a << technique >> for robust estimation , which by explicitly incorporating the [[ inherent uncertainty of the estimation procedure ]] , results in a more efficient robust estimation algorithm .",35,3
36,"The combination of these two [[ strategies ]] results in a << robust estimation procedure >> that provides a significant speed-up over existing RANSAC techniques , while requiring no prior information to guide the sampling process .",36,3
37,"The combination of these two strategies results in a << robust estimation procedure >> that provides a significant speed-up over existing [[ RANSAC techniques ]] , while requiring no prior information to guide the sampling process .",37,5
38,"In particular , our [[ algorithm ]] requires , on average , 3-10 times fewer samples than standard << RANSAC >> , which is in close agreement with theoretical predictions .",38,5
39,The efficiency of the << algorithm >> is demonstrated on a selection of [[ geometric estimation problems ]] .,39,6
40,An attempt has been made to use an [[ Augmented Transition Network ]] as a procedural << dialog model >> .,40,2
41,The development of such a model appears to be important in several respects : as a << device >> to represent and to use different [[ dialog schemata ]] proposed in empirical conversation analysis ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .,41,3
42,The development of such a model appears to be important in several respects : as a device to represent and to use different [[ dialog schemata ]] proposed in empirical << conversation analysis >> ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .,42,3
43,The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a << device >> to represent and to use [[ models ]] of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .,43,3
44,The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a device to represent and to use [[ models ]] of << verbal interaction >> ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .,44,3
45,The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about [[ dialog schemata ]] and about << verbal interaction >> with knowledge about task-oriented and goal-directed dialogs .,45,0
46,A standard [[ ATN ]] should be further developed in order to account for the << verbal interactions >> of task-oriented dialogs .,46,3
47,A standard ATN should be further developed in order to account for the [[ verbal interactions ]] of << task-oriented dialogs >> .,47,1
48,We present a practically [[ unsupervised learning method ]] to produce << single-snippet answers >> to definition questions in question answering systems that supplement Web search engines .,48,3
49,We present a practically unsupervised learning method to produce single-snippet answers to definition questions in [[ question answering systems ]] that supplement << Web search engines >> .,49,3
50,"The [[ method ]] exploits << on-line encyclopedias and dictionaries >> to generate automatically an arbitrarily large number of positive and negative definition examples , which are then used to train an svm to separate the two classes .",50,3
51,"The method exploits [[ on-line encyclopedias and dictionaries ]] to generate automatically an arbitrarily large number of << positive and negative definition examples >> , which are then used to train an svm to separate the two classes .",51,3
52,"The method exploits on-line encyclopedias and dictionaries to generate automatically an arbitrarily large number of [[ positive and negative definition examples ]] , which are then used to train an << svm >> to separate the two classes .",52,3
53,"We show experimentally that the proposed method is viable , that [[ it ]] outperforms the << alternative >> of training the system on questions and news articles from trec , and that it helps the search engine handle definition questions significantly better .",53,5
54,"We show experimentally that the proposed method is viable , that it outperforms the alternative of training the << system >> on questions and [[ news articles ]] from trec , and that it helps the search engine handle definition questions significantly better .",54,3
55,"We show experimentally that the proposed method is viable , that it outperforms the alternative of training the system on questions and [[ news articles ]] from << trec >> , and that it helps the search engine handle definition questions significantly better .",55,4
56,"We show experimentally that the proposed method is viable , that it outperforms the alternative of training the system on questions and news articles from trec , and that [[ it ]] helps the << search engine >> handle definition questions significantly better .",56,3
57,We revisit the << classical decision-theoretic problem of weighted expert voting >> from a [[ statistical learning perspective ]] .,57,3
58,"In the case of known expert competence levels , we give [[ sharp error estimates ]] for the << optimal rule >> .",58,3
59,We analyze a [[ reweighted version of the Kikuchi approximation ]] for estimating the << log partition function of a product distribution >> defined over a region graph .,59,3
60,We analyze a reweighted version of the Kikuchi approximation for estimating the [[ log partition function of a product distribution ]] defined over a << region graph >> .,60,1
61,"We establish sufficient conditions for the [[ concavity ]] of our << reweighted objective function >> in terms of weight assignments in the Kikuchi expansion , and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce global optima of the Kikuchi approximation whenever the algorithm converges .",61,1
62,"We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion , and show that a [[ reweighted version of the sum product algorithm ]] applied to the << Kikuchi region graph >> will produce global optima of the Kikuchi approximation whenever the algorithm converges .",62,3
63,"We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion , and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce [[ global optima ]] of the << Kikuchi approximation >> whenever the algorithm converges .",63,1
64,"Finally , we provide an explicit characterization of the polytope of concavity in terms of the [[ cycle structure ]] of the << region graph >> .",64,1
65,We apply a [[ decision tree based approach ]] to << pronoun resolution >> in spoken dialogue .,65,3
66,We apply a decision tree based approach to [[ pronoun resolution ]] in << spoken dialogue >> .,66,3
67,Our [[ system ]] deals with << pronouns >> with NP - and non-NP-antecedents .,67,3
68,Our system deals with << pronouns >> with [[ NP - and non-NP-antecedents ]] .,68,3
69,We present a set of [[ features ]] designed for << pronoun resolution >> in spoken dialogue and determine the most promising features .,69,3
70,We present a set of features designed for [[ pronoun resolution ]] in << spoken dialogue >> and determine the most promising features .,70,3
71,We evaluate the << system >> on twenty [[ Switchboard dialogues ]] and show that it compares well to Byron 's -LRB- 2002 -RRB- manually tuned system .,71,6
72,We evaluate the system on twenty Switchboard dialogues and show that [[ it ]] compares well to << Byron 's -LRB- 2002 -RRB- manually tuned system >> .,72,5
73,"We present a new [[ approach ]] for building an efficient and robust << classifier >> for the two class problem , that localizes objects that may appear in the image under different orien-tations .",73,3
74,"We present a new approach for building an efficient and robust [[ classifier ]] for the two << class problem >> , that localizes objects that may appear in the image under different orien-tations .",74,3
75,"In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step << approach >> with an [[ estimation stage ]] and a classification stage .",75,4
76,"In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step approach with an [[ estimation stage ]] and a << classification stage >> .",76,0
77,"In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step << approach >> with an estimation stage and a [[ classification stage ]] .",77,4
78,The estimator yields an initial set of potential << object poses >> that are then validated by the [[ classifier ]] .,78,3
79,This methodology allows reducing the [[ time complexity ]] of the << algorithm >> while classification results remain high .,79,6
80,"The << classifier >> we use in both stages is based on a [[ boosted combination of Random Ferns ]] over local histograms of oriented gradients -LRB- HOGs -RRB- , which we compute during a pre-processing step .",80,3
81,"The classifier we use in both stages is based on a << boosted combination of Random Ferns >> over [[ local histograms of oriented gradients -LRB- HOGs -RRB- ]] , which we compute during a pre-processing step .",81,1
82,"The classifier we use in both stages is based on a boosted combination of Random Ferns over << local histograms of oriented gradients -LRB- HOGs -RRB- >> , which we compute during a [[ pre-processing step ]] .",82,3
83,Both the use of [[ supervised learning ]] and working on the gradient space makes our << approach >> robust while being efficient at run-time .,83,3
84,Both the use of supervised learning and working on the [[ gradient space ]] makes our << approach >> robust while being efficient at run-time .,84,3
85,"We show these properties by thorough testing on standard databases and on a new << database >> made of [[ motorbikes under planar rotations ]] , and with challenging conditions such as cluttered backgrounds , changing illumination conditions and partial occlusions .",85,1
86,"We show these properties by thorough testing on standard databases and on a new << database >> made of motorbikes under planar rotations , and with challenging [[ conditions ]] such as cluttered backgrounds , changing illumination conditions and partial occlusions .",86,1
87,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as [[ cluttered backgrounds ]] , changing illumination conditions and partial occlusions .",87,2
88,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as [[ cluttered backgrounds ]] , << changing illumination conditions >> and partial occlusions .",88,0
89,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , [[ changing illumination conditions ]] and partial occlusions .",89,2
90,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as cluttered backgrounds , [[ changing illumination conditions ]] and << partial occlusions >> .",90,0
91,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , changing illumination conditions and [[ partial occlusions ]] .",91,2
92,A very simple improved [[ duration model ]] has reduced the error rate by about 10 % in both << triphone and semiphone systems >> .,92,3
93,A very simple improved duration model has reduced the [[ error rate ]] by about 10 % in both << triphone and semiphone systems >> .,93,6
94,"A new << training strategy >> has been tested which , by itself , did not provide useful improvements but suggests that improvements can be obtained by a related [[ rapid adaptation technique ]] .",94,3
95,"Finally , the << recognizer >> has been modified to use [[ bigram back-off language models ]] .",95,3
96,The [[ system ]] was then transferred from the << RM task >> to the ATIS CSR task and a limited number of development tests performed .,96,3
97,The [[ system ]] was then transferred from the RM task to the << ATIS CSR task >> and a limited number of development tests performed .,97,3
98,The system was then transferred from the [[ RM task ]] to the << ATIS CSR task >> and a limited number of development tests performed .,98,0
99,A new [[ approach ]] for << Interactive Machine Translation >> where the author interacts during the creation or the modification of the document is proposed .,99,3
100,This paper presents a new << interactive disambiguation scheme >> based on the [[ paraphrasing ]] of a parser 's multiple output .,100,3
101,We describe a novel [[ approach ]] to << statistical machine translation >> that combines syntactic information in the source language with recent advances in phrasal translation .,101,3
102,We describe a novel << approach >> to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in phrasal translation .,102,4
103,We describe a novel approach to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in << phrasal translation >> .,103,0
104,We describe a novel << approach >> to statistical machine translation that combines syntactic information in the source language with recent advances in [[ phrasal translation ]] .,104,4
105,"This << method >> requires a [[ source-language dependency parser ]] , target language word segmentation and an unsupervised word alignment component .",105,3
106,"This method requires a [[ source-language dependency parser ]] , << target language word segmentation >> and an unsupervised word alignment component .",106,0
107,"This << method >> requires a source-language dependency parser , [[ target language word segmentation ]] and an unsupervised word alignment component .",107,3
108,"This method requires a source-language dependency parser , [[ target language word segmentation ]] and an << unsupervised word alignment component >> .",108,0
109,"This << method >> requires a source-language dependency parser , target language word segmentation and an [[ unsupervised word alignment component ]] .",109,3
110,We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional << SMT models >> provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,110,0
111,We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional SMT models provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,111,3
112,We describe an efficient decoder and show that using these tree-based models in combination with conventional [[ SMT models ]] provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,112,3
113,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the << linguistic generality >> available in a parser .,113,0
114,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the linguistic generality available in a << parser >> .,114,3
115,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the [[ linguistic generality ]] available in a << parser >> .,115,1
116,"<< Video >> provides not only rich [[ visual cues ]] such as motion and appearance , but also much less explored long-range temporal interactions among objects .",116,1
117,"Video provides not only rich << visual cues >> such as [[ motion ]] and appearance , but also much less explored long-range temporal interactions among objects .",117,2
118,"Video provides not only rich visual cues such as [[ motion ]] and << appearance >> , but also much less explored long-range temporal interactions among objects .",118,0
119,"Video provides not only rich << visual cues >> such as motion and [[ appearance ]] , but also much less explored long-range temporal interactions among objects .",119,2
120,We aim to capture such interactions and to construct a powerful [[ intermediate-level video representation ]] for subsequent << recognition >> .,120,3
121,"First , we develop an efficient << spatio-temporal video segmentation algorithm >> , which naturally incorporates [[ long-range motion cues ]] from the past and future frames in the form of clusters of point tracks with coherent motion .",121,3
122,"First , we develop an efficient spatio-temporal video segmentation algorithm , which naturally incorporates << long-range motion cues >> from the past and future frames in the form of [[ clusters of point tracks ]] with coherent motion .",122,3
123,"Second , we devise a new << track clustering cost function >> that includes [[ occlusion reasoning ]] , in the form of depth ordering constraints , as well as motion similarity along the tracks .",123,4
124,"Second , we devise a new track clustering cost function that includes << occlusion reasoning >> , in the form of [[ depth ordering constraints ]] , as well as motion similarity along the tracks .",124,1
125,"Second , we devise a new << track clustering cost function >> that includes occlusion reasoning , in the form of depth ordering constraints , as well as [[ motion similarity ]] along the tracks .",125,4
126,We evaluate the proposed << approach >> on a challenging set of [[ video sequences of office scenes ]] from feature length movies .,126,6
127,"In this paper , we introduce [[ KAZE features ]] , a novel << multiscale 2D feature detection and description algorithm >> in nonlinear scale spaces .",127,2
128,"In this paper , we introduce KAZE features , a novel << multiscale 2D feature detection and description algorithm >> in [[ nonlinear scale spaces ]] .",128,1
129,"In contrast , we detect and describe << 2D features >> in a [[ nonlinear scale space ]] by means of nonlinear diffusion filtering .",129,1
130,"In contrast , we detect and describe << 2D features >> in a nonlinear scale space by means of [[ nonlinear diffusion filtering ]] .",130,3
131,The << nonlinear scale space >> is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and variable con-ductance diffusion .,131,3
132,The nonlinear scale space is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and << variable con-ductance diffusion >> .,132,0
133,The << nonlinear scale space >> is built using efficient Additive Operator Splitting -LRB- AOS -RRB- techniques and [[ variable con-ductance diffusion ]] .,133,3
134,"Even though our [[ features ]] are somewhat more expensive to compute than << SURF >> due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .",134,5
135,"Even though our [[ features ]] are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to << SIFT >> , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .",135,5
136,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our [[ results ]] reveal a step forward in performance both in detection and description against previous << state-of-the-art methods >> .",136,5
137,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in [[ detection ]] and description against previous state-of-the-art methods .",137,6
138,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and << description >> against previous state-of-the-art methods .",138,0
139,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and description against previous << state-of-the-art methods >> .",139,6
140,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in detection and [[ description ]] against previous state-of-the-art methods .",140,6
141,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and [[ description ]] against previous << state-of-the-art methods >> .",141,6
142,[[ Creating summaries ]] on lengthy Semantic Web documents for quick << identification of the corresponding entity >> has been of great contemporary interest .,142,3
143,<< Creating summaries >> on [[ lengthy Semantic Web documents ]] for quick identification of the corresponding entity has been of great contemporary interest .,143,3
144,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : [[ diversity ]] , uniqueness , and popularity .",144,1
145,"Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : [[ diversity ]] , << uniqueness >> , and popularity .",145,0
146,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , [[ uniqueness ]] , and popularity .",146,1
147,"Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : diversity , [[ uniqueness ]] , and << popularity >> .",147,0
148,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , uniqueness , and [[ popularity ]] .",148,1
149,"Our novel << diversity-aware entity summarization approach >> mimics [[ human conceptual clustering techniques ]] to group facts , and picks representative facts from each group to form concise -LRB- i.e. , short -RRB- and comprehensive -LRB- i.e. , improved coverage through diversity -RRB- summaries .",149,3
150,We evaluate our [[ approach ]] against the state-of-the-art techniques and show that our work improves both the quality and the efficiency of << entity summarization >> .,150,3
151,We evaluate our << approach >> against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of entity summarization .,151,5
152,We evaluate our approach against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of << entity summarization >> .,152,3
153,We evaluate our approach against the state-of-the-art techniques and show that our work improves both the [[ quality ]] and the efficiency of << entity summarization >> .,153,6
154,We evaluate our approach against the state-of-the-art techniques and show that our work improves both the quality and the [[ efficiency ]] of << entity summarization >> .,154,6
155,We present a [[ framework ]] for the << fast computation of lexical affinity models >> .,155,3
156,"The << framework >> is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an independence model , and a parametric affinity model .",156,4
157,"The framework is composed of a novel [[ algorithm ]] to efficiently compute the << co-occurrence distribution >> between pairs of terms , an independence model , and a parametric affinity model .",157,3
158,"The framework is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an << independence model >> , and a parametric affinity model .",158,0
159,"The << framework >> is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a parametric affinity model .",159,4
160,"The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a << parametric affinity model >> .",160,0
|
79,This methodology allows reducing the [[ time complexity ]] of the << algorithm >> while classification results remain high .,79,6 |
|
80,"The << classifier >> we use in both stages is based on a [[ boosted combination of Random Ferns ]] over local histograms of oriented gradients -LRB- HOGs -RRB- , which we compute during a pre-processing step .",80,3 |
|
81,"The classifier we use in both stages is based on a << boosted combination of Random Ferns >> over [[ local histograms of oriented gradients -LRB- HOGs -RRB- ]] , which we compute during a pre-processing step .",81,1 |
|
82,"The classifier we use in both stages is based on a boosted combination of Random Ferns over << local histograms of oriented gradients -LRB- HOGs -RRB- >> , which we compute during a [[ pre-processing step ]] .",82,3 |
|
83,Both the use of [[ supervised learning ]] and working on the gradient space makes our << approach >> robust while being efficient at run-time .,83,3 |
|
84,Both the use of supervised learning and working on the [[ gradient space ]] makes our << approach >> robust while being efficient at run-time .,84,3 |
|
85,"We show these properties by thorough testing on standard databases and on a new << database >> made of [[ motorbikes under planar rotations ]] , and with challenging conditions such as cluttered backgrounds , changing illumination conditions and partial occlusions .",85,1 |
|
86,"We show these properties by thorough testing on standard databases and on a new << database >> made of motorbikes under planar rotations , and with challenging [[ conditions ]] such as cluttered backgrounds , changing illumination conditions and partial occlusions .",86,1 |
|
87,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as [[ cluttered backgrounds ]] , changing illumination conditions and partial occlusions .",87,2 |
|
88,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as [[ cluttered backgrounds ]] , << changing illumination conditions >> and partial occlusions .",88,0 |
|
89,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , [[ changing illumination conditions ]] and partial occlusions .",89,2 |
|
90,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as cluttered backgrounds , [[ changing illumination conditions ]] and << partial occlusions >> .",90,0 |
|
91,"We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , changing illumination conditions and [[ partial occlusions ]] .",91,2 |
|
92,A very simple improved [[ duration model ]] has reduced the error rate by about 10 % in both << triphone and semiphone systems >> .,92,3 |
|
93,A very simple improved duration model has reduced the [[ error rate ]] by about 10 % in both << triphone and semiphone systems >> .,93,6 |
|
94,"A new << training strategy >> has been tested which , by itself , did not provide useful improvements but suggests that improvements can be obtained by a related [[ rapid adaptation technique ]] .",94,3 |
|
95,"Finally , the << recognizer >> has been modified to use [[ bigram back-off language models ]] .",95,3 |
|
96,The [[ system ]] was then transferred from the << RM task >> to the ATIS CSR task and a limited number of development tests performed .,96,3 |
|
97,The [[ system ]] was then transferred from the RM task to the << ATIS CSR task >> and a limited number of development tests performed .,97,3 |
|
98,The system was then transferred from the [[ RM task ]] to the << ATIS CSR task >> and a limited number of development tests performed .,98,0 |
|
99,A new [[ approach ]] for << Interactive Machine Translation >> where the author interacts during the creation or the modification of the document is proposed .,99,3 |
|
100,This paper presents a new << interactive disambiguation scheme >> based on the [[ paraphrasing ]] of a parser 's multiple output .,100,3 |
|
101,We describe a novel [[ approach ]] to << statistical machine translation >> that combines syntactic information in the source language with recent advances in phrasal translation .,101,3 |
|
102,We describe a novel << approach >> to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in phrasal translation .,102,4 |
|
103,We describe a novel approach to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in << phrasal translation >> .,103,0 |
|
104,We describe a novel << approach >> to statistical machine translation that combines syntactic information in the source language with recent advances in [[ phrasal translation ]] .,104,4 |
|
105,"This << method >> requires a [[ source-language dependency parser ]] , target language word segmentation and an unsupervised word alignment component .",105,3 |
|
106,"This method requires a [[ source-language dependency parser ]] , << target language word segmentation >> and an unsupervised word alignment component .",106,0 |
|
107,"This << method >> requires a source-language dependency parser , [[ target language word segmentation ]] and an unsupervised word alignment component .",107,3 |
|
108,"This method requires a source-language dependency parser , [[ target language word segmentation ]] and an << unsupervised word alignment component >> .",108,0 |
|
109,"This << method >> requires a source-language dependency parser , target language word segmentation and an [[ unsupervised word alignment component ]] .",109,3 |
|
110,We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional << SMT models >> provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,110,0 |
|
111,We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional SMT models provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,111,3 |
|
112,We describe an efficient decoder and show that using these tree-based models in combination with conventional [[ SMT models ]] provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .,112,3 |
|
113,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the << linguistic generality >> available in a parser .,113,0 |
|
114,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the linguistic generality available in a << parser >> .,114,3 |
|
115,We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the [[ linguistic generality ]] available in a << parser >> .,115,1 |
|
116,"<< Video >> provides not only rich [[ visual cues ]] such as motion and appearance , but also much less explored long-range temporal interactions among objects .",116,1 |
|
117,"Video provides not only rich << visual cues >> such as [[ motion ]] and appearance , but also much less explored long-range temporal interactions among objects .",117,2 |
|
118,"Video provides not only rich visual cues such as [[ motion ]] and << appearance >> , but also much less explored long-range temporal interactions among objects .",118,0 |
|
119,"Video provides not only rich << visual cues >> such as motion and [[ appearance ]] , but also much less explored long-range temporal interactions among objects .",119,2 |
|
120,We aim to capture such interactions and to construct a powerful [[ intermediate-level video representation ]] for subsequent << recognition >> .,120,3 |
|
121,"First , we develop an efficient << spatio-temporal video segmentation algorithm >> , which naturally incorporates [[ long-range motion cues ]] from the past and future frames in the form of clusters of point tracks with coherent motion .",121,3 |
|
122,"First , we develop an efficient spatio-temporal video segmentation algorithm , which naturally incorporates << long-range motion cues >> from the past and future frames in the form of [[ clusters of point tracks ]] with coherent motion .",122,3 |
|
123,"Second , we devise a new << track clustering cost function >> that includes [[ occlusion reasoning ]] , in the form of depth ordering constraints , as well as motion similarity along the tracks .",123,4 |
|
124,"Second , we devise a new track clustering cost function that includes << occlusion reasoning >> , in the form of [[ depth ordering constraints ]] , as well as motion similarity along the tracks .",124,1 |
|
125,"Second , we devise a new << track clustering cost function >> that includes occlusion reasoning , in the form of depth ordering constraints , as well as [[ motion similarity ]] along the tracks .",125,4 |
|
126,We evaluate the proposed << approach >> on a challenging set of [[ video sequences of office scenes ]] from feature length movies .,126,6 |
|
127,"In this paper , we introduce [[ KAZE features ]] , a novel << multiscale 2D feature detection and description algorithm >> in nonlinear scale spaces .",127,2 |
|
128,"In this paper , we introduce KAZE features , a novel << multiscale 2D feature detection and description algorithm >> in [[ nonlinear scale spaces ]] .",128,1 |
|
129,"In contrast , we detect and describe << 2D features >> in a [[ nonlinear scale space ]] by means of nonlinear diffusion filtering .",129,1 |
|
130,"In contrast , we detect and describe << 2D features >> in a nonlinear scale space by means of [[ nonlinear diffusion filtering ]] .",130,3 |
|
131,The << nonlinear scale space >> is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and variable con-ductance diffusion .,131,3 |
|
132,The nonlinear scale space is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and << variable con-ductance diffusion >> .,132,0 |
|
133,The << nonlinear scale space >> is built using efficient Additive Operator Splitting -LRB- AOS -RRB- techniques and [[ variable con-ductance diffusion ]] .,133,3 |
|
134,"Even though our [[ features ]] are somewhat more expensive to compute than << SURF >> due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .",134,5 |
|
135,"Even though our [[ features ]] are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to << SIFT >> , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .",135,5 |
|
136,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our [[ results ]] reveal a step forward in performance both in detection and description against previous << state-of-the-art methods >> .",136,5 |
|
137,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in [[ detection ]] and description against previous state-of-the-art methods .",137,6 |
|
138,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and << description >> against previous state-of-the-art methods .",138,0 |
|
139,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and description against previous << state-of-the-art methods >> .",139,6 |
|
140,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in detection and [[ description ]] against previous state-of-the-art methods .",140,6 |
|
141,"Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and [[ description ]] against previous << state-of-the-art methods >> .",141,6 |
|
142,[[ Creating summaries ]] on lengthy Semantic Web documents for quick << identification of the corresponding entity >> has been of great contemporary interest .,142,3 |
|
143,<< Creating summaries >> on [[ lengthy Semantic Web documents ]] for quick identification of the corresponding entity has been of great contemporary interest .,143,3 |
|
144,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : [[ diversity ]] , uniqueness , and popularity .",144,1 |
|
145,"Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : [[ diversity ]] , << uniqueness >> , and popularity .",145,0 |
|
146,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , [[ uniqueness ]] , and popularity .",146,1 |
|
147,"Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : diversity , [[ uniqueness ]] , and << popularity >> .",147,0 |
|
148,"Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , uniqueness , and [[ popularity ]] .",148,1 |
|
149,"Our novel << diversity-aware entity summarization approach >> mimics [[ human conceptual clustering techniques ]] to group facts , and picks representative facts from each group to form concise -LRB- i.e. , short -RRB- and comprehensive -LRB- i.e. , improved coverage through diversity -RRB- summaries .",149,3 |
|
150,We evaluate our [[ approach ]] against the state-of-the-art techniques and show that our work improves both the quality and the efficiency of << entity summarization >> .,150,3 |
|
151,We evaluate our << approach >> against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of entity summarization .,151,5 |
|
152,We evaluate our approach against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of << entity summarization >> .,152,3 |
|
153,We evaluate our approach against the state-of-the-art techniques and show that our work improves both the [[ quality ]] and the efficiency of << entity summarization >> .,153,6 |
|
154,We evaluate our approach against the state-of-the-art techniques and show that our work improves both the quality and the [[ efficiency ]] of << entity summarization >> .,154,6 |
|
155,We present a [[ framework ]] for the << fast computation of lexical affinity models >> .,155,3 |
|
156,"The << framework >> is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an independence model , and a parametric affinity model .",156,4 |
|
157,"The framework is composed of a novel [[ algorithm ]] to efficiently compute the << co-occurrence distribution >> between pairs of terms , an independence model , and a parametric affinity model .",157,3 |
|
158,"The framework is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an << independence model >> , and a parametric affinity model .",158,0 |
|
159,"The << framework >> is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a parametric affinity model .",159,4 |
|
160,"The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a << parametric affinity model >> .",160,0 |
|
161,"The << framework >> is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an independence model , and a [[ parametric affinity model ]] .",161,4 |
|
162,"In comparison with previous models , which either use arbitrary windows to compute similarity between words or use [[ lexical affinity ]] to create << sequential models >> , in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus .",162,3 |
|
163,"In comparison with previous << models >> , which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models , in this paper we focus on [[ models ]] intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus .",163,5 |
|
164,"In comparison with previous models , which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models , in this paper we focus on [[ models ]] intended to capture the << co-occurrence patterns >> of any pair of words or phrases at any distance in the corpus .",164,3 |
|
165,"We apply [[ it ]] in combination with a terabyte corpus to answer << natural language tests >> , achieving encouraging results .",165,3 |
|
166,"We apply << it >> in combination with a [[ terabyte corpus ]] to answer natural language tests , achieving encouraging results .",166,6 |
|
167,This paper introduces a [[ system ]] for << categorizing unknown words >> .,167,3 |
|
168,The << system >> is based on a [[ multi-component architecture ]] where each component is responsible for identifying one class of unknown words .,168,3 |
|
169,The system is based on a << multi-component architecture >> where each [[ component ]] is responsible for identifying one class of unknown words .,169,4 |
|
170,The system is based on a multi-component architecture where each [[ component ]] is responsible for identifying one class of << unknown words >> .,170,3 |
|
171,The focus of this paper is the [[ components ]] that identify << names >> and spelling errors .,171,3 |
|
172,The focus of this paper is the [[ components ]] that identify names and << spelling errors >> .,172,3 |
|
173,The focus of this paper is the components that identify [[ names ]] and << spelling errors >> .,173,0 |
|
174,Each << component >> uses a [[ decision tree architecture ]] to combine multiple types of evidence about the unknown word .,174,3 |
|
175,The << system >> is evaluated using data from [[ live closed captions ]] - a genre replete with a wide variety of unknown words .,175,6 |
|
176,"At MIT Lincoln Laboratory , we have been developing a << Korean-to-English machine translation system >> [[ CCLINC -LRB- Common Coalition Language System at Lincoln Laboratory -RRB- ]] .",176,2 |
|
177,"The << CCLINC Korean-to-English translation system >> consists of two [[ core modules ]] , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame .",177,4 |
|
178,"The CCLINC Korean-to-English translation system consists of two core modules , << language understanding and generation modules >> mediated by a [[ language neutral meaning representation ]] called a semantic frame .",178,3 |
|
179,"The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a << language neutral meaning representation >> called a [[ semantic frame ]] .",179,2 |
|
180,"The key features of the system include : -LRB- i -RRB- Robust efficient parsing of [[ Korean ]] -LRB- a << verb final language >> with overt case markers , relatively free word order , and frequent omissions of arguments -RRB- .",180,2 |
|
181,"The key features of the system include : -LRB- i -RRB- Robust efficient parsing of Korean -LRB- a << verb final language >> with [[ overt case markers ]] , relatively free word order , and frequent omissions of arguments -RRB- .",181,1 |
|
182,-LRB- ii -RRB- High quality << translation >> via [[ word sense disambiguation ]] and accurate word order generation of the target language .,182,3 |
|
183,-LRB- ii -RRB- High quality translation via [[ word sense disambiguation ]] and accurate << word order generation >> of the target language .,183,0 |
|
184,-LRB- ii -RRB- High quality << translation >> via word sense disambiguation and accurate [[ word order generation ]] of the target language .,184,3 |
|
185,"Having been trained on [[ Korean newspaper articles ]] on missiles and chemical biological warfare , the << system >> produces the translation output sufficient for content understanding of the original document .",185,3 |
|
186,"Having been trained on << Korean newspaper articles >> on [[ missiles and chemical biological warfare ]] , the system produces the translation output sufficient for content understanding of the original document .",186,1 |
|
187,"The [[ JAVELIN system ]] integrates a flexible , planning-based architecture with a variety of language processing modules to provide an << open-domain question answering capability >> on free text .",187,3 |
|
188,"The << JAVELIN system >> integrates a flexible , [[ planning-based architecture ]] with a variety of language processing modules to provide an open-domain question answering capability on free text .",188,4 |
|
189,"The << JAVELIN system >> integrates a flexible , planning-based architecture with a variety of [[ language processing modules ]] to provide an open-domain question answering capability on free text .",189,4 |
|
190,"The JAVELIN system integrates a flexible , << planning-based architecture >> with a variety of [[ language processing modules ]] to provide an open-domain question answering capability on free text .",190,0 |
|
191,We present the first application of the [[ head-driven statistical parsing model ]] of Collins -LRB- 1999 -RRB- as a << simultaneous language model >> and parser for large-vocabulary speech recognition .,191,3 |
|
192,We present the first application of the [[ head-driven statistical parsing model ]] of Collins -LRB- 1999 -RRB- as a simultaneous language model and << parser >> for large-vocabulary speech recognition .,192,3 |
|
193,We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a [[ simultaneous language model ]] and << parser >> for large-vocabulary speech recognition .,193,0 |
|
194,We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a [[ simultaneous language model ]] and parser for << large-vocabulary speech recognition >> .,194,3 |
|
195,We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a simultaneous language model and [[ parser ]] for << large-vocabulary speech recognition >> .,195,3 |
|
196,"The [[ model ]] is adapted to an << online left to right chart-parser >> for word lattices , integrating acoustic , n-gram , and parser probabilities .",196,3 |
|
197,"The model is adapted to an [[ online left to right chart-parser ]] for << word lattices >> , integrating acoustic , n-gram , and parser probabilities .",197,3 |
|
198,"The model is adapted to an << online left to right chart-parser >> for word lattices , integrating [[ acoustic , n-gram , and parser probabilities ]] .",198,4 |
|
199,"The << parser >> uses [[ structural and lexical dependencies ]] not considered by n-gram models , conditioning recognition on more linguistically-grounded relationships .",199,3 |
|
200,Experiments on the [[ Wall Street Journal treebank ]] and << lattice corpora >> show word error rates competitive with the standard n-gram language model while extracting additional structural information useful for speech understanding .,200,0 |
|
201,Experiments on the [[ Wall Street Journal treebank ]] and lattice corpora show word error rates competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .,201,6 |
|
202,Experiments on the Wall Street Journal treebank and [[ lattice corpora ]] show word error rates competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .,202,6 |
|
203,Experiments on the Wall Street Journal treebank and lattice corpora show [[ word error rates ]] competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .,203,6 |
|
204,Experiments on the Wall Street Journal treebank and lattice corpora show word error rates competitive with the standard n-gram language model while extracting additional [[ structural information ]] useful for << speech understanding >> .,204,3 |
|
205,[[ Image composition -LRB- or mosaicing -RRB- ]] has attracted growing attention in recent years as one of the main elements in << video analysis and representation >> .,205,4 |
|
206,In this paper we deal with the problem of [[ global alignment ]] and << super-resolution >> .,206,0 |
|
207,We also propose to evaluate the quality of the resulting << mosaic >> by measuring the [[ amount of blurring ]] .,207,6 |
|
208,<< Global registration >> is achieved by combining a [[ graph-based technique ]] -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps .,208,3 |
|
209,Global registration is achieved by combining a [[ graph-based technique ]] -- that exploits the << topological structure >> of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps .,209,3 |
|
210,Global registration is achieved by combining a [[ graph-based technique ]] -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a << bundle adjustment >> which uses only the homographies computed in the previous steps .,210,0 |
|
211,<< Global registration >> is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a [[ bundle adjustment ]] which uses only the homographies computed in the previous steps .,211,3 |
|
212,Global registration is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a << bundle adjustment >> which uses only the [[ homographies ]] computed in the previous steps .,212,3 |
|
213,Experimental comparison with other << techniques >> shows the effectiveness of our [[ approach ]] .,213,5 |
|
214,The main aim of this project is << computer-assisted acquisition and morpho-syntactic description of verb-noun collocations >> in [[ Polish ]] .,214,3 |
|
215,"We present methodology and resources obtained in three main project << phases >> which are : [[ dictionary-based acquisition of collocation lexicon ]] , feasibility study for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .",215,2 |
|
216,"We present methodology and resources obtained in three main project phases which are : [[ dictionary-based acquisition of collocation lexicon ]] , << feasibility study >> for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .",216,0 |
|
217,"We present methodology and resources obtained in three main project << phases >> which are : dictionary-based acquisition of collocation lexicon , [[ feasibility study ]] for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .",217,2 |
|
218,"We present methodology and resources obtained in three main project phases which are : dictionary-based acquisition of collocation lexicon , [[ feasibility study ]] for << corpus-based lexicon enlargement phase >> , corpus-based lexicon enlargement and collocation description .",218,3 |
|
219,"We present methodology and resources obtained in three main project << phases >> which are : dictionary-based acquisition of collocation lexicon , feasibility study for corpus-based lexicon enlargement phase , [[ corpus-based lexicon enlargement and collocation description ]] .",219,2 |
|
220,"We present methodology and resources obtained in three main project phases which are : dictionary-based acquisition of collocation lexicon , << feasibility study >> for corpus-based lexicon enlargement phase , [[ corpus-based lexicon enlargement and collocation description ]] .",220,0 |
|
221,The [[ corpus-based approach ]] presented here permitted us to triple the size of the << verb-noun collocation dictionary >> for Polish .,221,3 |
|
222,The corpus-based approach presented here permitted us to triple the size of the << verb-noun collocation dictionary >> for [[ Polish ]] .,222,1 |
|
223,"Along with the increasing requirements , the [[ hash-tag recommendation task ]] for << microblogs >> has been receiving considerable attention in recent years .",223,3 |
|
224,"Motivated by the successful use of [[ convolutional neural networks -LRB- CNNs -RRB- ]] for many << natural language processing tasks >> , in this paper , we adopt CNNs to perform the hashtag recommendation problem .",224,3 |
|
225,"To incorporate the << trigger words >> whose effectiveness has been experimentally evaluated in several previous works , we propose a novel [[ architecture ]] with an attention mechanism .",225,3 |
|
226,"To incorporate the trigger words whose effectiveness has been experimentally evaluated in several previous works , we propose a novel << architecture >> with an [[ attention mechanism ]] .",226,1 |
|
227,The results of experiments on the [[ data ]] collected from a real world microblogging service demonstrated that the proposed << model >> outperforms state-of-the-art methods .,227,6 |
|
228,The results of experiments on the data collected from a real world microblogging service demonstrated that the proposed [[ model ]] outperforms << state-of-the-art methods >> .,228,5 |
|
229,"By incorporating trigger words into consideration , the relative improvement of the proposed [[ method ]] over the << state-of-the-art method >> is around 9.4 % in the F1-score .",229,5 |
|
230,"By incorporating trigger words into consideration , the relative improvement of the proposed method over the << state-of-the-art method >> is around 9.4 % in the [[ F1-score ]] .",230,6 |
|
231,"In this paper , we improve an << unsupervised learning method >> using the [[ Expectation-Maximization -LRB- EM -RRB- algorithm ]] proposed by Nigam et al. for text classification problems in order to apply it to word sense disambiguation -LRB- WSD -RRB- problems .",231,3 |
|
232,"In this paper , we improve an unsupervised learning method using the [[ Expectation-Maximization -LRB- EM -RRB- algorithm ]] proposed by Nigam et al. for << text classification problems >> in order to apply it to word sense disambiguation -LRB- WSD -RRB- problems .",232,3 |
|
233,"In this paper , we improve an unsupervised learning method using the Expectation-Maximization -LRB- EM -RRB- algorithm proposed by Nigam et al. for text classification problems in order to apply [[ it ]] to << word sense disambiguation -LRB- WSD -RRB- problems >> .",233,3 |
|
234,"In experiments , we solved 50 noun WSD problems in the [[ Japanese Dictionary Task ]] in << SENSEVAL2 >> .",234,1 |
|
235,"Furthermore , our [[ methods ]] were confirmed to be effective also for << verb WSD problems >> .",235,3 |
|
236,"[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for << parsing >> , information extraction and information retrieval .",236,3 |
|
237,"[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for parsing , << information extraction >> and information retrieval .",237,3 |
|
238,"[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for parsing , information extraction and << information retrieval >> .",238,3 |
|
239,"Dividing sentences in chunks of words is a useful preprocessing step for [[ parsing ]] , << information extraction >> and information retrieval .",239,0 |
|
240,"Dividing sentences in chunks of words is a useful preprocessing step for parsing , [[ information extraction ]] and << information retrieval >> .",240,0 |
|
241,"-LRB- Ramshaw and Marcus , 1995 -RRB- have introduced a `` convenient '' [[ data representation ]] for << chunking >> by converting it to a tagging task .",241,3 |
|
242,In this paper we will examine seven different [[ data representations ]] for the problem of << recognizing noun phrase chunks >> .,242,3 |
|
243,"However , equipped with the most suitable [[ data representation ]] , our << memory-based learning chunker >> was able to improve the best published chunking results for a standard data set .",243,3 |
|
244,"However , equipped with the most suitable data representation , our << memory-based learning chunker >> was able to improve the best published chunking results for a standard [[ data set ]] .",244,6 |
|
245,"We focus on << FAQ-like questions and answers >> , and build our [[ system ]] around a noisy-channel architecture which exploits both a language model for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .",245,3 |
|
246,"We focus on FAQ-like questions and answers , and build our << system >> around a [[ noisy-channel architecture ]] which exploits both a language model for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .",246,3 |
|
247,"We focus on FAQ-like questions and answers , and build our system around a [[ noisy-channel architecture ]] which exploits both a << language model >> for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .",247,3 |
|
248,"We focus on FAQ-like questions and answers , and build our system around a [[ noisy-channel architecture ]] which exploits both a language model for answers and a << transformation model >> for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .",248,3 |
|
249,In this paper we evaluate four objective [[ measures of speech ]] with regard to << intelligibility prediction >> of synthesized speech in diverse noisy situations .,249,6 |

250,In this paper we evaluate four objective measures of speech with regard to << intelligibility prediction >> of [[ synthesized speech ]] in diverse noisy situations .,250,3 |

251,In this paper we evaluate four objective measures of speech with regard to intelligibility prediction of << synthesized speech >> in [[ diverse noisy situations ]] .,251,1 |
|
252,"We evaluated three [[ intelligibility measures ]] , the Dau measure , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a << quality measure >> , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",252,0 |

253,"We evaluated three << intelligibility measures >> , the [[ Dau measure ]] , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",253,2 |

254,"We evaluated three intelligibility measures , the [[ Dau measure ]] , the << glimpse proportion >> and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",254,0 |

255,"We evaluated three << intelligibility measures >> , the Dau measure , the [[ glimpse proportion ]] and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",255,2 |

256,"We evaluated three intelligibility measures , the Dau measure , the [[ glimpse proportion ]] and the << Speech Intelligibility Index -LRB- SII -RRB- >> and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",256,0 |

257,"We evaluated three << intelligibility measures >> , the Dau measure , the glimpse proportion and the [[ Speech Intelligibility Index -LRB- SII -RRB- ]] and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .",257,2 |

258,"We evaluated three intelligibility measures , the Dau measure , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a << quality measure >> , the [[ Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- ]] .",258,2 |
|
259,For the << generation of synthesized speech >> we used a state of the art [[ HMM-based speech synthesis system ]] .,259,3 |
|
260,The << noisy conditions >> comprised four [[ additive noises ]] .,260,4 |
|
261,The [[ measures ]] were compared with << subjective intelligibility scores >> obtained in listening tests .,261,5 |
|
262,"The results show the [[ Dau ]] and the << glimpse measures >> to be the best predictors of intelligibility , with correlations of around 0.83 to subjective scores .",262,0 |
|
263,"The results show the [[ Dau ]] and the glimpse measures to be the best << predictors of intelligibility >> , with correlations of around 0.83 to subjective scores .",263,2 |
|
264,"The results show the [[ Dau ]] and the glimpse measures to be the best predictors of intelligibility , with correlations of around 0.83 to << subjective scores >> .",264,5 |
|
265,"The results show the Dau and the [[ glimpse measures ]] to be the best << predictors of intelligibility >> , with correlations of around 0.83 to subjective scores .",265,2 |
|
266,"The results show the Dau and the [[ glimpse measures ]] to be the best predictors of intelligibility , with correlations of around 0.83 to << subjective scores >> .",266,5 |
|
267,"The results show the << Dau >> and the glimpse measures to be the best predictors of intelligibility , with [[ correlations ]] of around 0.83 to subjective scores .",267,6 |
|
268,"The results show the Dau and the << glimpse measures >> to be the best predictors of intelligibility , with [[ correlations ]] of around 0.83 to subjective scores .",268,6 |
|
269,All [[ measures ]] gave less accurate << predictions of intelligibility >> for synthetic speech than have previously been found for natural speech ; in particular the SII measure .,269,6 |
|
270,All measures gave less accurate << predictions of intelligibility >> for [[ synthetic speech ]] than have previously been found for natural speech ; in particular the SII measure .,270,3 |
|
271,All measures gave less accurate predictions of intelligibility for [[ synthetic speech ]] than have previously been found for << natural speech >> ; in particular the SII measure .,271,5 |
|
272,All << measures >> gave less accurate predictions of intelligibility for synthetic speech than have previously been found for natural speech ; in particular the [[ SII measure ]] .,272,2 |
|
273,"In additional experiments , we processed the << synthesized speech >> by an [[ ideal binary mask ]] before adding noise .",273,3 |
|
274,The [[ Glimpse measure ]] gave the most accurate << intelligibility predictions >> in this situation .,274,3 |
|
275,"A [[ '' graphics for vision '' approach ]] is proposed to address the problem of << reconstruction >> from a large and imperfect data set : reconstruction on demand by tensor voting , or ROD-TV .",275,3 |
|
276,"A '' graphics for vision '' approach is proposed to address the problem of << reconstruction >> from a [[ large and imperfect data set ]] : reconstruction on demand by tensor voting , or ROD-TV .",276,3 |
|
277,"A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : << reconstruction >> on demand by [[ tensor voting ]] , or ROD-TV .",277,3 |
|
278,"A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : reconstruction on demand by [[ tensor voting ]] , or << ROD-TV >> .",278,0 |
|
279,"A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : << reconstruction >> on demand by tensor voting , or [[ ROD-TV ]] .",279,3 |
|
280,"<< ROD-TV >> simultaneously delivers good [[ efficiency ]] and robustness , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .",280,6 |

281,"<< ROD-TV >> simultaneously delivers good efficiency and [[ robustness ]] , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .",281,6 |

282,"ROD-TV simultaneously delivers good << efficiency >> and [[ robustness ]] , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .",282,0 |
|
283,"ROD-TV simultaneously delivers good efficiency and robust-ness , by adapting to a continuum of << primitive connectivity >> , [[ view dependence ]] , and levels of detail -LRB- LOD -RRB- .",283,0 |
|
284,"ROD-TV simultaneously delivers good efficiency and robust-ness , by adapting to a continuum of primitive connectivity , << view dependence >> , and [[ levels of detail -LRB- LOD -RRB- ]] .",284,0 |
|
285,[[ Locally inferred surface elements ]] are robust to noise and better capture << local shapes >> .,285,3 |
|
286,"By inferring [[ per-vertex normals ]] at sub-voxel precision on the fly , we can achieve << interpolative shading >> .",286,3 |
|
287,"By inferring << per-vertex normals >> at [[ sub-voxel precision ]] on the fly , we can achieve interpolative shading .",287,1 |
|
288,"By relaxing the [[ mesh connectivity requirement ]] , we extend ROD-TV and propose a simple but effective << multiscale feature extraction algorithm >> .",288,3 |
|
289,"By relaxing the mesh connectivity requirement , we extend [[ ROD-TV ]] and propose a simple but effective << multiscale feature extraction algorithm >> .",289,3 |
|
290,<< ROD-TV >> consists of a [[ hierarchical data structure ]] that encodes different levels of detail .,290,4 |
|
291,The << local reconstruction algorithm >> is [[ tensor voting ]] .,291,2 |
|
292,"<< It >> is applied on demand to the visible subset of data at a desired level of detail , by [[ traversing the data hierarchy ]] and collecting tensorial support in a neighborhood .",292,3 |
|
293,"It is applied on demand to the visible subset of data at a desired level of detail , by [[ traversing the data hierarchy ]] and << collecting tensorial support >> in a neighborhood .",293,0 |
|
294,"<< It >> is applied on demand to the visible subset of data at a desired level of detail , by traversing the data hierarchy and [[ collecting tensorial support ]] in a neighborhood .",294,3 |
|
295,Both [[ rhetorical structure ]] and << punctuation >> have been helpful in discourse processing .,295,0 |
|
296,Both [[ rhetorical structure ]] and punctuation have been helpful in << discourse processing >> .,296,3 |
|
297,Both rhetorical structure and [[ punctuation ]] have been helpful in << discourse processing >> .,297,3 |
|
298,"Based on a corpus annotation project , this paper reports the discursive usage of 6 [[ Chinese punctuation marks ]] in << news commentary texts >> : Colon , Dash , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .",298,4 |
|
299,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : [[ Colon ]] , Dash , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .",299,2 |
|
300,"Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : [[ Colon ]] , << Dash >> , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .",300,0 |
|
301,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , [[ Dash ]] , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .",301,2 |
|
302,"Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , [[ Dash ]] , << Ellipsis >> , Exclamation Mark , Question Mark , and Semicolon .",302,0 |
|
303,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , [[ Ellipsis ]] , Exclamation Mark , Question Mark , and Semicolon .",303,2 |
|
304,"Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , [[ Ellipsis ]] , << Exclamation Mark >> , Question Mark , and Semicolon .",304,0 |
|
305,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , [[ Exclamation Mark ]] , Question Mark , and Semicolon .",305,2 |
|
306,"Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , Ellipsis , [[ Exclamation Mark ]] , << Question Mark >> , and Semicolon .",306,0 |
|
307,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , [[ Question Mark ]] , and Semicolon .",307,2 |
|
308,"Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , [[ Question Mark ]] , and << Semicolon >> .",308,0 |
|
309,"Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , Question Mark , and [[ Semicolon ]] .",309,2 |
|
310,The [[ rhetorical patterns ]] of these << marks >> are compared against patterns around cue phrases in general .,310,1 |
|
311,The [[ rhetorical patterns ]] of these marks are compared against << patterns around cue phrases >> in general .,311,5 |
|
312,"Results show that these [[ Chinese punctuation marks ]] , though fewer in number than << cue phrases >> , are easy to identify , have strong correlation with certain relations , and can be used as distinctive indicators of nuclearity in Chinese texts .",312,5 |
|
313,"Results show that these [[ Chinese punctuation marks ]] , though fewer in number than cue phrases , are easy to identify , have strong correlation with certain relations , and can be used as distinctive << indicators of nuclearity >> in Chinese texts .",313,3 |
|
314,"Results show that these Chinese punctuation marks , though fewer in number than cue phrases , are easy to identify , have strong correlation with certain relations , and can be used as distinctive << indicators of nuclearity >> in [[ Chinese texts ]] .",314,1 |
|
315,The << features >> based on [[ Markov random field -LRB- MRF -RRB- models ]] are usually sensitive to the rotation of image textures .,315,3 |
|
316,This paper develops an [[ anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model ]] for << modelling rotated image textures >> and retrieving rotation-invariant texture features .,316,3 |
|
317,This paper develops an [[ anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model ]] for modelling rotated image textures and << retrieving rotation-invariant texture features >> .,317,3 |
|
318,This paper develops an anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model for [[ modelling rotated image textures ]] and << retrieving rotation-invariant texture features >> .,318,0 |
|
319,"To overcome the [[ singularity problem ]] of the << least squares estimate -LRB- LSE -RRB- method >> , an approximate least squares estimate -LRB- ALSE -RRB- method is proposed to estimate the parameters of the ACGMRF model .",319,1 |
|
320,"To overcome the singularity problem of the least squares estimate -LRB- LSE -RRB- method , an [[ approximate least squares estimate -LRB- ALSE -RRB- method ]] is proposed to estimate the << parameters of the ACGMRF model >> .",320,3 |
|
321,The << rotation-invariant features >> can be obtained from the [[ parameters of the ACGMRF model ]] by the one-dimensional -LRB- 1-D -RRB- discrete Fourier transform -LRB- DFT -RRB- .,321,3 |
|
322,The << rotation-invariant features >> can be obtained from the parameters of the ACGMRF model by the [[ one-dimensional -LRB- 1-D -RRB- discrete Fourier transform -LRB- DFT -RRB- ]] .,322,3 |
|
323,Significantly improved accuracy can be achieved by applying the [[ rotation-invariant features ]] to classify << SAR -LRB- synthetic aperture radar >> -RRB- sea ice and Brodatz imagery .,323,3 |
|
324,"Despite much recent progress on accurate << semantic role labeling >> , previous work has largely used [[ independent classifiers ]] , possibly combined with separate label sequence models via Viterbi decoding .",324,3 |
|
325,"Despite much recent progress on accurate semantic role labeling , previous work has largely used [[ independent classifiers ]] , possibly combined with separate << label sequence models >> via Viterbi decoding .",325,0 |
|
326,"Despite much recent progress on accurate semantic role labeling , previous work has largely used independent classifiers , possibly combined with separate << label sequence models >> via [[ Viterbi decoding ]] .",326,3 |
|
327,"We show how to build a joint model of argument frames , incorporating novel [[ features ]] that model these interactions into << discriminative log-linear models >> .",327,4 |
|
328,This << system >> achieves an [[ error reduction ]] of 22 % on all arguments and 32 % on core arguments over a state-of-the-art independent classifier for gold-standard parse trees on PropBank .,328,6 |

329,This system achieves an [[ error reduction ]] of 22 % on all arguments and 32 % on core arguments over a state-of-the-art << independent classifier >> for gold-standard parse trees on PropBank .,329,6 |

330,This << system >> achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the-art [[ independent classifier ]] for gold-standard parse trees on PropBank .,330,5 |

331,This << system >> achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the-art independent classifier for [[ gold-standard parse trees ]] on PropBank .,331,6 |

332,This system achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the-art << independent classifier >> for [[ gold-standard parse trees ]] on PropBank .,332,6 |

333,This system achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the-art independent classifier for [[ gold-standard parse trees ]] on << PropBank >> .,333,4 |
|
334,"In order to deal with << ambiguity >> , the [[ MORphological PArser MORPA ]] is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .",334,3 |
|
335,"In order to deal with ambiguity , the << MORphological PArser MORPA >> is provided with a [[ probabilistic context-free grammar -LRB- PCFG -RRB- ]] , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .",335,3 |
|
336,"In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. << it >> combines a [[ `` conventional '' context-free morphological grammar ]] to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .",336,3 |
|
337,"In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a [[ `` conventional '' context-free morphological grammar ]] to filter out << ungrammatical segmentations >> with a probability-based scoring function which determines the likelihood of each successful parse .",337,3 |
|
338,"In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. << it >> combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful parse .",338,3 |
|
339,"In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a << `` conventional '' context-free morphological grammar >> to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful parse .",339,0 |
|
340,"In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful << parse >> .",340,3 |
|
341,Test performance data will show that a [[ PCFG ]] yields good results in << morphological parsing >> .,341,3 |
|
342,[[ MORPA ]] is a fully implemented << parser >> developed for use in a text-to-speech conversion system .,342,2 |
|
343,[[ MORPA ]] is a fully implemented parser developed for use in a << text-to-speech conversion system >> .,343,3 |
|
344,MORPA is a fully implemented [[ parser ]] developed for use in a << text-to-speech conversion system >> .,344,3 |
|
345,This paper describes the framework of a << Korean phonological knowledge base system >> using the [[ unification-based grammar formalism ]] : Korean Phonology Structure Grammar -LRB- KPSG -RRB- .,345,3 |
|
346,This paper describes the framework of a Korean phonological knowledge base system using the << unification-based grammar formalism >> : [[ Korean Phonology Structure Grammar -LRB- KPSG -RRB- ]] .,346,2 |
|
347,The [[ approach ]] of << KPSG >> provides an explicit development model for constructing a computational phonological system : speech recognition and synthesis system .,347,3 |
|
348,The approach of [[ KPSG ]] provides an explicit development model for constructing a computational << phonological system >> : speech recognition and synthesis system .,348,3 |
|
349,We show that the proposed [[ approach ]] is more describable than other << approaches >> such as those employing a traditional generative phonological approach .,349,5 |
|
350,We show that the proposed approach is more describable than other approaches such as << those >> employing a traditional [[ generative phonological approach ]] .,350,3 |
|
351,"In this paper , we study the [[ design of core-selecting payment rules ]] for such << domains >> .",351,3 |
|
352,We design two [[ core-selecting rules ]] that always satisfy << IR >> in expectation .,352,3 |
|
353,To study the performance of our << rules >> we perform a [[ computational Bayes-Nash equilibrium analysis ]] .,353,3 |
|
354,"We show that , in equilibrium , our new [[ rules ]] have better incentives , higher efficiency , and a lower rate of ex-post IR violations than standard << core-selecting rules >> .",354,5 |
|
355,"We show that , in equilibrium , our new << rules >> have better incentives , higher efficiency , and a lower [[ rate of ex-post IR violations ]] than standard core-selecting rules .",355,6 |
|
356,"We show that , in equilibrium , our new rules have better incentives , higher efficiency , and a lower [[ rate of ex-post IR violations ]] than standard << core-selecting rules >> .",356,6 |
|
357,"In this paper , we will describe a [[ search tool ]] for a huge set of << ngrams >> .",357,3 |
|
358,This system can be a very useful [[ tool ]] for << linguistic knowledge discovery >> and other NLP tasks .,358,3 |
|
359,This system can be a very useful [[ tool ]] for linguistic knowledge discovery and other << NLP tasks >> .,359,3 |
|
360,This system can be a very useful tool for [[ linguistic knowledge discovery ]] and other << NLP tasks >> .,360,0 |
|
361,This paper explores the role of [[ user modeling ]] in such << systems >> .,361,4 |
|
362,"Since acquiring the knowledge for a [[ user model ]] is a fundamental problem in << user modeling >> , a section is devoted to this topic .",362,3 |
|
363,"Next , the benefits and costs of implementing a [[ user modeling component ]] for a << system >> are weighed in light of several aspects of the interaction requirements that may be imposed by the system .",363,4 |
|
364,"[[ Information extraction techniques ]] automatically create << structured databases >> from unstructured data sources , such as the Web or newswire documents .",364,3 |
|
365,"<< Information extraction techniques >> automatically create structured databases from [[ unstructured data sources ]] , such as the Web or newswire documents .",365,3 |
|
366,"Information extraction techniques automatically create structured databases from << unstructured data sources >> , such as the [[ Web ]] or newswire documents .",366,2 |
|
367,"Information extraction techniques automatically create structured databases from unstructured data sources , such as the [[ Web ]] or << newswire documents >> .",367,0 |
|
368,"Information extraction techniques automatically create structured databases from << unstructured data sources >> , such as the Web or [[ newswire documents ]] .",368,2 |
|
369,"Despite the successes of these << systems >> , [[ accuracy ]] will always be imperfect .",369,6 |
|
370,"The << information extraction system >> we evaluate is based on a [[ linear-chain conditional random field -LRB- CRF -RRB- ]] , a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary , overlapping features of the input in a Markov model .",370,3 |
|
371,"The information extraction system we evaluate is based on a [[ linear-chain conditional random field -LRB- CRF -RRB- ]] , a << probabilistic model >> which has performed well on information extraction tasks because of its ability to capture arbitrary , overlapping features of the input in a Markov model .",371,2 |
|
372,"The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a [[ probabilistic model ]] which has performed well on << information extraction tasks >> because of its ability to capture arbitrary , overlapping features of the input in a Markov model .",372,3 |
|
373,"The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a [[ probabilistic model ]] which has performed well on information extraction tasks because of its ability to capture << arbitrary , overlapping features >> of the input in a Markov model .",373,3 |
|
374,"The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a probabilistic model which has performed well on information extraction tasks because of its ability to capture [[ arbitrary , overlapping features ]] of the << input >> in a Markov model .",374,1 |
|
375,"The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a probabilistic model which has performed well on information extraction tasks because of its ability to capture [[ arbitrary , overlapping features ]] of the input in a << Markov model >> .",375,4 |
|
376,"We implement several techniques to estimate the confidence of both [[ extracted fields ]] and entire << multi-field records >> , obtaining an average precision of 98 % for retrieving correct fields and 87 % for multi-field records .",376,0 |
|
377,"We implement several << techniques >> to estimate the confidence of both extracted fields and entire multi-field records , obtaining an [[ average precision ]] of 98 % for retrieving correct fields and 87 % for multi-field records .",377,6 |
|
378,"In this paper , we use the [[ information redundancy in multilingual input ]] to correct errors in << machine translation >> and thus improve the quality of multilingual summaries .",378,3 |
|
379,"In this paper , we use the [[ information redundancy in multilingual input ]] to correct errors in machine translation and thus improve the quality of << multilingual summaries >> .",379,3 |
|
380,"We demonstrate how errors in the << machine translations >> of the input [[ Arabic documents ]] can be corrected by identifying and generating from such redundancy , focusing on noun phrases .",380,3 |
|
381,"In this paper , we propose a new [[ approach ]] to generate << oriented object proposals -LRB- OOPs -RRB- >> to reduce the detection error caused by various orientations of the object .",381,3 |
|
382,"In this paper , we propose a new approach to generate << oriented object proposals -LRB- OOPs -RRB- >> to reduce the [[ detection error ]] caused by various orientations of the object .",382,6 |
|
383,"To this end , we propose to efficiently locate << object regions >> according to [[ pixelwise object probability ]] , rather than measuring the objectness from a set of sampled windows .",383,3 |
|
384,"To this end , we propose to efficiently locate object regions according to [[ pixelwise object probability ]] , rather than measuring the << objectness >> from a set of sampled windows .",384,5 |
|
385,"We formulate the << proposal generation problem >> as a [[ generative probabilistic model ]] such that object proposals of different shapes -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the local maximum likelihoods .",385,3 |

386,"We formulate the proposal generation problem as a generative probabilistic model such that << object proposals >> of different [[ shapes ]] -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the local maximum likelihoods .",386,1 |

387,"We formulate the proposal generation problem as a generative probabilistic model such that object proposals of different << shapes >> -LRB- i.e. , [[ sizes ]] and orientations -RRB- can be produced by locating the local maximum likelihoods .",387,2 |

388,"We formulate the proposal generation problem as a generative probabilistic model such that object proposals of different shapes -LRB- i.e. , [[ sizes ]] and << orientations >> -RRB- can be produced by locating the local maximum likelihoods .",388,0 |

389,"We formulate the proposal generation problem as a generative probabilistic model such that object proposals of different << shapes >> -LRB- i.e. , sizes and [[ orientations ]] -RRB- can be produced by locating the local maximum likelihoods .",389,2 |

390,"We formulate the proposal generation problem as a generative probabilistic model such that << object proposals >> of different shapes -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the [[ local maximum likelihoods ]] .",390,3 |
|
391,"First , it helps the [[ object detector ]] handle objects of different << orientations >> .",391,3 |
|
392,"Third , [[ it ]] avoids massive window sampling , and thereby reducing the << number of proposals >> while maintaining a high recall .",392,3 |
|
393,"Third , << it >> avoids massive window sampling , and thereby reducing the number of proposals while maintaining a high [[ recall ]] .",393,6 |
|
394,Experiments on the [[ PASCAL VOC 2007 dataset ]] show that the proposed << OOP >> outperforms the state-of-the-art fast methods .,394,6 |
|
395,Experiments on the PASCAL VOC 2007 dataset show that the proposed [[ OOP ]] outperforms the << state-of-the-art fast methods >> .,395,5 |
|
396,Further experiments show that the [[ rotation invariant property ]] helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either object rotation scenarios or general scenarios .,396,3 |
|
397,Further experiments show that the rotation invariant property helps a [[ class-specific object detector ]] achieve better performance than the state-of-the-art << proposal generation methods >> in either object rotation scenarios or general scenarios .,397,5 |
|
398,Further experiments show that the rotation invariant property helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either [[ object rotation scenarios ]] or general scenarios .,398,6 |
|
399,Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art << proposal generation methods >> in either [[ object rotation scenarios ]] or general scenarios .,399,6 |
|
400,Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art proposal generation methods in either [[ object rotation scenarios ]] or << general scenarios >> .,400,0 |
|
401,Further experiments show that the rotation invariant property helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either object rotation scenarios or [[ general scenarios ]] .,401,6 |
|
402,Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art << proposal generation methods >> in either object rotation scenarios or [[ general scenarios ]] .,402,6 |
|
403,"This paper describes three relatively [[ domain-independent capabilities ]] recently added to the << Paramax spoken language understanding system >> : non-monotonic reasoning , implicit reference resolution , and database query paraphrase .",403,4 |
|
404,"This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : [[ non-monotonic reasoning ]] , implicit reference resolution , and database query paraphrase .",404,2 |
|
405,"This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : non-monotonic reasoning , [[ implicit reference resolution ]] , and database query paraphrase .",405,2 |
|
406,"This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : non-monotonic reasoning , implicit reference resolution , and [[ database query paraphrase ]] .",406,2 |
|
407,"Finally , we briefly describe an experiment which we have done in extending the << n-best speech/language integration architecture >> to improving [[ OCR accuracy ]] .",407,6 |
|
408,"We investigate the problem of fine-grained sketch-based image retrieval -LRB- SBIR -RRB- , where [[ free-hand human sketches ]] are used as queries to perform << instance-level retrieval of images >> .",408,3 |
|
409,"This is an extremely challenging task because -LRB- i -RRB- visual comparisons not only need to be fine-grained but also executed cross-domain , -LRB- ii -RRB- free-hand -LRB- finger -RRB- sketches are highly abstract , making fine-grained matching harder , and most importantly -LRB- iii -RRB- [[ annotated cross-domain sketch-photo datasets ]] required for training are scarce , challenging many state-of-the-art << machine learning techniques >> .",409,3 |
|
410,We then develop a [[ deep triplet-ranking model ]] for << instance-level SBIR >> with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data .,410,3 |
|
411,We then develop a [[ deep triplet-ranking model ]] for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of << insufficient fine-grained training data >> .,411,3 |
|
412,We then develop a << deep triplet-ranking model >> for instance-level SBIR with a novel [[ data augmentation ]] and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data .,412,3 |
|
413,We then develop a deep triplet-ranking model for instance-level SBIR with a novel [[ data augmentation ]] and << staged pre-training strategy >> to alleviate the issue of insufficient fine-grained training data .,413,0 |
|
414,We then develop a << deep triplet-ranking model >> for instance-level SBIR with a novel data augmentation and [[ staged pre-training strategy ]] to alleviate the issue of insufficient fine-grained training data .,414,3 |
|
415,Extensive experiments are carried out to contribute a variety of insights into the challenges of [[ data sufficiency ]] and << over-fitting avoidance >> when training deep networks for fine-grained cross-domain ranking tasks .,415,0 |
|
416,Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training [[ deep networks ]] for << fine-grained cross-domain ranking tasks >> .,416,3 |
|
417,In this paper we target at generating << generic action proposals >> in [[ unconstrained videos ]] .,417,3 |
|
418,"Each action proposal corresponds to a << temporal series of spatial bounding boxes >> , i.e. , a [[ spatio-temporal video tube ]] , which has a good potential to locate one human action .",418,2 |
|
419,"Each action proposal corresponds to a temporal series of spatial bounding boxes , i.e. , a [[ spatio-temporal video tube ]] , which has a good potential to locate one << human action >> .",419,3 |
|
420,"Assuming each action is performed by a human with meaningful motion , both [[ appearance and motion cues ]] are utilized to measure the << actionness >> of the video tubes .",420,3 |

421,"Assuming each action is performed by a human with meaningful motion , both appearance and motion cues are utilized to measure the [[ actionness ]] of the << video tubes >> .",421,6 |

422,"After picking those spatiotemporal paths of high actionness scores , our << action proposal generation >> is formulated as a [[ maximum set coverage problem ]] , where greedy search is performed to select a set of action proposals that can maximize the overall actionness score .",422,3 |

423,"After picking those spatiotemporal paths of high actionness scores , our action proposal generation is formulated as a maximum set coverage problem , where [[ greedy search ]] is performed to select a set of << action proposals >> that can maximize the overall actionness score .",423,3 |

424,"After picking those spatiotemporal paths of high actionness scores , our action proposal generation is formulated as a maximum set coverage problem , where greedy search is performed to select a set of << action proposals >> that can maximize the overall [[ actionness score ]] .",424,6 |
|
425,"Compared with existing [[ action proposal approaches ]] , our << action proposals >> do not rely on video segmentation and can be generated in nearly real-time .",425,5 |
|
426,"Experimental results on two challenging [[ datasets ]] , MSRII and UCF 101 , validate the superior performance of our << action proposals >> as well as competitive results on action detection and search .",426,6 |
|
427,"Experimental results on two challenging << datasets >> , [[ MSRII ]] and UCF 101 , validate the superior performance of our action proposals as well as competitive results on action detection and search .",427,2 |
|
428,"Experimental results on two challenging datasets , [[ MSRII ]] and << UCF 101 >> , validate the superior performance of our action proposals as well as competitive results on action detection and search .",428,0 |
|
429,"Experimental results on two challenging << datasets >> , MSRII and [[ UCF 101 ]] , validate the superior performance of our action proposals as well as competitive results on action detection and search .",429,2 |
|
430,"Experimental results on two challenging datasets , MSRII and UCF 101 , validate the superior performance of our << action proposals >> as well as competitive results on [[ action detection and search ]] .",430,6 |
|
431,This paper reports recent research into [[ methods ]] for << creating natural language text >> .,431,3 |
|
432,"<< KDS -LRB- Knowledge Delivery System -RRB- >> , which embodies this [[ paradigm ]] , has distinct parts devoted to creation of the propositional units , to organization of the text , to prevention of excess redundancy , to creation of combinations of units , to evaluation of these combinations as potential sentences , to selection of the best among competing combinations , and to creation of the final text .",432,4 |
|
433,The Fragment-and-Compose paradigm and the [[ computational methods ]] of << KDS >> are described .,433,3 |
|
434,This paper explores the issue of using different [[ co-occurrence similarities ]] between terms for separating << query terms >> that are useful for retrieval from those that are harmful .,434,3 |
|
435,This paper explores the issue of using different co-occurrence similarities between terms for separating [[ query terms ]] that are useful for << retrieval >> from those that are harmful .,435,3 |
|
436,This paper explores the issue of using different co-occurrence similarities between terms for separating << query terms >> that are useful for retrieval from [[ those ]] that are harmful .,436,5 |
|
437,The hypothesis under examination is that [[ useful terms ]] tend to be more similar to each other than to other << query terms >> .,437,5 |
|
438,Preliminary experiments with << similarities >> computed using [[ first-order and second-order co-occurrence ]] seem to confirm the hypothesis .,438,3 |
|
439,"We propose a new [[ phrase-based translation model ]] and << decoding algorithm >> that enables us to evaluate and compare several , previously proposed phrase-based translation models .",439,0 |
|
440,"Within our framework , we carry out a large number of experiments to understand better and explain why [[ phrase-based models ]] outperform << word-based models >> .",440,5 |
|
441,"Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple << means >> : [[ heuristic learning of phrase translations ]] from word-based alignments and lexical weighting of phrase translations .",441,2 |
|
442,"Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple means : << heuristic learning of phrase translations >> from [[ word-based alignments ]] and lexical weighting of phrase translations .",442,3 |
|
443,"Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple << means >> : heuristic learning of phrase translations from word-based alignments and [[ lexical weighting of phrase translations ]] .",443,2 |
|
444,"Traditional [[ methods ]] for << color constancy >> can improve surface reflectance estimates from such uncalibrated images , but their output depends significantly on the background scene .",444,3 |

445,"Traditional [[ methods ]] for color constancy can improve << surface reflectance estimates >> from such uncalibrated images , but their output depends significantly on the background scene .",445,3 |

446,"Traditional methods for color constancy can improve << surface reflectance estimates >> from such [[ uncalibrated images ]] , but their output depends significantly on the background scene .",446,3 |

447,"We introduce the multi-view color constancy problem , and present a [[ method ]] to recover << estimates of underlying surface reflectance >> based on joint estimation of these surface properties and the illuminants present in multiple images .",447,3 |
|
448,"The [[ method ]] can exploit << image correspondences >> obtained by various alignment techniques , and we show examples based on matching local region features .",448,3 |
|
449,"The method can exploit << image correspondences >> obtained by various [[ alignment techniques ]] , and we show examples based on matching local region features .",449,3 |
|
450,Our results show that [[ multi-view constraints ]] can significantly improve << estimates of both scene illuminants and object color -LRB- surface reflectance -RRB- >> when compared to a baseline single-view method .,450,3 |
|
451,Our results show that << multi-view constraints >> can significantly improve estimates of both scene illuminants and object color -LRB- surface reflectance -RRB- when compared to a [[ baseline single-view method ]] .,451,5 |
|
452,"Our contributions include a [[ concise , modular architecture ]] with reversible processes of << understanding >> and generation , an information-state model of reference , and flexible links between semantics and collaborative problem solving .",452,3 |
|
453,"Our contributions include a [[ concise , modular architecture ]] with reversible processes of understanding and << generation >> , an information-state model of reference , and flexible links between semantics and collaborative problem solving .",453,3 |
|
454,"Our contributions include a concise , modular architecture with reversible processes of [[ understanding ]] and << generation >> , an information-state model of reference , and flexible links between semantics and collaborative problem solving .",454,0 |
|