Unnamed: 0 (int64, 0–3.22k) | text (string, length 49–577) | id (int64, 0–3.22k) | label (int64, 0–6) |
---|---|---|---|
3,100 | The extension is shown to be sufficient to provide a strongly adequate [[ grammar ]] for << crossed serial dependencies >> , as found in e.g. Dutch subordinate clauses . | 3,100 | 3 |
3,101 | The << extension >> is shown to be parseable by a simple [[ extension ]] to an existing parsing method for GPSG . | 3,101 | 3 |
3,102 | The extension is shown to be parseable by a simple << extension >> to an existing [[ parsing method ]] for GPSG . | 3,102 | 3 |
3,103 | The extension is shown to be parseable by a simple extension to an existing [[ parsing method ]] for << GPSG >> . | 3,103 | 3 |
3,104 | This paper presents an [[ approach ]] to << localizing functional objects >> in surveillance videos without domain knowledge about semantic object classes that may appear in the scene . | 3,104 | 3 |
3,105 | This paper presents an approach to << localizing functional objects >> in [[ surveillance videos ]] without domain knowledge about semantic object classes that may appear in the scene . | 3,105 | 3 |
3,106 | This paper presents an approach to localizing functional objects in surveillance videos without << domain knowledge >> about [[ semantic object classes ]] that may appear in the scene . | 3,106 | 1 |
3,107 | A [[ Bayesian framework ]] is used to probabilistically model : << people 's trajectories and intents >> , constraint map of the scene , and locations of functional objects . | 3,107 | 3 |
3,108 | A [[ Bayesian framework ]] is used to probabilistically model : people 's trajectories and intents , << constraint map of the scene >> , and locations of functional objects . | 3,108 | 3 |
3,109 | A [[ Bayesian framework ]] is used to probabilistically model : people 's trajectories and intents , constraint map of the scene , and << locations of functional objects >> . | 3,109 | 3 |
3,110 | A Bayesian framework is used to probabilistically model : [[ people 's trajectories and intents ]] , << constraint map of the scene >> , and locations of functional objects . | 3,110 | 0 |
3,111 | A Bayesian framework is used to probabilistically model : people 's trajectories and intents , [[ constraint map of the scene ]] , and << locations of functional objects >> . | 3,111 | 0 |
3,112 | A [[ data-driven Markov Chain Monte Carlo -LRB- MCMC -RRB- process ]] is used for << inference >> . | 3,112 | 3 |
3,113 | Our evaluation on [[ videos of public squares and courtyards ]] demonstrates our effectiveness in << localizing functional objects >> and predicting people 's trajectories in unobserved parts of the video footage . | 3,113 | 6 |
3,114 | Our evaluation on [[ videos of public squares and courtyards ]] demonstrates our effectiveness in localizing functional objects and << predicting people 's trajectories >> in unobserved parts of the video footage . | 3,114 | 6 |
3,115 | Our evaluation on videos of public squares and courtyards demonstrates our effectiveness in [[ localizing functional objects ]] and << predicting people 's trajectories >> in unobserved parts of the video footage . | 3,115 | 0 |
3,116 | We propose a [[ process model ]] for << hierarchical perceptual sound organization >> , which recognizes perceptual sounds included in incoming sound signals . | 3,116 | 3 |
3,117 | We propose a process model for hierarchical perceptual sound organization , which recognizes [[ perceptual sounds ]] included in << incoming sound signals >> . | 3,117 | 4 |
3,118 | We consider << perceptual sound organization >> as a [[ scene analysis problem ]] in the auditory domain . | 3,118 | 3 |
3,119 | We consider perceptual sound organization as a << scene analysis problem >> in the [[ auditory domain ]] . | 3,119 | 1 |
3,120 | Our << model >> consists of multiple [[ processing modules ]] and a hypothesis network for quantitative integration of multiple sources of information . | 3,120 | 4 |
3,121 | Our model consists of multiple [[ processing modules ]] and a << hypothesis network >> for quantitative integration of multiple sources of information . | 3,121 | 0 |
3,122 | Our << model >> consists of multiple processing modules and a [[ hypothesis network ]] for quantitative integration of multiple sources of information . | 3,122 | 4 |
3,123 | On the << hypothesis network >> , individual information is integrated and an optimal [[ internal model ]] of perceptual sounds is automatically constructed . | 3,123 | 4 |
3,124 | On the hypothesis network , individual information is integrated and an optimal [[ internal model ]] of << perceptual sounds >> is automatically constructed . | 3,124 | 3 |
3,125 | Based on the model , a [[ music scene analysis system ]] has been developed for << acoustic signals of ensemble music >> , which recognizes rhythm , chords , and source-separated musical notes . | 3,125 | 3 |
3,126 | Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes << rhythm >> , chords , and source-separated musical notes . | 3,126 | 3 |
3,127 | Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes rhythm , << chords >> , and source-separated musical notes . | 3,127 | 3 |
3,128 | Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes rhythm , chords , and << source-separated musical notes >> . | 3,128 | 3 |
3,129 | Based on the model , a music scene analysis system has been developed for acoustic signals of ensemble music , which recognizes [[ rhythm ]] , << chords >> , and source-separated musical notes . | 3,129 | 0 |
3,130 | Based on the model , a music scene analysis system has been developed for acoustic signals of ensemble music , which recognizes rhythm , [[ chords ]] , and << source-separated musical notes >> . | 3,130 | 0 |
3,131 | Experimental results show that our << method >> has permitted autonomous , stable and effective [[ information integration ]] to construct the internal model of hierarchical perceptual sounds . | 3,131 | 1 |
3,132 | Experimental results show that our method has permitted autonomous , stable and effective [[ information integration ]] to construct the << internal model >> of hierarchical perceptual sounds . | 3,132 | 3 |
3,133 | Experimental results show that our method has permitted autonomous , stable and effective information integration to construct the [[ internal model ]] of << hierarchical perceptual sounds >> . | 3,133 | 3 |
3,134 | We directly investigate a subject of much recent debate : do [[ word sense disambigation models ]] help << statistical machine translation quality >> ? | 3,134 | 3 |
3,135 | Using a state-of-the-art [[ Chinese word sense disambiguation model ]] to choose << translation candidates >> for a typical IBM statistical MT system , we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone . | 3,135 | 3 |
3,136 | Using a state-of-the-art Chinese word sense disambiguation model to choose [[ translation candidates ]] for a typical << IBM statistical MT system >> , we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone . | 3,136 | 3 |
3,137 | Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that [[ word sense disambiguation ]] does not yield significantly better translation quality than the << statistical machine translation system >> alone . | 3,137 | 5 |
3,138 | Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that << word sense disambiguation >> does not yield significantly better [[ translation quality ]] than the statistical machine translation system alone . | 3,138 | 6 |
3,139 | Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that word sense disambiguation does not yield significantly better [[ translation quality ]] than the << statistical machine translation system >> alone . | 3,139 | 6 |
3,140 | [[ Image sequence processing techniques ]] are used to study << exchange , growth , and transport processes >> and to tackle key questions in environmental physics and biology . | 3,140 | 3 |
3,141 | Image sequence processing techniques are used to study exchange , growth , and transport processes and to tackle key questions in [[ environmental physics ]] and << biology >> . | 3,141 | 0 |
3,142 | These applications require high [[ accuracy ]] for the << estimation of the motion field >> since the most interesting parameters of the dynamical processes studied are contained in first-order derivatives of the motion field or in dynamical changes of the moving objects . | 3,142 | 6 |
3,143 | These << applications >> require high accuracy for the [[ estimation of the motion field ]] since the most interesting parameters of the dynamical processes studied are contained in first-order derivatives of the motion field or in dynamical changes of the moving objects . | 3,143 | 3 |
3,144 | These applications require high accuracy for the estimation of the motion field since the most interesting parameters of the dynamical processes studied are contained in [[ first-order derivatives of the motion field ]] or in << dynamical changes of the moving objects >> . | 3,144 | 0 |
3,145 | A << tensor method >> tuned with carefully optimized [[ derivative filters ]] yields reliable and dense displacement vector fields -LRB- DVF -RRB- with an accuracy of up to a few hundredth pixels/frame for real-world images . | 3,145 | 3 |
3,146 | A tensor method tuned with carefully optimized derivative filters yields reliable and dense << displacement vector fields -LRB- DVF -RRB- >> with an accuracy of up to a few hundredth [[ pixels/frame ]] for real-world images . | 3,146 | 6 |
3,147 | A tensor method tuned with carefully optimized derivative filters yields reliable and dense displacement vector fields -LRB- DVF -RRB- with an accuracy of up to a few hundredth << pixels/frame >> for [[ real-world images ]] . | 3,147 | 3 |
3,148 | The [[ accuracy ]] of the << tensor method >> is verified with computer-generated sequences and a calibrated image sequence . | 3,148 | 6 |
3,149 | The accuracy of the << tensor method >> is verified with [[ computer-generated sequences ]] and a calibrated image sequence . | 3,149 | 6 |
3,150 | The accuracy of the tensor method is verified with [[ computer-generated sequences ]] and a << calibrated image sequence >> . | 3,150 | 0 |
3,151 | The accuracy of the << tensor method >> is verified with computer-generated sequences and a [[ calibrated image sequence ]] . | 3,151 | 6 |
3,152 | With the improvements in [[ accuracy ]] the << motion estimation >> is now rather limited by imperfections in the CCD sensors , especially the spatial nonuni-formity in the responsivity . | 3,152 | 6 |
3,153 | With the improvements in accuracy the << motion estimation >> is now rather limited by imperfections in the [[ CCD sensors ]] , especially the spatial nonuni-formity in the responsivity . | 3,153 | 3 |
3,154 | With the improvements in accuracy the motion estimation is now rather limited by imperfections in the CCD sensors , especially the [[ spatial nonuni-formity ]] in the << responsivity >> . | 3,154 | 1 |
3,155 | With the improvements in accuracy the motion estimation is now rather limited by imperfections in the << CCD sensors >> , especially the spatial nonuni-formity in the [[ responsivity ]] . | 3,155 | 1 |
3,156 | The application of the [[ techniques ]] to the << analysis of plant growth >> , to ocean surface microturbulence in IR image sequences , and to sediment transport is demonstrated . | 3,156 | 3 |
3,157 | The application of the [[ techniques ]] to the analysis of plant growth , to << ocean surface microturbulence in IR image sequences >> , and to sediment transport is demonstrated . | 3,157 | 3 |
3,158 | The application of the [[ techniques ]] to the analysis of plant growth , to ocean surface microturbulence in IR image sequences , and to << sediment transport >> is demonstrated . | 3,158 | 3 |
3,159 | The application of the techniques to the [[ analysis of plant growth ]] , to << ocean surface microturbulence in IR image sequences >> , and to sediment transport is demonstrated . | 3,159 | 0 |
3,160 | The application of the techniques to the analysis of plant growth , to [[ ocean surface microturbulence in IR image sequences ]] , and to << sediment transport >> is demonstrated . | 3,160 | 0 |
3,161 | We present a [[ Czech-English statistical machine translation system ]] which performs << tree-to-tree translation of dependency structures >> . | 3,161 | 3 |
3,162 | The only << bilingual resource >> required is a [[ sentence-aligned parallel corpus ]] . | 3,162 | 3 |
3,163 | We also refer to an evaluation method and plan to compare our [[ system ]] 's output with a << benchmark system >> . | 3,163 | 5 |
3,164 | This paper describes the understanding process of the << spatial descriptions >> in [[ Japanese ]] . | 3,164 | 1 |
3,165 | To reconstruct the model , the authors extract the qualitative spatial constraints from the text , and represent them as the << numerical constraints >> on the [[ spatial attributes of the entities ]] . | 3,165 | 3 |
3,166 | Such [[ context information ]] is therefore important to characterize the << intrinsic representation of a video frame >> . | 3,166 | 3 |
3,167 | In this paper , we present a novel [[ approach ]] to learn the << deep video representation >> by exploring both local and holistic contexts . | 3,167 | 3 |
3,168 | In this paper , we present a novel << approach >> to learn the deep video representation by exploring both [[ local and holistic contexts ]] . | 3,168 | 3 |
3,169 | Specifically , we propose a [[ triplet sampling mechanism ]] to encode the << local temporal relationship of adjacent frames >> based on their deep representations . | 3,169 | 3 |
3,170 | Specifically , we propose a << triplet sampling mechanism >> to encode the local temporal relationship of adjacent frames based on their [[ deep representations ]] . | 3,170 | 3 |
3,171 | In addition , we incorporate the [[ graph structure of the video ]] , as a << priori >> , to holistically preserve the inherent correlations among video frames . | 3,171 | 3 |
3,172 | Our << approach >> is fully unsupervised and trained in an [[ end-to-end deep convolutional neu-ral network architecture ]] . | 3,172 | 3 |
3,173 | By extensive experiments , we show that our [[ learned representation ]] can significantly boost several video recognition tasks -LRB- retrieval , classification , and highlight detection -RRB- over traditional << video representations >> . | 3,173 | 5 |
3,174 | By extensive experiments , we show that our << learned representation >> can significantly boost several [[ video recognition tasks ]] -LRB- retrieval , classification , and highlight detection -RRB- over traditional video representations . | 3,174 | 6 |
3,175 | By extensive experiments , we show that our learned representation can significantly boost several [[ video recognition tasks ]] -LRB- retrieval , classification , and highlight detection -RRB- over traditional << video representations >> . | 3,175 | 6 |
3,176 | By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- [[ retrieval ]] , classification , and highlight detection -RRB- over traditional video representations . | 3,176 | 2 |
3,177 | By extensive experiments , we show that our learned representation can significantly boost several video recognition tasks -LRB- [[ retrieval ]] , << classification >> , and highlight detection -RRB- over traditional video representations . | 3,177 | 0 |
3,178 | By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- retrieval , [[ classification ]] , and highlight detection -RRB- over traditional video representations . | 3,178 | 2 |
3,179 | By extensive experiments , we show that our learned representation can significantly boost several video recognition tasks -LRB- retrieval , [[ classification ]] , and << highlight detection >> -RRB- over traditional video representations . | 3,179 | 0 |
3,180 | By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- retrieval , classification , and [[ highlight detection ]] -RRB- over traditional video representations . | 3,180 | 2 |
3,181 | For << mobile speech application >> , [[ speaker DOA estimation accuracy ]] , interference robustness and compact physical size are three key factors . | 3,181 | 1 |
3,182 | For mobile speech application , [[ speaker DOA estimation accuracy ]] , << interference robustness >> and compact physical size are three key factors . | 3,182 | 0 |
3,183 | For << mobile speech application >> , speaker DOA estimation accuracy , [[ interference robustness ]] and compact physical size are three key factors . | 3,183 | 1 |
3,184 | For mobile speech application , speaker DOA estimation accuracy , [[ interference robustness ]] and << compact physical size >> are three key factors . | 3,184 | 0 |
3,185 | For << mobile speech application >> , speaker DOA estimation accuracy , interference robustness and [[ compact physical size ]] are three key factors . | 3,185 | 1 |
3,186 | [[ It ]] is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI . | 3,186 | 3 |
3,187 | It is achieved by deriving the [[ inter-sensor data ratio model ]] of an << AVS >> in bispectrum domain -LRB- BISDR -RRB- and exploring the favorable properties of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI . | 3,187 | 3 |
3,188 | It is achieved by deriving the inter-sensor data ratio model of an << AVS >> in [[ bispectrum domain -LRB- BISDR -RRB- ]] and exploring the favorable properties of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI . | 3,188 | 3 |
3,189 | It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as [[ zero value of Gaussian process ]] and different distribution of speech and NSI . | 3,189 | 2 |
3,190 | It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the favorable properties of bispectrum , such as [[ zero value of Gaussian process ]] and different << distribution of speech and NSI >> . | 3,190 | 0 |
3,191 | It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as zero value of Gaussian process and different [[ distribution of speech and NSI ]] . | 3,191 | 2 |
3,192 | Specifically , a reliable [[ bispectrum mask ]] is generated to guarantee that the << speaker DOA cues >> , derived from BISDR , are robust to NSI in terms of speech sparsity and large bispectrum amplitude of the captured signals . | 3,192 | 3 |
3,193 | Specifically , a reliable bispectrum mask is generated to guarantee that the << speaker DOA cues >> , derived from [[ BISDR ]] , are robust to NSI in terms of speech sparsity and large bispectrum amplitude of the captured signals . | 3,193 | 3 |
3,194 | Intensive experiments demonstrate an improved performance of our proposed [[ algorithm ]] under various << NSI conditions >> even when SIR is smaller than 0dB . | 3,194 | 3 |
3,195 | In this paper , we want to show how the [[ morphological component ]] of an existing << NLP-system for Dutch -LRB- Dutch Medical Language Processor - DMLP -RRB- >> has been extended in order to produce output that is compatible with the language independent modules of the LSP-MLP system -LRB- Linguistic String Project - Medical Language Processor -RRB- of the New York University . | 3,195 | 4 |
3,196 | In this paper , we want to show how the morphological component of an existing NLP-system for Dutch -LRB- Dutch Medical Language Processor - DMLP -RRB- has been extended in order to produce output that is compatible with the [[ language independent modules ]] of the << LSP-MLP system -LRB- Linguistic String Project - Medical Language Processor -RRB- >> of the New York University . | 3,196 | 4 |
3,197 | The << former >> can take advantage of the language independent developments of the [[ latter ]] , while focusing on idiosyncrasies for Dutch . | 3,197 | 3 |
3,198 | The former can take advantage of the language independent developments of the latter , while focusing on << idiosyncrasies >> for [[ Dutch ]] . | 3,198 | 3 |
3,199 | This general strategy will be illustrated by a practical application , namely the highlighting of [[ relevant information ]] in a << patient discharge summary -LRB- PDS -RRB- >> by means of modern HyperText Mark-Up Language -LRB- HTML -RRB- technology . | 3,199 | 4 |
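The `text` column above marks each relation's two entity spans inline: `[[ ... ]]` encloses one entity and `<< ... >>` the other, with tokens space-separated. As a minimal sketch (a hypothetical helper, not part of the dataset's own tooling), the spans can be pulled out of a row like this:

```python
import re

def parse_row(text):
    """Extract the two marked entity spans from a row's text field.

    Assumes [[ ... ]] encloses one entity and << ... >> the other,
    as in the table rows above. Returns (bracket_span, angle_span),
    with None for a marker that is absent.
    """
    head = re.search(r"\[\[ (.+?) \]\]", text)
    tail = re.search(r"<< (.+?) >>", text)
    return (head.group(1) if head else None,
            tail.group(1) if tail else None)

# Row 3,162 from the table above
row = ("The only << bilingual resource >> required is a "
       "[[ sentence-aligned parallel corpus ]] .")
print(parse_row(row))  # → ('sentence-aligned parallel corpus', 'bilingual resource')
```

Note that both orderings occur in the data (the `[[ ]]` span may precede or follow the `<< >>` span within a sentence), so span order in the text does not by itself determine the relation's direction; the `label` column carries that information.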