Unnamed: 0 (int64, 0–3.22k) | text (string, lengths 49–577) | id (int64, 0–3.22k) | label (int64, 0–6) |
---|---|---|---|
200 | Nevertheless , this approach required [[ user initialization ]] of the << tracking process >> . | 200 | 3 |
201 | This paper solves the << automatic initial-ization problem >> by performing [[ boosted shape detection ]] as a generic measurement process and integrating it in our tracking framework . | 201 | 3 |
202 | This paper solves the automatic initial-ization problem by performing << boosted shape detection >> as a [[ generic measurement process ]] and integrating it in our tracking framework . | 202 | 3 |
203 | This paper solves the automatic initial-ization problem by performing boosted shape detection as a generic measurement process and integrating [[ it ]] in our << tracking framework >> . | 203 | 4 |
204 | As a result , we treat all sources of information in a unified way and derive the << posterior shape model >> as the shape with the [[ maximum likelihood ]] . | 204 | 3 |
205 | Our [[ framework ]] is applied for the << automatic tracking of endocardium >> in ultrasound sequences of the human heart . | 205 | 3 |
206 | Our framework is applied for the automatic tracking of [[ endocardium ]] in << ultrasound sequences of the human heart >> . | 206 | 4 |
207 | Reliable [[ detection ]] and robust << tracking >> results are achieved when compared to existing approaches and inter-expert variations . | 207 | 0 |
208 | Reliable detection and robust tracking results are achieved when compared to existing [[ approaches ]] and << inter-expert variations >> . | 208 | 0 |
209 | We present a [[ syntax-based constraint ]] for << word alignment >> , known as the cohesion constraint . | 209 | 3 |
210 | We present a << syntax-based constraint >> for word alignment , known as the [[ cohesion constraint ]] . | 210 | 2 |
211 | << It >> requires disjoint [[ English phrases ]] to be mapped to non-overlapping intervals in the French sentence . | 211 | 3 |
212 | We evaluate the utility of this << constraint >> in two different [[ algorithms ]] . | 212 | 6 |
213 | The results show that << it >> can provide a significant improvement in [[ alignment quality ]] . | 213 | 6 |
214 | We present a novel << entity-based representation of discourse >> which is inspired by [[ Centering Theory ]] and can be computed automatically from raw text . | 214 | 3 |
215 | We present a novel << entity-based representation of discourse >> which is inspired by Centering Theory and can be computed automatically from [[ raw text ]] . | 215 | 3 |
216 | We view << coherence assessment >> as a [[ ranking learning problem ]] and show that the proposed discourse representation supports the effective learning of a ranking function . | 216 | 3 |
217 | We view coherence assessment as a ranking learning problem and show that the proposed [[ discourse representation ]] supports the effective learning of a << ranking function >> . | 217 | 3 |
218 | Our experiments demonstrate that the [[ induced model ]] achieves significantly higher accuracy than a state-of-the-art << coherence model >> . | 218 | 5 |
219 | Our experiments demonstrate that the << induced model >> achieves significantly higher [[ accuracy ]] than a state-of-the-art coherence model . | 219 | 6 |
220 | Our experiments demonstrate that the induced model achieves significantly higher [[ accuracy ]] than a state-of-the-art << coherence model >> . | 220 | 6 |
221 | This paper introduces a [[ robust interactive method ]] for << speech understanding >> . | 221 | 3 |
222 | The << generalized LR parsing >> is enhanced in this [[ approach ]] . | 222 | 3 |
223 | When a very noisy portion is detected , the << parser >> skips that portion using a fake [[ non-terminal symbol ]] . | 223 | 3 |
224 | This [[ method ]] is also capable of handling << unknown words >> , which is important in practical systems . | 224 | 3 |
225 | This paper shows that it is very often possible to identify the source language of [[ medium-length speeches ]] in the << EUROPARL corpus >> on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % accuracy depending on classification method -RRB- . | 225 | 4 |
226 | This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % [[ accuracy ]] depending on << classification method >> -RRB- . | 226 | 6 |
227 | We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace << manually verified phonetic transcriptions >> -LRB- MPTs -RRB- in a large corpus-based study on pronunciation variation . | 227 | 5 |
228 | We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace manually verified phonetic transcriptions -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> . | 228 | 3 |
229 | We investigated whether automatic phonetic transcriptions -LRB- APTs -RRB- can replace [[ manually verified phonetic transcriptions ]] -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> . | 229 | 3 |
230 | We trained << classifiers >> on the [[ speech processes ]] extracted from the alignments of an APT and an MPT with a canonical transcription . | 230 | 3 |
231 | We trained classifiers on the << speech processes >> extracted from the [[ alignments ]] of an APT and an MPT with a canonical transcription . | 231 | 3 |
232 | We trained classifiers on the speech processes extracted from the [[ alignments ]] of an << APT >> and an MPT with a canonical transcription . | 232 | 3 |
233 | We trained classifiers on the speech processes extracted from the [[ alignments ]] of an APT and an << MPT >> with a canonical transcription . | 233 | 3 |
234 | We trained classifiers on the speech processes extracted from the alignments of an [[ APT ]] and an << MPT >> with a canonical transcription . | 234 | 0 |
235 | We trained classifiers on the speech processes extracted from the << alignments >> of an APT and an MPT with a [[ canonical transcription ]] . | 235 | 3 |
236 | We tested whether the [[ classifiers ]] were equally good at verifying whether << unknown transcriptions >> represent read speech or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | 236 | 3 |
237 | We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent << read speech >> or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | 237 | 3 |
238 | We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent read speech or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | 238 | 3 |
239 | We tested whether the classifiers were equally good at verifying whether unknown transcriptions represent [[ read speech ]] or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | 239 | 0 |
240 | Our results not only show that similar distinguishing speech processes were identified ; our [[ APT-based classifier ]] yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer classification features . | 240 | 5 |
241 | Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better [[ classification accuracy ]] than the MPT-based classifier whilst using fewer classification features . | 241 | 6 |
242 | Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better [[ classification accuracy ]] than the << MPT-based classifier >> whilst using fewer classification features . | 242 | 6 |
243 | Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better classification accuracy than the MPT-based classifier whilst using fewer [[ classification features ]] . | 243 | 3 |
244 | Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer [[ classification features ]] . | 244 | 3 |
245 | Machine reading is a relatively new field that features [[ computer programs ]] designed to read << flowing text >> and extract fact assertions expressed by the narrative content . | 245 | 3 |
246 | Machine reading is a relatively new field that features [[ computer programs ]] designed to read flowing text and extract << fact assertions >> expressed by the narrative content . | 246 | 3 |
247 | Machine reading is a relatively new field that features computer programs designed to read flowing text and extract [[ fact assertions ]] expressed by the << narrative content >> . | 247 | 1 |
248 | This << task >> involves two core technologies : [[ natural language processing -LRB- NLP -RRB- ]] and information extraction -LRB- IE -RRB- . | 248 | 4 |
249 | This << task >> involves two core technologies : natural language processing -LRB- NLP -RRB- and [[ information extraction -LRB- IE -RRB- ]] . | 249 | 4 |
250 | In this paper we describe a << machine reading system >> that we have developed within a [[ cognitive architecture ]] . | 250 | 1 |
251 | We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from [[ cognitive semantics ]] and << construction grammar >> , plus tools from prior NLP and IE research . | 251 | 0 |
252 | We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from cognitive semantics and construction grammar , plus tools from [[ prior NLP ]] and << IE research >> . | 252 | 0 |
253 | The result is a [[ system ]] that is capable of reading and interpreting complex and fairly << idiosyncratic texts >> in the family history domain . | 253 | 3 |
254 | The result is a system that is capable of reading and interpreting complex and fairly << idiosyncratic texts >> in the [[ family history domain ]] . | 254 | 1 |
255 | We present two [[ methods ]] for capturing << nonstationary chaos >> , then present a few examples including biological signals , ocean waves and traffic flow . | 255 | 3 |
256 | We present two methods for capturing nonstationary chaos , then present a few << examples >> including [[ biological signals ]] , ocean waves and traffic flow . | 256 | 2 |
257 | We present two methods for capturing nonstationary chaos , then present a few examples including [[ biological signals ]] , << ocean waves >> and traffic flow . | 257 | 0 |
258 | We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , [[ ocean waves ]] and traffic flow . | 258 | 2 |
259 | We present two methods for capturing nonstationary chaos , then present a few examples including biological signals , [[ ocean waves ]] and << traffic flow >> . | 259 | 0 |
260 | We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , ocean waves and [[ traffic flow ]] . | 260 | 2 |
261 | This paper presents a [[ formal analysis ]] for a large class of words called << alternative markers >> , which includes other -LRB- than -RRB- , such -LRB- as -RRB- , and besides . | 261 | 3 |
262 | These [[ words ]] appear frequently enough in << dialog >> to warrant serious attention , yet present natural language search engines perform poorly on queries containing them . | 262 | 4 |
263 | I show that the performance of a << search engine >> can be improved dramatically by incorporating an [[ approximation of the formal analysis ]] that is compatible with the search engine 's operational semantics . | 263 | 4 |
264 | I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the << search engine >> 's [[ operational semantics ]] . | 264 | 4 |
265 | The value of this approach is that as the [[ operational semantics ]] of << natural language applications >> improve , even larger improvements are possible . | 265 | 4 |
266 | We find that simple << interpolation methods >> , like [[ log-linear and linear interpolation ]] , improve the performance but fall short of the performance of an oracle . | 266 | 2 |
267 | Actually , the oracle acts like a << dynamic combiner >> with [[ hard decisions ]] using the reference . | 267 | 1 |
268 | We suggest a << method >> that mimics the behavior of the oracle using a [[ neural network ]] or a decision tree . | 268 | 3 |
269 | We suggest a << method >> that mimics the behavior of the oracle using a neural network or a [[ decision tree ]] . | 269 | 3 |
270 | We suggest a method that mimics the behavior of the oracle using a << neural network >> or a [[ decision tree ]] . | 270 | 0 |
271 | The [[ method ]] amounts to tagging << LMs >> with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence . | 271 | 3 |
272 | The << method >> amounts to tagging LMs with [[ confidence measures ]] and picking the best hypothesis corresponding to the LM with the best confidence . | 272 | 3 |
273 | We describe a new [[ method ]] for the representation of << NLP structures >> within reranking approaches . | 273 | 3 |
274 | We describe a new method for the representation of << NLP structures >> within [[ reranking approaches ]] . | 274 | 1 |
275 | We make use of a << conditional log-linear model >> , with [[ hidden variables ]] representing the assignment of lexical items to word clusters or word senses . | 275 | 3 |
276 | We make use of a conditional log-linear model , with hidden variables representing the assignment of lexical items to [[ word clusters ]] or << word senses >> . | 276 | 0 |
277 | The << model >> learns to automatically make these assignments based on a [[ discriminative training criterion ]] . | 277 | 3 |
278 | Training and decoding with the model requires summing over an exponential number of hidden-variable assignments : the required << summations >> can be computed efficiently and exactly using [[ dynamic programming ]] . | 278 | 3 |
279 | As a case study , we apply the [[ model ]] to << parse reranking >> . | 279 | 3 |
280 | The [[ model ]] gives an F-measure improvement of ~ 1.25 % beyond the << base parser >> , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker . | 280 | 5 |
281 | The << model >> gives an [[ F-measure ]] improvement of ~ 1.25 % beyond the base parser , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker . | 281 | 6 |
282 | The model gives an F-measure improvement of ~ 1.25 % beyond the [[ base parser ]] , and an ~ 0.25 % improvement beyond << Collins -LRB- 2000 -RRB- reranker >> . | 282 | 5 |
283 | Although our experiments are focused on << parsing >> , the [[ techniques ]] described generalize naturally to NLP structures other than parse trees . | 283 | 3 |
284 | Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to << NLP structures >> other than parse trees . | 284 | 3 |
285 | Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to NLP structures other than << parse trees >> . | 285 | 3 |
286 | Although our experiments are focused on parsing , the techniques described generalize naturally to << NLP structures >> other than [[ parse trees ]] . | 286 | 0 |
287 | This paper presents an [[ algorithm ]] for << learning the time-varying shape of a non-rigid 3D object >> from uncalibrated 2D tracking data . | 287 | 3 |
288 | We constrain the problem by assuming that the << object shape >> at each time instant is drawn from a [[ Gaussian distribution ]] . | 288 | 3 |
289 | Based on this assumption , the [[ algorithm ]] simultaneously estimates << 3D shape and motion >> for each time frame , learns the parameters of the Gaussian , and robustly fills-in missing data points . | 289 | 3 |
290 | We then extend the [[ algorithm ]] to model << temporal smoothness in object shape >> , thus allowing it to handle severe cases of missing data . | 290 | 3 |
291 | We then extend the algorithm to model temporal smoothness in object shape , thus allowing [[ it ]] to handle severe cases of << missing data >> . | 291 | 3 |
292 | [[ Automatic summarization ]] and << information extraction >> are two important Internet services . | 292 | 0 |
293 | [[ MUC ]] and << SUMMAC >> play their appropriate roles in the next generation Internet . | 293 | 0 |
294 | This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for << summary generation >> under two tasks initiated by SUMMAC-1 . | 294 | 3 |
295 | This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for summary generation under two << tasks >> initiated by SUMMAC-1 . | 295 | 3 |
296 | This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two [[ tasks ]] initiated by << SUMMAC-1 >> . | 296 | 4 |
297 | For << categorization task >> , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct generic , indicative summaries . | 297 | 3 |
298 | For categorization task , [[ positive feature vectors ]] and << negative feature vectors >> are used cooperatively to construct generic , indicative summaries . | 298 | 0 |
299 | For categorization task , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct << generic , indicative summaries >> . | 299 | 3 |
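Below is a minimal parsing sketch for the rows above, assuming each row follows the `row_index | text | id | label` layout shown in the header, with `[[ ... ]]` marking one entity span and `<< ... >>` marking the other. The function and field names are illustrative only and do not come from the dataset itself; the label ids (0–6) are left as integers because the table does not define their meaning.

```python
# Sketch: parse one pipe-delimited row and pull out the two marked entity
# spans. Assumes the "row_index | text | id | label" layout seen above;
# all names here are illustrative, not part of the source dataset.
import re

def parse_row(line: str) -> dict:
    """Split one pipe-delimited row into its four fields."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    row_index, text, example_id, label = cells
    # Extract the two marked entity spans from the sentence.
    arg1 = re.search(r"\[\[(.+?)\]\]", text)
    arg2 = re.search(r"<<(.+?)>>", text)
    return {
        "row_index": int(row_index),
        "text": text,
        "id": int(example_id),
        "label": int(label),  # meaning of 0-6 is not given in the table
        "arg1": arg1.group(1).strip() if arg1 else None,
        "arg2": arg2.group(1).strip() if arg2 else None,
    }

# Example with a row copied from the table:
row = ("209 | We present a [[ syntax-based constraint ]] for "
       "<< word alignment >> , known as the cohesion constraint . | 209 | 3 |")
print(parse_row(row))
# -> arg1='syntax-based constraint', arg2='word alignment', label=3
```

Note that the sentence text itself never contains a literal `|`, so a plain `split("|")` is sufficient here; a CSV/TSV export of the same data would need proper quoting instead.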