Dataset columns (from the viewer summary):

Unnamed: 0 | int64 | 0 to 3.22k (duplicates id in every row shown)
text | string | lengths 49 to 577; each sentence marks one entity span with << ... >> and a second with [[ ... ]]
id | int64 | 0 to 3.22k
label | int64 | 0 to 6
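As a quick orientation, the sketch below loads a table with this schema using pandas. It is a minimal sketch, not part of the dataset page: the filename relations.csv is hypothetical, standing in for an export of the rows listed next.

```python
import pandas as pd

# Minimal loading sketch. "relations.csv" is a hypothetical export of the
# rows below; the dataset page itself does not name a file.
df = pd.read_csv("relations.csv")

# Expect the four columns described above: an unnamed integer index, the
# marked-up sentence, the row id, and an integer relation label in [0, 6].
print(df.dtypes)
print(df["label"].value_counts().sort_index())
```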
Rows 100 to 199, one record per line, in the column order Unnamed: 0 | text | id | label:

100 | The << compact description of a video sequence >> through a single image map and a [[ dominant motion ]] has applications in several domains , including video browsing and retrieval , compression , mosaicing , and visual summarization . | 100 | 3
101 | The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including [[ video browsing and retrieval ]] , compression , mosaicing , and visual summarization . | 101 | 2
102 | The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including [[ video browsing and retrieval ]] , << compression >> , mosaicing , and visual summarization . | 102 | 0
103 | The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , [[ compression ]] , mosaicing , and visual summarization . | 103 | 2
104 | The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including video browsing and retrieval , [[ compression ]] , << mosaicing >> , and visual summarization . | 104 | 0
105 | The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , compression , [[ mosaicing ]] , and visual summarization . | 105 | 2
106 | The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including video browsing and retrieval , compression , [[ mosaicing ]] , and << visual summarization >> . | 106 | 0
107 | Building such a representation requires the capability to register all the frames with respect to the dominant object in the scene , a << task >> which has been , in the past , addressed through temporally [[ localized motion estimates ]] . | 107 | 3
108 | To avoid this oscillation , we augment the << motion model >> with a [[ generic temporal constraint ]] which increases the robustness against competing interpretations , leading to more meaningful content summarization . | 108 | 3
109 | To avoid this oscillation , we augment the motion model with a [[ generic temporal constraint ]] which increases the robustness against competing interpretations , leading to more meaningful << content summarization >> . | 109 | 3
110 | To avoid this oscillation , we augment the motion model with a << generic temporal constraint >> which increases the [[ robustness ]] against competing interpretations , leading to more meaningful content summarization . | 110 | 6
111 | In cross-domain learning , there is a more challenging problem that the << domain divergence >> involves more than one [[ dominant factors ]] , e.g. , different viewpoints , various resolutions and changing illuminations . | 111 | 4
112 | In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one << dominant factors >> , e.g. , different [[ viewpoints ]] , various resolutions and changing illuminations . | 112 | 2
113 | In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one dominant factors , e.g. , different [[ viewpoints ]] , various << resolutions >> and changing illuminations . | 113 | 0
114 | In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one << dominant factors >> , e.g. , different viewpoints , various [[ resolutions ]] and changing illuminations . | 114 | 2
115 | In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one dominant factors , e.g. , different viewpoints , various [[ resolutions ]] and changing << illuminations >> . | 115 | 0
116 | Fortunately , an [[ intermediate domain ]] could often be found to build a bridge across them to facilitate the << learning problem >> . | 116 | 3
117 | In this paper , we propose a [[ Coupled Marginalized Denoising Auto-encoders framework ]] to address the << cross-domain problem >> . | 117 | 3
118 | Specifically , we design two << marginalized denoising auto-encoders >> , [[ one ]] for the target and the other for source as well as the intermediate one . | 118 | 2
119 | Specifically , we design two marginalized denoising auto-encoders , [[ one ]] for the target and the << other >> for source as well as the intermediate one . | 119 | 0
120 | Specifically , we design two << marginalized denoising auto-encoders >> , one for the target and the [[ other ]] for source as well as the intermediate one . | 120 | 2
121 | To better couple the two << denoising auto-encoders learning >> , we incorporate a [[ feature mapping ]] , which tends to transfer knowledge between the intermediate domain and the target one . | 121 | 4
122 | To better couple the two denoising auto-encoders learning , we incorporate a [[ feature mapping ]] , which tends to transfer knowledge between the << intermediate domain >> and the target one . | 122 | 3
123 | Furthermore , the << maximum margin criterion >> , e.g. , [[ intra-class compactness ]] and inter-class penalty , on the output layer is imposed to seek more discriminative features across different domains . | 123 | 2
124 | Furthermore , the maximum margin criterion , e.g. , [[ intra-class compactness ]] and << inter-class penalty >> , on the output layer is imposed to seek more discriminative features across different domains . | 124 | 0
125 | Furthermore , the << maximum margin criterion >> , e.g. , intra-class compactness and [[ inter-class penalty ]] , on the output layer is imposed to seek more discriminative features across different domains . | 125 | 2
126 | Extensive experiments on two [[ tasks ]] have demonstrated the superiority of our << method >> over the state-of-the-art methods . | 126 | 6
127 | Extensive experiments on two tasks have demonstrated the superiority of our [[ method ]] over the << state-of-the-art methods >> . | 127 | 5
128 | Basically , a set of << age-group specific dictionaries >> are learned , where the [[ dictionary bases ]] corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a linear combination of these patterns expresses a particular personalized aging process . | 128 | 4
129 | Basically , a set of age-group specific dictionaries are learned , where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a [[ linear combination ]] of these patterns expresses a particular << personalized aging process >> . | 129 | 3
130 | Basically , a set of age-group specific dictionaries are learned , where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a << linear combination >> of these [[ patterns ]] expresses a particular personalized aging process . | 130 | 3
131 | First , beyond the aging dictionaries , each subject may have extra << personalized facial characteristics >> , e.g. [[ mole ]] , which are invariant in the aging process . | 131 | 2
132 | Thus a [[ personality-aware coupled reconstruction loss ]] is utilized to learn the << dictionaries >> based on face pairs from neighboring age groups . | 132 | 3
133 | Extensive experiments well demonstrate the advantages of our proposed [[ solution ]] over other << state-of-the-arts >> in term of personalized aging progression , as well as the performance gain for cross-age face verification by synthesizing aging faces . | 133 | 5
134 | Extensive experiments well demonstrate the advantages of our proposed [[ solution ]] over other state-of-the-arts in term of << personalized aging progression >> , as well as the performance gain for cross-age face verification by synthesizing aging faces . | 134 | 3
135 | Extensive experiments well demonstrate the advantages of our proposed solution over other [[ state-of-the-arts ]] in term of << personalized aging progression >> , as well as the performance gain for cross-age face verification by synthesizing aging faces . | 135 | 3
136 | Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression , as well as the performance gain for << cross-age face verification >> by [[ synthesizing aging faces ]] . | 136 | 3
137 | We propose a draft scheme of the [[ model ]] formalizing the << structure of communicative context >> in dialogue interaction . | 137 | 3
138 | We propose a draft scheme of the model formalizing the << structure of communicative context >> in [[ dialogue interaction ]] . | 138 | 1
139 | Visitors who browse the web from wireless PDAs , cell phones , and pagers are frequently stymied by [[ web interfaces ]] optimized for << desktop PCs >> . | 139 | 3
140 | In this paper we develop an [[ algorithm ]] , MINPATH , that automatically improves << wireless web navigation >> by suggesting useful shortcut links in real time . | 140 | 3
141 | In this paper we develop an [[ algorithm ]] , MINPATH , that automatically improves << wireless web navigation >> by suggesting useful shortcut links in real time . | 141 | 3
142 | << MINPATH >> finds shortcuts by using a learned [[ model ]] of web visitor behavior to estimate the savings of shortcut links , and suggests only the few best links . | 142 | 3
143 | MINPATH finds shortcuts by using a learned [[ model ]] of << web visitor behavior >> to estimate the savings of shortcut links , and suggests only the few best links . | 143 | 3
144 | MINPATH finds shortcuts by using a learned [[ model ]] of web visitor behavior to estimate the << savings of shortcut links >> , and suggests only the few best links . | 144 | 3
145 | We explore a variety of << predictive models >> , including [[ Naïve Bayes mixture models ]] and mixtures of Markov models , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort . | 145 | 2
146 | We explore a variety of predictive models , including [[ Naïve Bayes mixture models ]] and << mixtures of Markov models >> , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort . | 146 | 0
147 | We explore a variety of << predictive models >> , including Naïve Bayes mixture models and [[ mixtures of Markov models ]] , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort . | 147 | 2
148 | This paper describes a particular [[ approach ]] to << parsing >> that utilizes recent advances in unification-based parsing and in classification-based knowledge representation . | 148 | 3
149 | This paper describes a particular << approach >> to parsing that utilizes recent advances in [[ unification-based parsing ]] and in classification-based knowledge representation . | 149 | 3
150 | This paper describes a particular << approach >> to parsing that utilizes recent advances in unification-based parsing and in [[ classification-based knowledge representation ]] . | 150 | 3
151 | This paper describes a particular approach to parsing that utilizes recent advances in << unification-based parsing >> and in [[ classification-based knowledge representation ]] . | 151 | 0
152 | As [[ unification-based grammatical frameworks ]] are extended to handle richer descriptions of << linguistic information >> , they begin to share many of the properties that have been developed in KL-ONE-like knowledge representation systems . | 152 | 3
153 | As unification-based grammatical frameworks are extended to handle richer descriptions of linguistic information , << they >> begin to share many of the properties that have been developed in [[ KL-ONE-like knowledge representation systems ]] . | 153 | 3
154 | This commonality suggests that some of the [[ classification-based representation techniques ]] can be applied to << unification-based linguistic descriptions >> . | 154 | 3
155 | This merging supports the integration of [[ semantic and syntactic information ]] into the same << system >> , simultaneously subject to the same types of processes , in an efficient manner . | 155 | 3
156 | The use of a [[ KL-ONE style representation ]] for << parsing >> and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement . | 156 | 3
157 | The use of a [[ KL-ONE style representation ]] for parsing and << semantic interpretation >> was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement . | 157 | 3
158 | The use of a KL-ONE style representation for [[ parsing ]] and << semantic interpretation >> was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement . | 158 | 0
159 | The use of a << KL-ONE style representation >> for parsing and semantic interpretation was first explored in the [[ PSI-KLONE system ]] -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement . | 159 | 3
160 | The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which << parsing >> is characterized as an inference process called [[ incremental description refinement ]] . | 160 | 3
161 | The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an << inference process >> called [[ incremental description refinement ]] . | 161 | 2
162 | In this paper we discuss a proposed [[ user knowledge modeling architecture ]] for the << ICICLE system >> , a language tutoring application for deaf learners of written English . | 162 | 3
163 | In this paper we discuss a proposed user knowledge modeling architecture for the [[ ICICLE system ]] , a << language tutoring application >> for deaf learners of written English . | 163 | 2
164 | In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system , a [[ language tutoring application ]] for << deaf learners >> of written English . | 164 | 3
165 | In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system , a << language tutoring application >> for deaf learners of [[ written English ]] . | 165 | 3
166 | The [[ model ]] will represent the language proficiency of the user and is designed to be referenced during both << writing analysis >> and feedback production . | 166 | 3
167 | The [[ model ]] will represent the language proficiency of the user and is designed to be referenced during both writing analysis and << feedback production >> . | 167 | 3
168 | The model will represent the language proficiency of the user and is designed to be referenced during both [[ writing analysis ]] and << feedback production >> . | 168 | 0
169 | We motivate our << model design >> by citing relevant research on [[ second language and cognitive skill acquisition ]] , and briefly discuss preliminary empirical evidence supporting the design . | 169 | 3
170 | We conclude by showing how our [[ design ]] can provide a rich and robust information base to a << language assessment / correction application >> by modeling user proficiency at a high level of granularity and specificity . | 170 | 3
171 | We conclude by showing how our [[ design ]] can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of granularity and specificity . | 171 | 3
172 | We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of [[ granularity ]] and specificity . | 172 | 6
173 | We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling user proficiency at a high level of [[ granularity ]] and << specificity >> . | 173 | 0
174 | We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of granularity and [[ specificity ]] . | 174 | 6
175 | [[ Constraint propagation ]] is one of the key techniques in << constraint programming >> , and a large body of work has built up around it . | 175 | 4
176 | In this paper we present << SHORTSTR2 >> , a development of the [[ Simple Tabular Reduction algorithm STR2 + ]] . | 176 | 3
177 | We show that [[ SHORTSTR2 ]] is complementary to the existing algorithms << SHORTGAC >> and HAGGISGAC that exploit short supports , while being much simpler . | 177 | 5
178 | We show that [[ SHORTSTR2 ]] is complementary to the existing algorithms SHORTGAC and << HAGGISGAC >> that exploit short supports , while being much simpler . | 178 | 5
179 | We show that SHORTSTR2 is complementary to the existing algorithms [[ SHORTGAC ]] and << HAGGISGAC >> that exploit short supports , while being much simpler . | 179 | 0
180 | When a constraint is amenable to short supports , the [[ short support set ]] can be exponentially smaller than the << full-length support set >> . | 180 | 5
181 | We also show that [[ SHORTSTR2 ]] can be combined with a simple algorithm to identify << short supports >> from full-length supports , to provide a superior drop-in replacement for STR2 + . | 181 | 3
182 | We also show that [[ SHORTSTR2 ]] can be combined with a simple algorithm to identify short supports from full-length supports , to provide a superior << drop-in replacement >> for STR2 + . | 182 | 3
183 | We also show that << SHORTSTR2 >> can be combined with a simple [[ algorithm ]] to identify short supports from full-length supports , to provide a superior drop-in replacement for STR2 + . | 183 | 0
184 | We also show that SHORTSTR2 can be combined with a simple [[ algorithm ]] to identify << short supports >> from full-length supports , to provide a superior drop-in replacement for STR2 + . | 184 | 3
185 | We also show that << SHORTSTR2 >> can be combined with a simple algorithm to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + . | 185 | 3
186 | We also show that << SHORTSTR2 >> can be combined with a simple algorithm to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + . | 186 | 3
187 | We also show that SHORTSTR2 can be combined with a simple << algorithm >> to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + . | 187 | 3
188 | We also show that SHORTSTR2 can be combined with a simple << algorithm >> to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + . | 188 | 3
189 | We also show that SHORTSTR2 can be combined with a simple algorithm to identify short supports from full-length supports , to provide a superior [[ drop-in replacement ]] for << STR2 + >> . | 189 | 3
190 | We propose a [[ detection method ]] for << orthographic variants >> caused by transliteration in a large corpus . | 190 | 3
191 | The << method >> employs two [[ similarities ]] . | 191 | 3
192 | One is << string similarity >> based on [[ edit distance ]] . | 192 | 3
193 | The other is << contextual similarity >> by a [[ vector space model ]] . | 193 | 3
194 | Experimental results show that the << method >> performed a 0.889 [[ F-measure ]] in an open test . | 194 | 6
195 | [[ Uncertainty handling ]] plays an important role during << shape tracking >> . | 195 | 3
196 | We have recently shown that the [[ fusion of measurement information with system dynamics and shape priors ]] greatly improves the << tracking >> performance for very noisy images such as ultrasound sequences -LSB- 22 -RSB- . | 196 | 3
197 | We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves the [[ tracking ]] performance for very << noisy images >> such as ultrasound sequences -LSB- 22 -RSB- . | 197 | 3
198 | We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves the tracking performance for very << noisy images >> such as [[ ultrasound sequences ]] -LSB- 22 -RSB- . | 198 | 2
199 | Nevertheless , this << approach >> required [[ user initialization ]] of the tracking process . | 199 | 3
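Each text field follows one visible convention: a first entity span wrapped in << ... >> and a second wrapped in [[ ... ]]. The sketch below is a minimal parser built only on that convention; the mapping from the integer label (0 to 6) to a relation name is not given on this page, so the label is passed through unchanged.

```python
import re

def parse_row(text: str, label: int) -> dict:
    """Extract the two marked entity spans from one text field.

    Assumes only what the rows above show: exactly one << ... >> span and
    one [[ ... ]] span per sentence. The label-to-relation-name mapping is
    not given on this page, so the integer label is kept opaque.
    """
    angle = re.search(r"<<\s*(.*?)\s*>>", text)
    square = re.search(r"\[\[\s*(.*?)\s*\]\]", text)
    return {
        "entity_a": angle.group(1) if angle else None,
        "entity_b": square.group(1) if square else None,
        "label": label,
    }

# Row id 192 from the table above.
print(parse_row("One is << string similarity >> based on [[ edit distance ]] .", 3))
# {'entity_a': 'string similarity', 'entity_b': 'edit distance', 'label': 3}
```

A plain regex is adequate here because the markers never nest in the rows shown; if spans could overlap or nest, a token-level parser would be the safer choice.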