diff --git "a/train.csv" "b/train.csv" new file mode 100644--- /dev/null +++ "b/train.csv" @@ -0,0 +1,3220 @@ +,text,id,label +0,"The agreement in question involves number in [[ nouns ]] and << reflexive pronouns >> and is syntactic rather than semantic in nature because grammatical number in English , like grammatical gender in languages such as French , is partly arbitrary .",0,0 +1,"The agreement in question involves number in nouns and reflexive pronouns and is syntactic rather than semantic in nature because grammatical number in English , like [[ grammatical gender ]] in << languages >> such as French , is partly arbitrary .",1,1 +2,"The agreement in question involves number in nouns and reflexive pronouns and is syntactic rather than semantic in nature because grammatical number in English , like grammatical gender in << languages >> such as [[ French ]] , is partly arbitrary .",2,2 +3,"In this paper , a novel [[ method ]] to learn the << intrinsic object structure >> for robust visual tracking is proposed .",3,3 +4,"In this paper , a novel method to learn the [[ intrinsic object structure ]] for << robust visual tracking >> is proposed .",4,3 +5,The basic assumption is that the << parameterized object state >> lies on a [[ low dimensional manifold ]] and can be learned from training data .,5,1 +6,"Based on this assumption , firstly we derived the [[ dimensionality reduction and density estimation algorithm ]] for << unsupervised learning of object intrinsic representation >> , the obtained non-rigid part of object state reduces even to 2 dimensions .",6,3 +7,Secondly the << dynamical model >> is derived and trained based on this [[ intrinsic representation ]] .,7,3 +8,Thirdly the learned [[ intrinsic object structure ]] is integrated into a << particle-filter style tracker >> .,8,4 +9,We will show that this intrinsic object representation has some interesting properties and based on which the newly derived [[ dynamical model ]] makes << particle-filter style tracker >> more robust and reliable .,9,3 +10,Experiments show that the learned [[ tracker ]] performs much better than existing << trackers >> on the tracking of complex non-rigid motions such as fish twisting with self-occlusion and large inter-frame lip motion .,10,5 +11,Experiments show that the learned [[ tracker ]] performs much better than existing trackers on the << tracking of complex non-rigid motions >> such as fish twisting with self-occlusion and large inter-frame lip motion .,11,3 +12,Experiments show that the learned tracker performs much better than existing [[ trackers ]] on the << tracking of complex non-rigid motions >> such as fish twisting with self-occlusion and large inter-frame lip motion .,12,3 +13,Experiments show that the learned tracker performs much better than existing trackers on the tracking of << complex non-rigid motions >> such as [[ fish twisting ]] with self-occlusion and large inter-frame lip motion .,13,2 +14,Experiments show that the learned tracker performs much better than existing trackers on the tracking of complex non-rigid motions such as << fish twisting >> with [[ self-occlusion ]] and large inter-frame lip motion .,14,1 +15,Experiments show that the learned tracker performs much better than existing trackers on the tracking of complex non-rigid motions such as fish twisting with [[ self-occlusion ]] and large << inter-frame lip motion >> .,15,0 +16,Experiments show that the learned tracker performs much better than existing trackers on the tracking of complex non-rigid motions 
such as << fish twisting >> with self-occlusion and large [[ inter-frame lip motion ]] .,16,1 +17,The proposed [[ method ]] also has the potential to solve other type of << tracking problems >> .,17,3 +18,"In this paper , we present a [[ digital signal processor -LRB- DSP -RRB- implementation ]] of << real-time statistical voice conversion -LRB- VC -RRB- >> for silent speech enhancement and electrolaryngeal speech enhancement .",18,3 +19,"In this paper , we present a digital signal processor -LRB- DSP -RRB- implementation of [[ real-time statistical voice conversion -LRB- VC -RRB- ]] for << silent speech enhancement >> and electrolaryngeal speech enhancement .",19,3 +20,"In this paper , we present a digital signal processor -LRB- DSP -RRB- implementation of [[ real-time statistical voice conversion -LRB- VC -RRB- ]] for silent speech enhancement and << electrolaryngeal speech enhancement >> .",20,3 +21,"In this paper , we present a digital signal processor -LRB- DSP -RRB- implementation of real-time statistical voice conversion -LRB- VC -RRB- for [[ silent speech enhancement ]] and << electrolaryngeal speech enhancement >> .",21,0 +22,[[ Electrolaryngeal speech ]] is one of the typical types of << alaryngeal speech >> produced by an alternative speaking method for laryngectomees .,22,2 +23,Electrolaryngeal speech is one of the typical types of << alaryngeal speech >> produced by an alternative [[ speaking method ]] for laryngectomees .,23,3 +24,Electrolaryngeal speech is one of the typical types of alaryngeal speech produced by an alternative [[ speaking method ]] for << laryngectomees >> .,24,3 +25,"However , the [[ sound quality ]] of << NAM and electrolaryngeal speech >> suffers from lack of naturalness .",25,6 +26,"VC has proven to be one of the promising approaches to address this problem , and << it >> has been successfully implemented on [[ devices ]] with sufficient computational resources .",26,3 +27,"VC has proven to be one of the promising approaches to address this problem , and it has been successfully implemented on << devices >> with [[ sufficient computational resources ]] .",27,1 +28,An implementation on << devices >> that are highly portable but have [[ limited computational resources ]] would greatly contribute to its practical use .,28,1 +29,In this paper we further implement << real-time VC >> on a [[ DSP ]] .,29,3 +30,"To implement the two << speech enhancement systems >> based on [[ real-time VC ]] , one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several methods for reducing computational cost while preserving conversion accuracy .",30,3 +31,"To implement the two << speech enhancement systems >> based on real-time VC , [[ one ]] from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several methods for reducing computational cost while preserving conversion accuracy .",31,2 +32,"To implement the two speech enhancement systems based on real-time VC , [[ one ]] from NAM to a whispered voice and the << other >> from electrolaryngeal speech to a natural voice , we propose several methods for reducing computational cost while preserving conversion accuracy .",32,0 +33,"To implement the two << speech enhancement systems >> based on real-time VC , one from NAM to a whispered voice and the [[ other ]] from electrolaryngeal speech to a natural voice , we propose several methods for reducing computational cost while preserving conversion accuracy .",33,2 +34,"To 
implement the two speech enhancement systems based on real-time VC , one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several << methods >> for reducing [[ computational cost ]] while preserving conversion accuracy .",34,6 +35,"To implement the two speech enhancement systems based on real-time VC , one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several methods for reducing [[ computational cost ]] while preserving << conversion accuracy >> .",35,0 +36,"To implement the two speech enhancement systems based on real-time VC , one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several << methods >> for reducing computational cost while preserving [[ conversion accuracy ]] .",36,6 +37,We conduct experimental evaluations and show that << real-time VC >> is capable of running on a [[ DSP ]] with little degradation .,37,3 +38,We propose a [[ method ]] that automatically generates << paraphrase >> sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST .,38,3 +39,We propose a method that automatically generates [[ paraphrase ]] sets from seed sentences to be used as reference sets in objective << machine translation evaluation measures >> like BLEU and NIST .,39,3 +40,We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective << machine translation evaluation measures >> like [[ BLEU ]] and NIST .,40,2 +41,We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like [[ BLEU ]] and << NIST >> .,41,0 +42,We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective << machine translation evaluation measures >> like BLEU and [[ NIST ]] .,42,2 +43,"We measured the quality of the paraphrases produced in an experiment , i.e. , -LRB- i -RRB- their << grammaticality >> : at least 99 % correct sentences ; -LRB- ii -RRB- their [[ equivalence in meaning ]] : at least 96 % correct paraphrases either by meaning equivalence or entailment ; and , -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets .",43,0 +44,"We measured the quality of the paraphrases produced in an experiment , i.e. , -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their equivalence in meaning : at least 96 % correct << paraphrases >> either by [[ meaning equivalence ]] or entailment ; and , -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets .",44,3 +45,"We measured the quality of the paraphrases produced in an experiment , i.e. , -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their equivalence in meaning : at least 96 % correct paraphrases either by [[ meaning equivalence ]] or << entailment >> ; and , -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets .",45,0 +46,"We measured the quality of the paraphrases produced in an experiment , i.e. 
, -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their equivalence in meaning : at least 96 % correct << paraphrases >> either by meaning equivalence or [[ entailment ]] ; and , -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets .",46,3 +47,"We measured the quality of the paraphrases produced in an experiment , i.e. , -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their << equivalence in meaning >> : at least 96 % correct paraphrases either by meaning equivalence or entailment ; and , -LRB- iii -RRB- the amount of [[ internal lexical and syntactical variation ]] in a set of paraphrases : slightly superior to that of hand-produced sets .",47,0 +48,"We measured the quality of the paraphrases produced in an experiment , i.e. , -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their equivalence in meaning : at least 96 % correct paraphrases either by meaning equivalence or entailment ; and , -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of [[ paraphrases ]] : slightly superior to that of << hand-produced sets >> .",48,5 +49,The << paraphrase >> sets produced by this [[ method ]] thus seem adequate as reference sets to be used for MT evaluation .,49,3 +50,[[ Graph unification ]] remains the most expensive part of << unification-based grammar parsing >> .,50,4 +51,We focus on one [[ speed-up element ]] in the design of << unification algorithms >> : avoidance of copying of unmodified subgraphs .,51,4 +52,We propose a << method >> of attaining such a design through a method of [[ structure-sharing ]] which avoids log -LRB- d -RRB- overheads often associated with structure-sharing of graphs without any use of costly dependency pointers .,52,3 +53,The proposed [[ scheme ]] eliminates redundant copying while maintaining the quasi-destructive scheme 's ability to avoid over copying and early copying combined with its ability to handle << cyclic structures >> without algorithmic additions .,53,3 +54,The proposed << scheme >> eliminates redundant copying while maintaining the [[ quasi-destructive scheme 's ability ]] to avoid over copying and early copying combined with its ability to handle cyclic structures without algorithmic additions .,54,1 +55,The proposed scheme eliminates redundant copying while maintaining the quasi-destructive scheme 's ability to avoid [[ over copying ]] and << early copying >> combined with its ability to handle cyclic structures without algorithmic additions .,55,0 +56,We describe a novel technique and implemented [[ system ]] for constructing a << subcategorization dictionary >> from textual corpora .,56,3 +57,We describe a novel technique and implemented << system >> for constructing a subcategorization dictionary from [[ textual corpora ]] .,57,3 +58,We also demonstrate that a << subcategorization dictionary >> built with the [[ system ]] improves the accuracy of a parser by an appreciable amount,58,3 +59,We also demonstrate that a subcategorization dictionary built with the system improves the [[ accuracy ]] of a << parser >> by an appreciable amount,59,6 +60,We also demonstrate that a << subcategorization dictionary >> built with the system improves the accuracy of a [[ parser ]] by an appreciable amount,60,6 +61,"A number of powerful << registration criteria >> have been developed in the last decade , most prominently the 
criterion of [[ maximum mutual information ]] .",61,2 +62,"Although this criterion provides for good registration results in many applications , << it >> remains a purely [[ low-level criterion ]] .",62,1 +63,"In this paper , we will develop a [[ Bayesian framework ]] that allows to impose statistically learned prior knowledge about the joint intensity distribution into << image registration methods >> .",63,3 +64,"In this paper , we will develop a Bayesian framework that allows to impose [[ statistically learned prior knowledge ]] about the joint intensity distribution into << image registration methods >> .",64,3 +65,"In this paper , we will develop a Bayesian framework that allows to impose << statistically learned prior knowledge >> about the [[ joint intensity distribution ]] into image registration methods .",65,1 +66,The << prior >> is given by a [[ kernel density estimate ]] on the space of joint intensity distributions computed from a representative set of pre-registered image pairs .,66,3 +67,The prior is given by a [[ kernel density estimate ]] on the space of << joint intensity distributions >> computed from a representative set of pre-registered image pairs .,67,3 +68,The prior is given by a kernel density estimate on the space of << joint intensity distributions >> computed from a representative set of [[ pre-registered image pairs ]] .,68,3 +69,Experimental results demonstrate that the resulting [[ registration process ]] is more robust to << missing low-level information >> as it favors intensity correspondences statistically consistent with the learned intensity distributions .,69,3 +70,Experimental results demonstrate that the resulting registration process is more robust to missing low-level information as [[ it ]] favors << intensity correspondences >> statistically consistent with the learned intensity distributions .,70,3 +71,"We present a [[ method ]] for << synthesizing complex , photo-realistic facade images >> , from a single example .",71,3 +72,"After parsing the example image into its << semantic components >> , a [[ tiling ]] for it is generated .",72,3 +73,"Novel tilings can then be created , yielding << facade textures >> with different dimensions or with [[ occluded parts inpainted ]] .",73,1 +74,"A [[ genetic algorithm ]] guides the novel << facades >> as well as inpainted parts to be consistent with the example , both in terms of their overall structure and their detailed textures .",74,3 +75,"A [[ genetic algorithm ]] guides the novel facades as well as << inpainted parts >> to be consistent with the example , both in terms of their overall structure and their detailed textures .",75,3 +76,Promising results for [[ multiple standard datasets ]] -- in particular for the different building styles they contain -- demonstrate the potential of the << method >> .,76,6 +77,We introduce a new << interactive corpus exploration tool >> called [[ InfoMagnets ]] .,77,2 +78,[[ InfoMagnets ]] aims at making << exploratory corpus analysis >> accessible to researchers who are not experts in text mining .,78,3 +79,"As evidence of its usefulness and usability , [[ it ]] has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct << domains >> : tutorial dialogue -LRB- Kumar et al. , submitted -RRB- and on-line communities -LRB- Arguello et al. 
, 2006 -RRB- .",79,3 +80,"As evidence of its usefulness and usability , it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct << domains >> : [[ tutorial dialogue ]] -LRB- Kumar et al. , submitted -RRB- and on-line communities -LRB- Arguello et al. , 2006 -RRB- .",80,2 +81,"As evidence of its usefulness and usability , it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct domains : [[ tutorial dialogue ]] -LRB- Kumar et al. , submitted -RRB- and << on-line communities >> -LRB- Arguello et al. , 2006 -RRB- .",81,0 +82,"As evidence of its usefulness and usability , it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct << domains >> : tutorial dialogue -LRB- Kumar et al. , submitted -RRB- and [[ on-line communities ]] -LRB- Arguello et al. , 2006 -RRB- .",82,2 +83,"As an [[ educational tool ]] , it has been used as part of a unit on << protocol analysis >> in an Educational Research Methods course .",83,3 +84,Sources of training data suitable for << language modeling >> of [[ conversational speech ]] are limited .,84,3 +85,"In this paper , we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target << recognition task >> , but also that it is possible to get bigger performance gains from the data by using [[ class-dependent interpolation of N-grams ]] .",85,3 +86,We present a [[ method ]] for << detecting 3D objects >> using multi-modalities .,86,3 +87,We present a << method >> for detecting 3D objects using [[ multi-modalities ]] .,87,3 +88,"While [[ it ]] is generic , we demonstrate << it >> on the combination of an image and a dense depth map which give complementary object information .",88,3 +89,"While it is generic , we demonstrate << it >> on the combination of an [[ image ]] and a dense depth map which give complementary object information .",89,3 +90,"While it is generic , we demonstrate it on the combination of an [[ image ]] and a << dense depth map >> which give complementary object information .",90,0 +91,"While it is generic , we demonstrate << it >> on the combination of an image and a [[ dense depth map ]] which give complementary object information .",91,3 +92,"While it is generic , we demonstrate it on the combination of an image and a << dense depth map >> which give [[ complementary object information ]] .",92,1 +93,"It is based on an efficient representation of [[ templates ]] that capture the different << modalities >> , and we show in many experiments on commodity hardware that our approach significantly outperforms state-of-the-art methods on single modalities .",93,3 +94,"It is based on an efficient representation of templates that capture the different modalities , and we show in many experiments on commodity hardware that our [[ approach ]] significantly outperforms << state-of-the-art methods >> on single modalities .",94,5 +95,"It is based on an efficient representation of templates that capture the different modalities , and we show in many experiments on commodity hardware that our [[ approach ]] significantly outperforms state-of-the-art methods on << single modalities >> .",95,3 +96,"It is based on an efficient representation of templates that capture the different modalities , and we show in many experiments on commodity hardware that our approach significantly 
outperforms [[ state-of-the-art methods ]] on << single modalities >> .",96,3 +97,"The [[ compact description of a video sequence ]] through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , compression , mosaicing , and visual summarization .",97,3 +98,"The << compact description of a video sequence >> through a single [[ image map ]] and a dominant motion has applications in several domains , including video browsing and retrieval , compression , mosaicing , and visual summarization .",98,3 +99,"The compact description of a video sequence through a single [[ image map ]] and a << dominant motion >> has applications in several domains , including video browsing and retrieval , compression , mosaicing , and visual summarization .",99,0 +100,"The << compact description of a video sequence >> through a single image map and a [[ dominant motion ]] has applications in several domains , including video browsing and retrieval , compression , mosaicing , and visual summarization .",100,3 +101,"The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including [[ video browsing and retrieval ]] , compression , mosaicing , and visual summarization .",101,2 +102,"The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including [[ video browsing and retrieval ]] , << compression >> , mosaicing , and visual summarization .",102,0 +103,"The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , [[ compression ]] , mosaicing , and visual summarization .",103,2 +104,"The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including video browsing and retrieval , [[ compression ]] , << mosaicing >> , and visual summarization .",104,0 +105,"The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , compression , [[ mosaicing ]] , and visual summarization .",105,2 +106,"The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including video browsing and retrieval , compression , [[ mosaicing ]] , and << visual summarization >> .",106,0 +107,"Building such a representation requires the capability to register all the frames with respect to the dominant object in the scene , a << task >> which has been , in the past , addressed through temporally [[ localized motion estimates ]] .",107,3 +108,"To avoid this oscillation , we augment the << motion model >> with a [[ generic temporal constraint ]] which increases the robustness against competing interpretations , leading to more meaningful content summarization .",108,3 +109,"To avoid this oscillation , we augment the motion model with a [[ generic temporal constraint ]] which increases the robustness against competing interpretations , leading to more meaningful << content summarization >> .",109,3 +110,"To avoid this oscillation , we augment the motion model with a << generic temporal constraint >> which increases the [[ robustness ]] against competing interpretations , leading to more meaningful content summarization .",110,6 +111,"In cross-domain 
learning , there is a more challenging problem that the << domain divergence >> involves more than one [[ dominant factors ]] , e.g. , different viewpoints , various resolutions and changing illuminations .",111,4 +112,"In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one << dominant factors >> , e.g. , different [[ viewpoints ]] , various resolutions and changing illuminations .",112,2 +113,"In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one dominant factors , e.g. , different [[ viewpoints ]] , various << resolutions >> and changing illuminations .",113,0 +114,"In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one << dominant factors >> , e.g. , different viewpoints , various [[ resolutions ]] and changing illuminations .",114,2 +115,"In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one dominant factors , e.g. , different viewpoints , various [[ resolutions ]] and changing << illuminations >> .",115,0 +116,"Fortunately , an [[ intermediate domain ]] could often be found to build a bridge across them to facilitate the << learning problem >> .",116,3 +117,"In this paper , we propose a [[ Coupled Marginalized Denoising Auto-encoders framework ]] to address the << cross-domain problem >> .",117,3 +118,"Specifically , we design two << marginalized denoising auto-encoders >> , [[ one ]] for the target and the other for source as well as the intermediate one .",118,2 +119,"Specifically , we design two marginalized denoising auto-encoders , [[ one ]] for the target and the << other >> for source as well as the intermediate one .",119,0 +120,"Specifically , we design two << marginalized denoising auto-encoders >> , one for the target and the [[ other ]] for source as well as the intermediate one .",120,2 +121,"To better couple the two << denoising auto-encoders learning >> , we incorporate a [[ feature mapping ]] , which tends to transfer knowledge between the intermediate domain and the target one .",121,4 +122,"To better couple the two denoising auto-encoders learning , we incorporate a [[ feature mapping ]] , which tends to transfer knowledge between the << intermediate domain >> and the target one .",122,3 +123,"Furthermore , the << maximum margin criterion >> , e.g. , [[ intra-class com-pactness ]] and inter-class penalty , on the output layer is imposed to seek more discriminative features across different domains .",123,2 +124,"Furthermore , the maximum margin criterion , e.g. , [[ intra-class com-pactness ]] and << inter-class penalty >> , on the output layer is imposed to seek more discriminative features across different domains .",124,0 +125,"Furthermore , the << maximum margin criterion >> , e.g. 
, intra-class com-pactness and [[ inter-class penalty ]] , on the output layer is imposed to seek more discriminative features across different domains .",125,2 +126,Extensive experiments on two [[ tasks ]] have demonstrated the superiority of our << method >> over the state-of-the-art methods .,126,6 +127,Extensive experiments on two tasks have demonstrated the superiority of our [[ method ]] over the << state-of-the-art methods >> .,127,5 +128,"Basically , a set of << age-group specific dictionaries >> are learned , where the [[ dictionary bases ]] corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a linear combination of these patterns expresses a particular personalized aging process .",128,4 +129,"Basically , a set of age-group specific dictionaries are learned , where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a [[ linear combination ]] of these patterns expresses a particular << personalized aging process >> .",129,3 +130,"Basically , a set of age-group specific dictionaries are learned , where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups , and a << linear combination >> of these [[ patterns ]] expresses a particular personalized aging process .",130,3 +131,"First , beyond the aging dictionaries , each subject may have extra << personalized facial characteristics >> , e.g. [[ mole ]] , which are invariant in the aging process .",131,2 +132,Thus a [[ personality-aware coupled reconstruction loss ]] is utilized to learn the << dictionaries >> based on face pairs from neighboring age groups .,132,3 +133,"Extensive experiments well demonstrate the advantages of our proposed [[ solution ]] over other << state-of-the-arts >> in term of personalized aging progression , as well as the performance gain for cross-age face verification by synthesizing aging faces .",133,5 +134,"Extensive experiments well demonstrate the advantages of our proposed [[ solution ]] over other state-of-the-arts in term of << personalized aging progression >> , as well as the performance gain for cross-age face verification by synthesizing aging faces .",134,3 +135,"Extensive experiments well demonstrate the advantages of our proposed solution over other [[ state-of-the-arts ]] in term of << personalized aging progression >> , as well as the performance gain for cross-age face verification by synthesizing aging faces .",135,3 +136,"Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression , as well as the performance gain for << cross-age face verification >> by [[ synthesizing aging faces ]] .",136,3 +137,We propose a draft scheme of the [[ model ]] formalizing the << structure of communicative context >> in dialogue interaction .,137,3 +138,We propose a draft scheme of the model formalizing the << structure of communicative context >> in [[ dialogue interaction ]] .,138,1 +139,"Visitors who browse the web from wireless PDAs , cell phones , and pagers are frequently stymied by [[ web interfaces ]] optimized for << desktop PCs >> .",139,3 +140,"In this paper we develop an [[ algorithm ]] , MINPATH , that automatically improves << wireless web navigation >> by suggesting useful shortcut links in real time .",140,3 +141,"In 
this paper we develop an [[ algorithm ]] , MINPATH , that automatically improves << wireless web navigation >> by suggesting useful shortcut links in real time .",141,3 +142,"<< MINPATH >> finds shortcuts by using a learned [[ model ]] of web visitor behavior to estimate the savings of shortcut links , and suggests only the few best links .",142,3 +143,"MINPATH finds shortcuts by using a learned [[ model ]] of << web visitor behavior >> to estimate the savings of shortcut links , and suggests only the few best links .",143,3 +144,"MINPATH finds shortcuts by using a learned [[ model ]] of web visitor behavior to estimate the << savings of shortcut links >> , and suggests only the few best links .",144,3 +145,"We explore a variety of << predictive models >> , including [[ Na ¨ ıve Bayes mixture models ]] and mixtures of Markov models , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort .",145,2 +146,"We explore a variety of predictive models , including [[ Na ¨ ıve Bayes mixture models ]] and << mixtures of Markov models >> , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort .",146,0 +147,"We explore a variety of << predictive models >> , including Na ¨ ıve Bayes mixture models and [[ mixtures of Markov models ]] , and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort .",147,2 +148,This paper describes a particular [[ approach ]] to << parsing >> that utilizes recent advances in unification-based parsing and in classification-based knowledge representation .,148,3 +149,This paper describes a particular << approach >> to parsing that utilizes recent advances in [[ unification-based parsing ]] and in classification-based knowledge representation .,149,3 +150,This paper describes a particular << approach >> to parsing that utilizes recent advances in unification-based parsing and in [[ classification-based knowledge representation ]] .,150,3 +151,This paper describes a particular approach to parsing that utilizes recent advances in << unification-based parsing >> and in [[ classification-based knowledge representation ]] .,151,0 +152,"As [[ unification-based grammatical frameworks ]] are extended to handle richer descriptions of << linguistic information >> , they begin to share many of the properties that have been developed in KL-ONE-like knowledge representation systems .",152,3 +153,"As unification-based grammatical frameworks are extended to handle richer descriptions of linguistic information , << they >> begin to share many of the properties that have been developed in [[ KL-ONE-like knowledge representation systems ]] .",153,3 +154,This commonality suggests that some of the [[ classification-based representation techniques ]] can be applied to << unification-based linguistic descriptions >> .,154,3 +155,"This merging supports the integration of [[ semantic and syntactic information ]] into the same << system >> , simultaneously subject to the same types of processes , in an efficient manner .",155,3 +156,"The use of a [[ KL-ONE style representation ]] for << parsing >> and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement .",156,3 +157,"The use of a [[ KL-ONE style representation ]] for parsing and << semantic interpretation >> was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in 
which parsing is characterized as an inference process called incremental description refinement .",157,3 +158,"The use of a KL-ONE style representation for [[ parsing ]] and << semantic interpretation >> was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement .",158,0 +159,"The use of a << KL-ONE style representation >> for parsing and semantic interpretation was first explored in the [[ PSI-KLONE system ]] -LSB- 2 -RSB- , in which parsing is characterized as an inference process called incremental description refinement .",159,3 +160,"The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which << parsing >> is characterized as an inference process called [[ incremental description refinement ]] .",160,3 +161,"The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB- , in which parsing is characterized as an << inference process >> called [[ incremental description refinement ]] .",161,2 +162,"In this paper we discuss a proposed [[ user knowledge modeling architecture ]] for the << ICICLE system >> , a language tutoring application for deaf learners of written English .",162,3 +163,"In this paper we discuss a proposed user knowledge modeling architecture for the [[ ICICLE system ]] , a << language tutoring application >> for deaf learners of written English .",163,2 +164,"In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system , a [[ language tutoring application ]] for << deaf learners >> of written English .",164,3 +165,"In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system , a << language tutoring application >> for deaf learners of [[ written English ]] .",165,3 +166,The [[ model ]] will represent the language proficiency of the user and is designed to be referenced during both << writing analysis >> and feedback production .,166,3 +167,The [[ model ]] will represent the language proficiency of the user and is designed to be referenced during both writing analysis and << feedback production >> .,167,3 +168,The model will represent the language proficiency of the user and is designed to be referenced during both [[ writing analysis ]] and << feedback production >> .,168,0 +169,"We motivate our << model design >> by citing relevant research on [[ second language and cognitive skill acquisition ]] , and briefly discuss preliminary empirical evidence supporting the design .",169,3 +170,We conclude by showing how our [[ design ]] can provide a rich and robust information base to a << language assessment / correction application >> by modeling user proficiency at a high level of granularity and specificity .,170,3 +171,We conclude by showing how our [[ design ]] can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of granularity and specificity .,171,3 +172,We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of [[ granularity ]] and specificity .,172,6 +173,We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling user proficiency at 
a high level of [[ granularity ]] and << specificity >> .,173,0 +174,We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling << user proficiency >> at a high level of granularity and [[ specificity ]] .,174,6 +175,"[[ Constraint propagation ]] is one of the key techniques in << constraint programming >> , and a large body of work has built up around it .",175,4 +176,"In this paper we present << SHORTSTR2 >> , a development of the [[ Simple Tabular Reduction algorithm STR2 + ]] .",176,3 +177,"We show that [[ SHORTSTR2 ]] is complementary to the existing algorithms << SHORTGAC >> and HAGGISGAC that exploit short supports , while being much simpler .",177,5 +178,"We show that [[ SHORTSTR2 ]] is complementary to the existing algorithms SHORTGAC and << HAGGISGAC >> that exploit short supports , while being much simpler .",178,5 +179,"We show that SHORTSTR2 is complementary to the existing algorithms [[ SHORTGAC ]] and << HAGGISGAC >> that exploit short supports , while being much simpler .",179,0 +180,"When a constraint is amenable to short supports , the [[ short support set ]] can be exponentially smaller than the << full-length support set >> .",180,5 +181,"We also show that [[ SHORTSTR2 ]] can be combined with a simple algorithm to identify << short supports >> from full-length supports , to provide a superior drop-in replacement for STR2 + .",181,3 +182,"We also show that [[ SHORTSTR2 ]] can be combined with a simple algorithm to identify short supports from full-length supports , to provide a superior << drop-in replacement >> for STR2 + .",182,3 +183,"We also show that << SHORTSTR2 >> can be combined with a simple [[ algorithm ]] to identify short supports from full-length supports , to provide a superior drop-in replacement for STR2 + .",183,0 +184,"We also show that SHORTSTR2 can be combined with a simple [[ algorithm ]] to identify << short supports >> from full-length supports , to provide a superior drop-in replacement for STR2 + .",184,3 +185,"We also show that << SHORTSTR2 >> can be combined with a simple algorithm to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + .",185,3 +186,"We also show that << SHORTSTR2 >> can be combined with a simple algorithm to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + .",186,3 +187,"We also show that SHORTSTR2 can be combined with a simple << algorithm >> to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + .",187,3 +188,"We also show that SHORTSTR2 can be combined with a simple << algorithm >> to identify short supports from [[ full-length supports ]] , to provide a superior drop-in replacement for STR2 + .",188,3 +189,"We also show that SHORTSTR2 can be combined with a simple algorithm to identify short supports from full-length supports , to provide a superior [[ drop-in replacement ]] for << STR2 + >> .",189,3 +190,We propose a [[ detection method ]] for << orthographic variants >> caused by transliteration in a large corpus .,190,3 +191,The << method >> employs two [[ similarities ]] .,191,3 +192,One is << string similarity >> based on [[ edit distance ]] .,192,3 +193,The other is << contextual similarity >> by a [[ vector space model ]] .,193,3 +194,Experimental results show that the << method >> performed a 0.889 [[ F-measure ]] in an open test .,194,6 +195,[[ 
Uncertainty handling ]] plays an important role during << shape tracking >> .,195,3 +196,We have recently shown that the [[ fusion of measurement information with system dynamics and shape priors ]] greatly improves the << tracking >> performance for very noisy images such as ultrasound sequences -LSB- 22 -RSB- .,196,3 +197,We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves the [[ tracking ]] performance for very << noisy images >> such as ultrasound sequences -LSB- 22 -RSB- .,197,3 +198,We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves the tracking performance for very << noisy images >> such as [[ ultrasound sequences ]] -LSB- 22 -RSB- .,198,2 +199,"Nevertheless , this << approach >> required [[ user initialization ]] of the tracking process .",199,3 +200,"Nevertheless , this approach required [[ user initialization ]] of the << tracking process >> .",200,3 +201,This paper solves the << automatic initial-ization problem >> by performing [[ boosted shape detection ]] as a generic measurement process and integrating it in our tracking framework .,201,3 +202,This paper solves the automatic initial-ization problem by performing << boosted shape detection >> as a [[ generic measurement process ]] and integrating it in our tracking framework .,202,3 +203,This paper solves the automatic initial-ization problem by performing boosted shape detection as a generic measurement process and integrating [[ it ]] in our << tracking framework >> .,203,4 +204,"As a result , we treat all sources of information in a unified way and derive the << posterior shape model >> as the shape with the [[ maximum likelihood ]] .",204,3 +205,Our [[ framework ]] is applied for the << automatic tracking of endocardium >> in ultrasound sequences of the human heart .,205,3 +206,Our framework is applied for the automatic tracking of [[ endocardium ]] in << ultrasound sequences of the human heart >> .,206,4 +207,Reliable [[ detection ]] and robust << tracking >> results are achieved when compared to existing approaches and inter-expert variations .,207,0 +208,Reliable detection and robust tracking results are achieved when compared to existing [[ approaches ]] and << inter-expert variations >> .,208,0 +209,"We present a [[ syntax-based constraint ]] for << word alignment >> , known as the cohesion constraint .",209,3 +210,"We present a << syntax-based constraint >> for word alignment , known as the [[ cohesion constraint ]] .",210,2 +211,<< It >> requires disjoint [[ English phrases ]] to be mapped to non-overlapping intervals in the French sentence .,211,3 +212,We evaluate the utility of this << constraint >> in two different [[ algorithms ]] .,212,6 +213,The results show that << it >> can provide a significant improvement in [[ alignment quality ]] .,213,6 +214,We present a novel << entity-based representation of discourse >> which is inspired by [[ Centering Theory ]] and can be computed automatically from raw text .,214,3 +215,We present a novel << entity-based representation of discourse >> which is inspired by Centering Theory and can be computed automatically from [[ raw text ]] .,215,3 +216,We view << coherence assessment >> as a [[ ranking learning problem ]] and show that the proposed discourse representation supports the effective learning of a ranking function .,216,3 +217,We view coherence assessment as a ranking learning problem and show that the proposed [[ discourse 
representation ]] supports the effective learning of a << ranking function >> .,217,3 +218,Our experiments demonstrate that the [[ induced model ]] achieves significantly higher accuracy than a state-of-the-art << coherence model >> .,218,5 +219,Our experiments demonstrate that the << induced model >> achieves significantly higher [[ accuracy ]] than a state-of-the-art coherence model .,219,6 +220,Our experiments demonstrate that the induced model achieves significantly higher [[ accuracy ]] than a state-of-the-art << coherence model >> .,220,6 +221,This paper introduces a [[ robust interactive method ]] for << speech understanding >> .,221,3 +222,The << generalized LR parsing >> is enhanced in this [[ approach ]] .,222,3 +223,"When a very noisy portion is detected , the << parser >> skips that portion using a fake [[ non-terminal symbol ]] .",223,3 +224,"This [[ method ]] is also capable of handling << unknown words >> , which is important in practical systems .",224,3 +225,This paper shows that it is very often possible to identify the source language of [[ medium-length speeches ]] in the << EUROPARL corpus >> on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % accuracy depending on classification method -RRB- .,225,4 +226,This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % [[ accuracy ]] depending on << classification method >> -RRB- .,226,6 +227,We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace << manually verified phonetic transcriptions >> -LRB- MPTs -RRB- in a large corpus-based study on pronunciation variation .,227,5 +228,We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace manually verified phonetic transcriptions -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> .,228,3 +229,We investigated whether automatic phonetic transcriptions -LRB- APTs -RRB- can replace [[ manually verified phonetic transcriptions ]] -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> .,229,3 +230,We trained << classifiers >> on the [[ speech processes ]] extracted from the alignments of an APT and an MPT with a canonical transcription .,230,3 +231,We trained classifiers on the << speech processes >> extracted from the [[ alignments ]] of an APT and an MPT with a canonical transcription .,231,3 +232,We trained classifiers on the speech processes extracted from the [[ alignments ]] of an << APT >> and an MPT with a canonical transcription .,232,3 +233,We trained classifiers on the speech processes extracted from the [[ alignments ]] of an APT and an << MPT >> with a canonical transcription .,233,3 +234,We trained classifiers on the speech processes extracted from the alignments of an [[ APT ]] and an << MPT >> with a canonical transcription .,234,0 +235,We trained classifiers on the speech processes extracted from the << alignments >> of an APT and an MPT with a [[ canonical transcription ]] .,235,3 +236,"We tested whether the [[ classifiers ]] were equally good at verifying whether << unknown transcriptions >> represent read speech or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings .",236,3 +237,"We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent 
<< read speech >> or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings .",237,3 +238,"We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent read speech or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings .",238,3 +239,"We tested whether the classifiers were equally good at verifying whether unknown transcriptions represent [[ read speech ]] or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings .",239,0 +240,Our results not only show that similar distinguishing speech processes were identified ; our [[ APT-based classifier ]] yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer classification features .,240,5 +241,Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better [[ classification accuracy ]] than the MPT-based classifier whilst using fewer classification features .,241,6 +242,Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better [[ classification accuracy ]] than the << MPT-based classifier >> whilst using fewer classification features .,242,6 +243,Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better classification accuracy than the MPT-based classifier whilst using fewer [[ classification features ]] .,243,3 +244,Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer [[ classification features ]] .,244,3 +245,Machine reading is a relatively new field that features [[ computer programs ]] designed to read << flowing text >> and extract fact assertions expressed by the narrative content .,245,3 +246,Machine reading is a relatively new field that features [[ computer programs ]] designed to read flowing text and extract << fact assertions >> expressed by the narrative content .,246,3 +247,Machine reading is a relatively new field that features computer programs designed to read flowing text and extract [[ fact assertions ]] expressed by the << narrative content >> .,247,1 +248,This << task >> involves two core technologies : [[ natural language processing -LRB- NLP -RRB- ]] and information extraction -LRB- IE -RRB- .,248,4 +249,This << task >> involves two core technologies : natural language processing -LRB- NLP -RRB- and [[ information extraction -LRB- IE -RRB- ]] .,249,4 +250,In this paper we describe a << machine reading system >> that we have developed within a [[ cognitive architecture ]] .,250,1 +251,"We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from [[ cognitive semantics ]] and << construction grammar >> , plus tools from prior NLP and IE research .",251,0 +252,"We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from cognitive semantics and construction grammar , plus tools from [[ prior NLP ]] and << IE research >> .",252,0 +253,The result is a [[ system ]] that is capable of 
reading and interpreting complex and fairly << idiosyncratic texts >> in the family history domain .,253,3 +254,The result is a system that is capable of reading and interpreting complex and fairly << idiosyncratic texts >> in the [[ family history domain ]] .,254,1 +255,"We present two [[ methods ]] for capturing << nonstationary chaos >> , then present a few examples including biological signals , ocean waves and traffic flow .",255,3 +256,"We present two methods for capturing nonstationary chaos , then present a few << examples >> including [[ biological signals ]] , ocean waves and traffic flow .",256,2 +257,"We present two methods for capturing nonstationary chaos , then present a few examples including [[ biological signals ]] , << ocean waves >> and traffic flow .",257,0 +258,"We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , [[ ocean waves ]] and traffic flow .",258,2 +259,"We present two methods for capturing nonstationary chaos , then present a few examples including biological signals , [[ ocean waves ]] and << traffic flow >> .",259,0 +260,"We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , ocean waves and [[ traffic flow ]] .",260,2 +261,"This paper presents a [[ formal analysis ]] for a large class of words called << alternative markers >> , which includes other -LRB- than -RRB- , such -LRB- as -RRB- , and besides .",261,3 +262,"These [[ words ]] appear frequently enough in << dialog >> to warrant serious attention , yet present natural language search engines perform poorly on queries containing them .",262,4 +263,I show that the performance of a << search engine >> can be improved dramatically by incorporating an [[ approximation of the formal analysis ]] that is compatible with the search engine 's operational semantics .,263,4 +264,I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the << search engine >> 's [[ operational semantics ]] .,264,4 +265,"The value of this approach is that as the [[ operational semantics ]] of << natural language applications >> improve , even larger improvements are possible .",265,4 +266,"We find that simple << interpolation methods >> , like [[ log-linear and linear interpolation ]] , improve the performance but fall short of the performance of an oracle .",266,2 +267,"Actually , the oracle acts like a << dynamic combiner >> with [[ hard decisions ]] using the reference .",267,1 +268,We suggest a << method >> that mimics the behavior of the oracle using a [[ neural network ]] or a decision tree .,268,3 +269,We suggest a << method >> that mimics the behavior of the oracle using a neural network or a [[ decision tree ]] .,269,3 +270,We suggest a method that mimics the behavior of the oracle using a << neural network >> or a [[ decision tree ]] .,270,0 +271,The [[ method ]] amounts to tagging << LMs >> with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence .,271,3 +272,The << method >> amounts to tagging LMs with [[ confidence measures ]] and picking the best hypothesis corresponding to the LM with the best confidence .,272,3 +273,We describe a new [[ method ]] for the representation of << NLP structures >> within reranking approaches .,273,3 +274,We describe a new method for the representation of << NLP structures >> within [[ reranking 
approaches ]] .,274,1 +275,"We make use of a << conditional log-linear model >> , with [[ hidden variables ]] representing the assignment of lexical items to word clusters or word senses .",275,3 +276,"We make use of a conditional log-linear model , with hidden variables representing the assignment of lexical items to [[ word clusters ]] or << word senses >> .",276,0 +277,The << model >> learns to automatically make these assignments based on a [[ discriminative training criterion ]] .,277,3 +278,Training and decoding with the model requires summing over an exponential number of hidden-variable assignments : the required << summations >> can be computed efficiently and exactly using [[ dynamic programming ]] .,278,3 +279,"As a case study , we apply the [[ model ]] to << parse reranking >> .",279,3 +280,"The [[ model ]] gives an F-measure improvement of ~ 1.25 % beyond the << base parser >> , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker .",280,5 +281,"The << model >> gives an [[ F-measure ]] improvement of ~ 1.25 % beyond the base parser , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker .",281,6 +282,"The model gives an F-measure improvement of ~ 1.25 % beyond the [[ base parser ]] , and an ~ 0.25 % improvement beyond << Collins -LRB- 2000 -RRB- reranker >> .",282,5 +283,"Although our experiments are focused on << parsing >> , the [[ techniques ]] described generalize naturally to NLP structures other than parse trees .",283,3 +284,"Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to << NLP structures >> other than parse trees .",284,3 +285,"Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to NLP structures other than << parse trees >> .",285,3 +286,"Although our experiments are focused on parsing , the techniques described generalize naturally to << NLP structures >> other than [[ parse trees ]] .",286,0 +287,This paper presents an [[ algorithm ]] for << learning the time-varying shape of a non-rigid 3D object >> from uncalibrated 2D tracking data .,287,3 +288,We constrain the problem by assuming that the << object shape >> at each time instant is drawn from a [[ Gaussian distribution ]] .,288,3 +289,"Based on this assumption , the [[ algorithm ]] simultaneously estimates << 3D shape and motion >> for each time frame , learns the parameters of the Gaussian , and robustly fills-in missing data points .",289,3 +290,"We then extend the [[ algorithm ]] to model << temporal smoothness in object shape >> , thus allowing it to handle severe cases of missing data .",290,3 +291,"We then extend the algorithm to model temporal smoothness in object shape , thus allowing [[ it ]] to handle severe cases of << missing data >> .",291,3 +292,[[ Automatic summarization ]] and << information extraction >> are two important Internet services .,292,0 +293,[[ MUC ]] and << SUMMAC >> play their appropriate roles in the next generation Internet .,293,0 +294,This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for << summary generation >> under two tasks initiated by SUMMAC-1 .,294,3 +295,This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for summary generation under two << tasks >> initiated by SUMMAC-1 .,295,3 +296,This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under 
two [[ tasks ]] initiated by << SUMMAC-1 >> .,296,4 +297,"For << categorization task >> , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct generic , indicative summaries .",297,3 +298,"For categorization task , [[ positive feature vectors ]] and << negative feature vectors >> are used cooperatively to construct generic , indicative summaries .",298,0 +299,"For categorization task , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct << generic , indicative summaries >> .",299,3 +300,"For << categorization task >> , positive feature vectors and [[ negative feature vectors ]] are used cooperatively to construct generic , indicative summaries .",300,3 +301,"For categorization task , positive feature vectors and [[ negative feature vectors ]] are used cooperatively to construct << generic , indicative summaries >> .",301,3 +302,"For << adhoc task >> , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant discourse segment , to rank relevant sentences , and to generate the user-directed summaries .",302,3 +303,"For adhoc task , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant << discourse segment >> , to rank relevant sentences , and to generate the user-directed summaries .",303,3 +304,"For adhoc task , a [[ text model ]] based on relationship between nouns and verbs is used to filter out irrelevant discourse segment , to rank relevant sentences , and to generate the << user-directed summaries >> .",304,3 +305,The result shows that the [[ NormF ]] of the best summary and that of the fixed summary for << adhoc tasks >> are 0.456 and 0 .,305,6 +306,The [[ NormF ]] of the best summary and that of the fixed summary for << categorization task >> are 0.4090 and 0.4023 .,306,6 +307,Our [[ system ]] outperforms the average << system >> in categorization task but does a common job in adhoc task .,307,5 +308,Our << system >> outperforms the average system in [[ categorization task ]] but does a common job in adhoc task .,308,6 +309,Our system outperforms the average << system >> in [[ categorization task ]] but does a common job in adhoc task .,309,6 +310,Our << system >> outperforms the average system in categorization task but does a common job in [[ adhoc task ]] .,310,6 +311,Our system outperforms the average system in << categorization task >> but does a common job in [[ adhoc task ]] .,311,6 +312,"In real-world action recognition problems , low-level features can not adequately characterize the [[ rich spatial-temporal structures ]] in << action videos >> .",312,1 +313,"The second type is << data-driven attributes >> , which are learned from data using [[ dictionary learning methods ]] .",313,3 +314,We propose a << discriminative and compact attribute-based representation >> by selecting a subset of [[ discriminative attributes ]] from a large attribute set .,314,3 +315,Three << attribute selection criteria >> are proposed and formulated as a [[ submodular optimization problem ]] .,315,3 +316,Experimental results on the [[ Olympic Sports and UCF101 datasets ]] demonstrate that the proposed << attribute-based representation >> can significantly boost the performance of action recognition algorithms and outperform most recently proposed recognition approaches .,316,6 +317,Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed [[ attribute-based representation ]] can significantly 
boost the performance of << action recognition algorithms >> and outperform most recently proposed recognition approaches .,317,3 +318,Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed attribute-based representation can significantly boost the performance of [[ action recognition algorithms ]] and outperform most recently proposed << recognition approaches >> .,318,5 +319,Landsbergen 's advocacy of [[ analytical inverses ]] for << compositional syntax rules >> encourages the application of Definite Clause Grammar techniques to the construction of a parser returning Montague analysis trees .,319,3 +320,Landsbergen 's advocacy of [[ analytical inverses ]] for compositional syntax rules encourages the application of << Definite Clause Grammar techniques >> to the construction of a parser returning Montague analysis trees .,320,3 +321,Landsbergen 's advocacy of analytical inverses for compositional syntax rules encourages the application of [[ Definite Clause Grammar techniques ]] to the construction of a << parser returning Montague analysis trees >> .,321,3 +322,A << parser MDCC >> is presented which implements an [[ augmented Friedman - Warren algorithm ]] permitting post referencing * and interfaces with a language of intenslonal logic translator LILT so as to display the derivational history of corresponding reduced IL formulae .,322,3 +323,A parser MDCC is presented which implements an << augmented Friedman - Warren algorithm >> permitting [[ post referencing ]] * and interfaces with a language of intenslonal logic translator LILT so as to display the derivational history of corresponding reduced IL formulae .,323,1 +324,A parser MDCC is presented which implements an augmented Friedman - Warren algorithm permitting post referencing * and interfaces with a language of << intenslonal logic translator LILT >> so as to display the [[ derivational history ]] of corresponding reduced IL formulae .,324,3 +325,A parser MDCC is presented which implements an augmented Friedman - Warren algorithm permitting post referencing * and interfaces with a language of intenslonal logic translator LILT so as to display the << derivational history >> of corresponding [[ reduced IL formulae ]] .,325,1 +326,Some familiarity with [[ Montague 's PTQ ]] and the << basic DCG mechanism >> is assumed .,326,0 +327,"<< Stochastic attention-based models >> have been shown to improve [[ computational efficiency ]] at test time , but they remain difficult to train because of intractable posterior inference and high variance in the stochastic gradient estimates .",327,6 +328,"Stochastic attention-based models have been shown to improve computational efficiency at test time , but they remain difficult to train because of [[ intractable posterior inference ]] and high variance in the << stochastic gradient estimates >> .",328,0 +329,"[[ Borrowing techniques ]] from the literature on training << deep generative models >> , we present the Wake-Sleep Recurrent Attention Model , a method for training stochastic attention networks which improves posterior inference and which reduces the variability in the stochastic gradients .",329,3 +330,"Borrowing techniques from the literature on training deep generative models , we present the Wake-Sleep Recurrent Attention Model , a [[ method ]] for training << stochastic attention networks >> which improves posterior inference and which reduces the variability in the stochastic gradients .",330,3 +331,"Borrowing techniques from the literature on 
training deep generative models , we present the Wake-Sleep Recurrent Attention Model , a method for training [[ stochastic attention networks ]] which improves << posterior inference >> and which reduces the variability in the stochastic gradients .",331,3 +332,We show that our << method >> can greatly speed up the [[ training time ]] for stochastic attention networks in the domains of image classification and caption generation .,332,6 +333,We show that our method can greatly speed up the [[ training time ]] for << stochastic attention networks >> in the domains of image classification and caption generation .,333,1 +334,We show that our << method >> can greatly speed up the training time for stochastic attention networks in the domains of [[ image classification ]] and caption generation .,334,6 +335,We show that our method can greatly speed up the training time for stochastic attention networks in the domains of [[ image classification ]] and << caption generation >> .,335,0 +336,We show that our << method >> can greatly speed up the training time for stochastic attention networks in the domains of image classification and [[ caption generation ]] .,336,6 +337,"A new [[ exemplar-based framework ]] unifying << image completion >> , texture synthesis and image inpainting is presented in this work .",337,3 +338,"A new [[ exemplar-based framework ]] unifying image completion , << texture synthesis >> and image inpainting is presented in this work .",338,3 +339,"A new [[ exemplar-based framework ]] unifying image completion , texture synthesis and << image inpainting >> is presented in this work .",339,3 +340,"A new exemplar-based framework unifying [[ image completion ]] , << texture synthesis >> and image inpainting is presented in this work .",340,0 +341,"A new exemplar-based framework unifying image completion , [[ texture synthesis ]] and << image inpainting >> is presented in this work .",341,0 +342,"Contrary to existing [[ greedy techniques ]] , these << tasks >> are posed in the form of a discrete global optimization problem with a well defined objective function .",342,5 +343,"Contrary to existing greedy techniques , these << tasks >> are posed in the form of a [[ discrete global optimization problem ]] with a well defined objective function .",343,1 +344,"Contrary to existing greedy techniques , these tasks are posed in the form of a << discrete global optimization problem >> with a [[ well defined objective function ]] .",344,1 +345,"For solving this << problem >> a novel [[ optimization scheme ]] , called Priority-BP , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' .",345,3 +346,"For solving this problem a novel << optimization scheme >> , called [[ Priority-BP ]] , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' .",346,2 +347,"For solving this problem a novel << optimization scheme >> , called Priority-BP , is proposed which carries two very important [[ extensions ]] over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning '' .",347,4 +348,"For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard [[ belief propagation -LRB- BP -RRB- ]] : '' priority-based message 
scheduling '' and '' dynamic label pruning '' .",348,3 +349,"For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard belief propagation -LRB- BP -RRB- : '' [[ priority-based message scheduling ]] '' and '' dynamic label pruning '' .",349,2 +350,"For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' [[ priority-based message scheduling ]] '' and '' << dynamic label pruning >> '' .",350,0 +351,"For solving this problem a novel optimization scheme , called Priority-BP , is proposed which carries two very important << extensions >> over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' [[ dynamic label pruning ]] '' .",351,2 +352,These two [[ extensions ]] work in cooperation to deal with the << intolerable computational cost of BP >> caused by the huge number of existing labels .,352,3 +353,"Moreover , both [[ extensions ]] are generic and can therefore be applied to any << MRF energy function >> as well .",353,3 +354,The effectiveness of our << method >> is demonstrated on a wide variety of [[ image completion examples ]] .,354,3 +355,"In this paper , we compare the relative effects of [[ segment order ]] , << segmentation >> and segment contiguity on the retrieval performance of a translation memory system .",355,0 +356,"In this paper , we compare the relative effects of [[ segment order ]] , segmentation and segment contiguity on the retrieval performance of a << translation memory system >> .",356,3 +357,"In this paper , we compare the relative effects of segment order , [[ segmentation ]] and << segment contiguity >> on the retrieval performance of a translation memory system .",357,0 +358,"In this paper , we compare the relative effects of segment order , [[ segmentation ]] and segment contiguity on the retrieval performance of a << translation memory system >> .",358,3 +359,"In this paper , we compare the relative effects of segment order , segmentation and [[ segment contiguity ]] on the retrieval performance of a << translation memory system >> .",359,3 +360,"In this paper , we compare the relative effects of segment order , segmentation and segment contiguity on the [[ retrieval ]] performance of a << translation memory system >> .",360,6 +361,"We take a selection of both << bag-of-words and segment order-sensitive string comparison methods >> , and run each over both [[ character - and word-segmented data ]] , in combination with a range of local segment contiguity models -LRB- in the form of N-grams -RRB- .",361,3 +362,"We take a selection of both << bag-of-words and segment order-sensitive string comparison methods >> , and run each over both character - and word-segmented data , in combination with a range of [[ local segment contiguity models ]] -LRB- in the form of N-grams -RRB- .",362,0 +363,"We take a selection of both bag-of-words and segment order-sensitive string comparison methods , and run each over both character - and word-segmented data , in combination with a range of << local segment contiguity models >> -LRB- in the form of [[ N-grams ]] -RRB- .",363,1 +364,"Over two distinct datasets , we find that << indexing >> according to simple [[ character bigrams ]] produces a retrieval accuracy superior to any of the tested word N-gram models .",364,3 +365,"Over two distinct datasets , we find that indexing 
according to simple [[ character bigrams ]] produces a retrieval accuracy superior to any of the tested << word N-gram models >> .",365,5 +366,"Over two distinct datasets , we find that indexing according to simple << character bigrams >> produces a [[ retrieval accuracy ]] superior to any of the tested word N-gram models .",366,6 +367,"Over two distinct datasets , we find that indexing according to simple character bigrams produces a [[ retrieval accuracy ]] superior to any of the tested << word N-gram models >> .",367,6 +368,"Further , in their optimum configuration , [[ bag-of-words methods ]] are shown to be equivalent to << segment order-sensitive methods >> in terms of retrieval accuracy , but much faster .",368,5 +369,"Further , in their optimum configuration , << bag-of-words methods >> are shown to be equivalent to segment order-sensitive methods in terms of [[ retrieval accuracy ]] , but much faster .",369,6 +370,"Further , in their optimum configuration , bag-of-words methods are shown to be equivalent to << segment order-sensitive methods >> in terms of [[ retrieval accuracy ]] , but much faster .",370,6 +371,In this paper we show how two standard [[ outputs ]] from information extraction -LRB- IE -RRB- systems - named entity annotations and scenario templates - can be used to enhance access to << text collections >> via a standard text browser .,371,3 +372,In this paper we show how two standard << outputs >> from information extraction -LRB- IE -RRB- systems - [[ named entity annotations ]] and scenario templates - can be used to enhance access to text collections via a standard text browser .,372,2 +373,In this paper we show how two standard outputs from information extraction -LRB- IE -RRB- systems - [[ named entity annotations ]] and << scenario templates >> - can be used to enhance access to text collections via a standard text browser .,373,0 +374,In this paper we show how two standard << outputs >> from information extraction -LRB- IE -RRB- systems - named entity annotations and [[ scenario templates ]] - can be used to enhance access to text collections via a standard text browser .,374,2 +375,In this paper we show how two standard outputs from information extraction -LRB- IE -RRB- systems - named entity annotations and scenario templates - can be used to enhance access to << text collections >> via a standard [[ text browser ]] .,375,3 +376,We describe how this information is used in a [[ prototype system ]] designed to support information workers ' access to a << pharmaceutical news archive >> as part of their industry watch function .,376,3 +377,"We also report results of a preliminary , [[ qualitative user evaluation ]] of the << system >> , which while broadly positive indicates further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers .",377,6 +378,We present a new [[ model-based bundle adjustment algorithm ]] to recover the << 3D model >> of a scene/object from a sequence of images with unknown motions .,378,3 +379,We present a new model-based bundle adjustment algorithm to recover the << 3D model >> of a scene/object from a sequence of [[ images ]] with unknown motions .,379,3 +380,We present a new model-based bundle adjustment algorithm to recover the 3D model of a scene/object from a sequence of << images >> with [[ unknown motions ]] .,380,4 +381,"Instead of representing scene/object by a collection of isolated 3D features -LRB- usually points -RRB- , our << algorithm >> uses a [[ surface ]] 
controlled by a small set of parameters .",381,3 +382,"Compared with previous [[ model-based approaches ]] , our << approach >> has the following advantages .",382,5 +383,"First , instead of using the [[ model space ]] as a << regular-izer >> , we directly use it as our search space , thus resulting in a more elegant formulation with fewer unknowns and fewer equations .",383,3 +384,"First , instead of using the model space as a [[ regular-izer ]] , we directly use it as our << search space >> , thus resulting in a more elegant formulation with fewer unknowns and fewer equations .",384,5 +385,"First , instead of using the model space as a regular-izer , we directly use [[ it ]] as our << search space >> , thus resulting in a more elegant formulation with fewer unknowns and fewer equations .",385,3 +386,"Third , regarding << face modeling >> , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller search space and a better posed system .",386,3 +387,"Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the << face geometry >> , resulting in a smaller search space and a better posed system .",387,3 +388,"Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller << search space >> and a better posed system .",388,3 +389,"Third , regarding face modeling , we use a very small set of [[ face metrics ]] -LRB- meaningful deformations -RRB- to parame-terize the face geometry , resulting in a smaller search space and a better << posed system >> .",389,3 +390,"Experiments with both [[ synthetic and real data ]] show that this new << algorithm >> is faster , more accurate and more stable than existing ones .",390,6 +391,"Experiments with both [[ synthetic and real data ]] show that this new algorithm is faster , more accurate and more stable than existing << ones >> .",391,6 +392,"Experiments with both synthetic and real data show that this new [[ algorithm ]] is faster , more accurate and more stable than existing << ones >> .",392,5 +393,This paper presents an [[ approach ]] to the << unsupervised learning of parts of speech >> which uses both morphological and syntactic information .,393,3 +394,This paper presents an << approach >> to the unsupervised learning of parts of speech which uses both [[ morphological and syntactic information ]] .,394,3 +395,"While the [[ model ]] is more complex than << those >> which have been employed for unsupervised learning of POS tags in English , which use only syntactic information , the variety of languages in the world requires that we consider morphology as well .",395,5 +396,"While the model is more complex than [[ those ]] which have been employed for << unsupervised learning of POS tags in English >> , which use only syntactic information , the variety of languages in the world requires that we consider morphology as well .",396,3 +397,"While the model is more complex than << those >> which have been employed for unsupervised learning of POS tags in English , which use only [[ syntactic information ]] , the variety of languages in the world requires that we consider morphology as well .",397,3 +398,"In many languages , [[ morphology ]] provides better clues to a word 's category than << word order >> .",398,5 +399,"We present the [[ computational model ]] for << POS learning >> 
, and present results for applying it to Bulgarian , a Slavic language with relatively free word order and rich morphology .",399,3 +400,"We present the computational model for POS learning , and present results for applying << it >> to [[ Bulgarian ]] , a Slavic language with relatively free word order and rich morphology .",400,3 +401,"We present the computational model for POS learning , and present results for applying it to [[ Bulgarian ]] , a << Slavic language >> with relatively free word order and rich morphology .",401,2 +402,"We present the computational model for POS learning , and present results for applying it to << Bulgarian >> , a Slavic language with relatively [[ free word order ]] and rich morphology .",402,1 +403,"We present the computational model for POS learning , and present results for applying it to Bulgarian , a Slavic language with relatively [[ free word order ]] and << rich morphology >> .",403,0 +404,"We present the computational model for POS learning , and present results for applying it to << Bulgarian >> , a Slavic language with relatively free word order and [[ rich morphology ]] .",404,1 +405,"In << MT >> , the widely used approach is to apply a [[ Chinese word segmenter ]] trained from manually annotated data , using a fixed lexicon .",405,3 +406,"In MT , the widely used approach is to apply a << Chinese word segmenter >> trained from [[ manually annotated data ]] , using a fixed lexicon .",406,3 +407,Such [[ word segmentation ]] is not necessarily optimal for << translation >> .,407,3 +408,We propose a [[ Bayesian semi-supervised Chinese word segmentation model ]] which uses both monolingual and bilingual information to derive a << segmentation >> suitable for MT .,408,3 +409,We propose a << Bayesian semi-supervised Chinese word segmentation model >> which uses both [[ monolingual and bilingual information ]] to derive a segmentation suitable for MT .,409,3 +410,We propose a Bayesian semi-supervised Chinese word segmentation model which uses both monolingual and bilingual information to derive a [[ segmentation ]] suitable for << MT >> .,410,3 +411,Experiments show that our [[ method ]] improves a state-of-the-art << MT system >> in a small and a large data environment .,411,5 +412,"In this paper we compare two competing [[ approaches ]] to << part-of-speech tagging >> , statistical and constraint-based disambiguation , using French as our test language .",412,3 +413,"In this paper we compare two competing << approaches >> to part-of-speech tagging , statistical and constraint-based disambiguation , using [[ French ]] as our test language .",413,3 +414,We imposed a time limit on our experiment : the amount of time spent on the design of our [[ constraint system ]] was about the same as the time we used to train and test the easy-to-implement << statistical model >> .,414,5 +415,"The [[ accuracy ]] of the << statistical method >> is reasonably good , comparable to taggers for English .",415,6 +416,"The [[ accuracy ]] of the statistical method is reasonably good , comparable to << taggers >> for English .",416,6 +417,"The accuracy of the [[ statistical method ]] is reasonably good , comparable to << taggers >> for English .",417,5 +418,"The accuracy of the statistical method is reasonably good , comparable to [[ taggers ]] for << English >> .",418,3 +419,[[ Structured-light methods ]] actively generate << geometric correspondence data >> between projectors and cameras in order to facilitate robust 3D reconstruction .,419,3 +420,Structured-light methods 
actively generate [[ geometric correspondence data ]] between projectors and cameras in order to facilitate << robust 3D reconstruction >> .,420,3 +421,"In this paper , we present << Photogeometric Structured Light >> whereby a standard [[ structured light method ]] is extended to include photometric methods .",421,4 +422,"In this paper , we present << Photogeometric Structured Light >> whereby a standard structured light method is extended to include [[ photometric methods ]] .",422,4 +423,[[ Photometric processing ]] serves the double purpose of increasing the amount of << recovered surface detail >> and of enabling the structured-light setup to be robustly self-calibrated .,423,3 +424,[[ Photometric processing ]] serves the double purpose of increasing the amount of recovered surface detail and of enabling the << structured-light setup >> to be robustly self-calibrated .,424,3 +425,"Further , our << framework >> uses a [[ photogeometric optimization ]] that supports the simultaneous use of multiple cameras and projectors and yields a single and accurate multi-view 3D model which best complies with photometric and geometric data .",425,3 +426,"Further , our framework uses a photogeometric optimization that supports the simultaneous use of multiple cameras and projectors and yields a single and accurate << multi-view 3D model >> which best complies with [[ photometric and geometric data ]] .",426,3 +427,"In this paper , a discrimination and robustness oriented [[ adaptive learning procedure ]] is proposed to deal with the task of << syntactic ambiguity resolution >> .",427,3 +428,"Owing to the problem of [[ insufficient training data ]] and << approximation error >> introduced by the language model , traditional statistical approaches , which resolve ambiguities by indirectly and implicitly using maximum likelihood method , fail to achieve high performance in real applications .",428,0 +429,"Owing to the problem of insufficient training data and approximation error introduced by the language model , traditional [[ statistical approaches ]] , which resolve << ambiguities >> by indirectly and implicitly using maximum likelihood method , fail to achieve high performance in real applications .",429,3 +430,"Owing to the problem of insufficient training data and approximation error introduced by the language model , traditional << statistical approaches >> , which resolve ambiguities by indirectly and implicitly using [[ maximum likelihood method ]] , fail to achieve high performance in real applications .",430,3 +431,The [[ accuracy rate ]] of << syntactic disambiguation >> is raised from 46.0 % to 60.62 % by using this novel approach .,431,6 +432,The accuracy rate of [[ syntactic disambiguation ]] is raised from 46.0 % to 60.62 % by using this novel << approach >> .,432,6 +433,"This paper presents a new [[ approach ]] to << statistical sentence generation >> in which alternative phrases are represented as packed sets of trees , or forests , and then ranked statistically to choose the best one .",433,3 +434,[[ It ]] also facilitates more efficient << statistical ranking >> than a previous approach to statistical generation .,434,3 +435,[[ It ]] also facilitates more efficient statistical ranking than a previous << approach >> to statistical generation .,435,5 +436,It also facilitates more efficient statistical ranking than a previous [[ approach ]] to << statistical generation >> .,436,3 +437,"An efficient [[ ranking algorithm ]] is described , together with experimental results showing 
significant improvements over simple << enumeration >> or a lattice-based approach .",437,5 +438,"An efficient [[ ranking algorithm ]] is described , together with experimental results showing significant improvements over simple enumeration or a << lattice-based approach >> .",438,5 +439,"An efficient ranking algorithm is described , together with experimental results showing significant improvements over simple [[ enumeration ]] or a << lattice-based approach >> .",439,0 +440,This article deals with the interpretation of conceptual operations underlying the communicative use of [[ natural language -LRB- NL -RRB- ]] within the << Structured Inheritance Network -LRB- SI-Nets -RRB- paradigm >> .,440,3 +441,"The operations are reduced to functions of a formal language , thus changing the level of abstraction of the [[ operations ]] to be performed on << SI-Nets >> .",441,3 +442,"In this sense , [[ operations ]] on << SI-Nets >> are not merely isomorphic to single epistemological objects , but can be viewed as a simulation of processes on a different level , that pertaining to the conceptual system of NL .",442,3 +443,"In this sense , operations on SI-Nets are not merely isomorphic to single epistemological objects , but can be viewed as a simulation of processes on a different level , that pertaining to the << conceptual system >> of [[ NL ]] .",443,3 +444,"For this purpose , we have designed a version of [[ KL-ONE ]] which represents the epistemological level , while the new experimental language , << KL-Conc >> , represents the conceptual level .",444,5 +445,"For this purpose , we have designed a version of << KL-ONE >> which represents the [[ epistemological level ]] , while the new experimental language , KL-Conc , represents the conceptual level .",445,1 +446,"For this purpose , we have designed a version of KL-ONE which represents the epistemological level , while the new experimental language , << KL-Conc >> , represents the [[ conceptual level ]] .",446,1 +447,We present an [[ algorithm ]] for << calibrated camera relative pose estimation >> from lines .,447,3 +448,We evaluate the performance of the << algorithm >> using [[ synthetic and real data ]] .,448,3 +449,The intended use of the [[ algorithm ]] is with robust << hypothesize-and-test frameworks >> such as RANSAC .,449,0 +450,The intended use of the algorithm is with robust << hypothesize-and-test frameworks >> such as [[ RANSAC ]] .,450,2 +451,Our [[ approach ]] is suitable for << urban and indoor environments >> where most lines are either parallel or orthogonal to each other .,451,3 +452,"In this paper , we present a [[ fully automated extraction system ]] , named IntEx , to identify << gene and protein interactions >> in biomedical text .",452,3 +453,"In this paper , we present a << fully automated extraction system >> , named [[ IntEx ]] , to identify gene and protein interactions in biomedical text .",453,2 +454,"In this paper , we present a fully automated extraction system , named IntEx , to identify << gene and protein interactions >> in [[ biomedical text ]] .",454,3 +455,"Then , tagging << biological entities >> with the help of [[ biomedical and linguistic ontologies ]] .",455,3 +456,Our [[ extraction system ]] handles complex sentences and extracts << multiple and nested interactions >> specified in a sentence .,456,3 +457,Experimental evaluations with two other state of the art << extraction systems >> indicate that the [[ IntEx system ]] achieves better performance without the labor intensive pattern engineering 
requirement .,457,5 +458,This paper introduces a [[ method ]] for << computational analysis of move structures >> in abstracts of research articles .,458,3 +459,This paper introduces a method for << computational analysis of move structures >> in [[ abstracts of research articles ]] .,459,3 +460,The method involves automatically gathering a large number of << abstracts >> from the [[ Web ]] and building a language model of abstract moves .,460,3 +461,The method involves automatically gathering a large number of abstracts from the Web and building a << language model >> of [[ abstract moves ]] .,461,3 +462,"We also present a << prototype concordancer >> , [[ CARE ]] , which exploits the move-tagged abstracts for digital learning .",462,2 +463,"We also present a prototype concordancer , [[ CARE ]] , which exploits the << move-tagged abstracts >> for digital learning .",463,3 +464,"We also present a prototype concordancer , CARE , which exploits the [[ move-tagged abstracts ]] for << digital learning >> .",464,3 +465,This [[ system ]] provides a promising << approach >> to Web-based computer-assisted academic writing .,465,3 +466,This system provides a promising [[ approach ]] to << Web-based computer-assisted academic writing >> .,466,3 +467,This work presents a [[ real-time system ]] for << multiple object tracking in dynamic scenes >> .,467,3 +468,A unique characteristic of the [[ system ]] is its ability to cope with << long-duration and complete occlusion >> without a prior knowledge about the shape or motion of objects .,468,3 +469,A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a [[ prior knowledge ]] about the << shape >> or motion of objects .,469,1 +470,A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a [[ prior knowledge ]] about the shape or << motion of objects >> .,470,1 +471,A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a prior knowledge about the [[ shape ]] or << motion of objects >> .,471,0 +472,"The << system >> produces good segment and [[ tracking ]] results at a frame rate of 15-20 fps for image size of 320x240 , as demonstrated by extensive experiments performed using video sequences under different conditions indoor and outdoor with long-duration and complete occlusions in changing background .",472,6 +473,We propose a [[ method ]] of << organizing reading materials >> for vocabulary learning .,473,3 +474,We propose a method of [[ organizing reading materials ]] for << vocabulary learning >> .,474,3 +475,"We used a specialized vocabulary for an English certification test as the target vocabulary and used [[ English Wikipedia ]] , a << free-content encyclopedia >> , as the target corpus .",475,2 +476,A novel [[ bootstrapping approach ]] to << Named Entity -LRB- NE -RRB- tagging >> using concept-based seeds and successive learners is presented .,476,3 +477,A novel << bootstrapping approach >> to Named Entity -LRB- NE -RRB- tagging using [[ concept-based seeds ]] and successive learners is presented .,477,3 +478,A novel bootstrapping approach to Named Entity -LRB- NE -RRB- tagging using [[ concept-based seeds ]] and << successive learners >> is presented .,478,0 +479,A novel << bootstrapping approach >> to Named Entity -LRB- NE -RRB- tagging using concept-based seeds and [[ successive learners ]] is presented .,479,3 +480,"This approach only requires a few common noun or pronoun seeds 
that correspond to the concept for the targeted << NE >> , e.g. he/she/man / woman for [[ PERSON NE ]] .",480,2 +481,The << bootstrapping procedure >> is implemented as training two [[ successive learners ]] .,481,3 +482,"First , [[ decision list ]] is used to learn the << parsing-based NE rules >> .",482,3 +483,The resulting [[ NE system ]] approaches << supervised NE >> performance for some NE types .,483,3 +484,"We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on [[ word sense disambiguation ]] performance , using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task .",484,6 +485,"We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on word sense disambiguation performance , using standard [[ WSD evaluation methodology ]] and datasets from the Senseval-3 Chinese lexical sample task .",485,6 +486,"We present the first known empirical test of an increasingly common speculative claim , by evaluating a representative << Chinese-to-English SMT model >> directly on word sense disambiguation performance , using standard WSD evaluation methodology and datasets from the [[ Senseval-3 Chinese lexical sample task ]] .",486,6 +487,"Much effort has been put in designing and evaluating << dedicated word sense disambiguation -LRB- WSD -RRB- models >> , in particular with the [[ Senseval series of workshops ]] .",487,6 +488,"At the same time , the recent improvements in the [[ BLEU scores ]] of << statistical machine translation -LRB- SMT -RRB- >> suggests that SMT models are good at predicting the right translation of the words in source language sentences .",488,6 +489,"At the same time , the recent improvements in the BLEU scores of statistical machine translation -LRB- SMT -RRB- suggests that [[ SMT models ]] are good at predicting the right << translation >> of the words in source language sentences .",489,3 +490,"Surprisingly however , the [[ WSD accuracy ]] of << SMT models >> has never been evaluated and compared with that of the dedicated WSD models .",490,6 +491,"Surprisingly however , the << WSD accuracy >> of SMT models has never been evaluated and compared with [[ that ]] of the dedicated WSD models .",491,5 +492,We present controlled experiments showing the [[ WSD accuracy ]] of current typical << SMT models >> to be significantly lower than that of all the dedicated WSD models considered .,492,6 +493,We present controlled experiments showing the << WSD accuracy >> of current typical SMT models to be significantly lower than [[ that ]] of all the dedicated WSD models considered .,493,5 +494,"This tends to support the view that despite recent speculative claims to the contrary , current [[ SMT models ]] do have limitations in comparison with << dedicated WSD models >> , and that SMT should benefit from the better predictions made by the WSD models .",494,5 +495,"This tends to support the view that despite recent speculative claims to the contrary , current SMT models do have limitations in comparison with dedicated WSD models , and that << SMT >> should benefit from the better predictions made by the [[ WSD models ]] .",495,3 +496,"In this paper we present a novel , customizable : << IE paradigm >> that takes advantage of [[ predicate-argument structures ]] .",496,3 +497,<< It >> is based on : -LRB- 1 -RRB- an extended set of 
[[ features ]] ; and -LRB- 2 -RRB- inductive decision tree learning .,497,3 +498,It is based on : -LRB- 1 -RRB- an extended set of [[ features ]] ; and -LRB- 2 -RRB- << inductive decision tree learning >> .,498,0 +499,<< It >> is based on : -LRB- 1 -RRB- an extended set of features ; and -LRB- 2 -RRB- [[ inductive decision tree learning ]] .,499,3 +500,The experimental results prove our claim that accurate [[ predicate-argument structures ]] enable high quality << IE >> results .,500,3 +501,"In this paper we present a [[ statistical profile ]] of the << Named Entity task >> , a specific information extraction task for which corpora in several languages are available .",501,3 +502,"In this paper we present a statistical profile of the [[ Named Entity task ]] , a specific << information extraction task >> for which corpora in several languages are available .",502,2 +503,"Using the results of the [[ statistical analysis ]] , we propose an << algorithm >> for lower bound estimation for Named Entity corpora and discuss the significance of the cross-lingual comparisons provided by the analysis .",503,3 +504,"Using the results of the statistical analysis , we propose an [[ algorithm ]] for << lower bound estimation >> for Named Entity corpora and discuss the significance of the cross-lingual comparisons provided by the analysis .",504,3 +505,"Using the results of the statistical analysis , we propose an algorithm for [[ lower bound estimation ]] for << Named Entity corpora >> and discuss the significance of the cross-lingual comparisons provided by the analysis .",505,3 +506,"We attack an inexplicably << under-explored language genre of spoken language >> -- [[ lyrics in music ]] -- via completely unsuper-vised induction of an SMT-style stochastic transduction grammar for hip hop lyrics , yielding a fully-automatically learned challenge-response system that produces rhyming lyrics given an input .",506,2 +507,"We attack an inexplicably << under-explored language genre of spoken language >> -- lyrics in music -- via completely [[ unsuper-vised induction ]] of an SMT-style stochastic transduction grammar for hip hop lyrics , yielding a fully-automatically learned challenge-response system that produces rhyming lyrics given an input .",507,3 +508,"We attack an inexplicably under-explored language genre of spoken language -- lyrics in music -- via completely [[ unsuper-vised induction ]] of an << SMT-style stochastic transduction grammar >> for hip hop lyrics , yielding a fully-automatically learned challenge-response system that produces rhyming lyrics given an input .",508,3 +509,"We attack an inexplicably under-explored language genre of spoken language -- lyrics in music -- via completely [[ unsuper-vised induction ]] of an SMT-style stochastic transduction grammar for hip hop lyrics , yielding a << fully-automatically learned challenge-response system >> that produces rhyming lyrics given an input .",509,3 +510,"We attack an inexplicably under-explored language genre of spoken language -- lyrics in music -- via completely unsuper-vised induction of an << SMT-style stochastic transduction grammar >> for [[ hip hop lyrics ]] , yielding a fully-automatically learned challenge-response system that produces rhyming lyrics given an input .",510,1 +511,"We attack an inexplicably under-explored language genre of spoken language -- lyrics in music -- via completely unsuper-vised induction of an SMT-style stochastic transduction grammar for hip hop lyrics , yielding a [[ fully-automatically learned 
challenge-response system ]] that produces << rhyming lyrics >> given an input .",511,3 +512,"In spite of the level of difficulty of the challenge , the [[ model ]] nevertheless produces fluent output as judged by human evaluators , and performs significantly better than widely used << phrase-based SMT models >> upon the same task .",512,5 +513,"In spite of the level of difficulty of the challenge , the << model >> nevertheless produces fluent output as judged by human evaluators , and performs significantly better than widely used phrase-based SMT models upon the same [[ task ]] .",513,6 +514,"In spite of the level of difficulty of the challenge , the model nevertheless produces fluent output as judged by human evaluators , and performs significantly better than widely used << phrase-based SMT models >> upon the same [[ task ]] .",514,6 +515,"In this paper , we investigate the problem of automatically << predicting segment boundaries >> in [[ spoken multiparty dialogue ]] .",515,3 +516,We first apply [[ approaches ]] that have been proposed for << predicting top-level topic shifts >> to the problem of identifying subtopic boundaries .,516,3 +517,We first apply [[ approaches ]] that have been proposed for predicting top-level topic shifts to the problem of << identifying subtopic boundaries >> .,517,3 +518,We first apply approaches that have been proposed for [[ predicting top-level topic shifts ]] to the problem of << identifying subtopic boundaries >> .,518,1 +519,We then explore the impact on performance of using [[ ASR output ]] as opposed to << human transcription >> .,519,5 +520,"Examination of the effect of features shows that << predicting top-level and predicting subtopic boundaries >> are two distinct tasks : -LRB- 1 -RRB- for [[ predicting subtopic boundaries ]] , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task .",520,4 +521,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for << predicting subtopic boundaries >> , the [[ lexical cohesion-based approach ]] alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task .",521,3 +522,"Examination of the effect of features shows that << predicting top-level and predicting subtopic boundaries >> are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for [[ predicting top-level boundaries ]] , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task .",522,4 +523,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical 
cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for << predicting top-level boundaries >> , the [[ machine learning approach ]] that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task .",523,3 +524,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the << machine learning approach >> that combines [[ lexical-cohesion and conversational features ]] performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task .",524,0 +525,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- << conversational cues >> , such as [[ cue phrases ]] and overlapping speech , are better indicators for the top-level prediction task .",525,2 +526,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- << conversational cues >> , such as cue phrases and [[ overlapping speech ]] , are better indicators for the top-level prediction task .",526,2 +527,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as << cue phrases >> and [[ overlapping speech ]] , are better indicators for the top-level prediction task .",527,0 +528,"Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results , -LRB- 2 -RRB- for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best , and -LRB- 3 -RRB- conversational cues , such as cue phrases and overlapping speech , are better [[ indicators ]] for the << top-level prediction task >> .",528,3 +529,"We also find that the [[ transcription errors ]] inevitable in << ASR output >> have a negative impact on models that combine lexical-cohesion and conversational features , but do not change the general preference of approach for the two tasks .",529,1 
+530,"We also find that the transcription errors inevitable in ASR output have a negative impact on [[ models ]] that combine << lexical-cohesion and conversational features >> , but do not change the general preference of approach for the two tasks .",530,0 +531,We describe a simple [[ unsupervised technique ]] for learning << morphology >> by identifying hubs in an automaton .,531,3 +532,We describe a simple << unsupervised technique >> for learning morphology by identifying [[ hubs ]] in an automaton .,532,3 +533,We describe a simple unsupervised technique for learning morphology by identifying [[ hubs ]] in an << automaton >> .,533,4 +534,"For our purposes , a [[ hub ]] is a << node >> in a graph with in-degree greater than one and out-degree greater than one .",534,2 +535,"For our purposes , a hub is a [[ node ]] in a << graph >> with in-degree greater than one and out-degree greater than one .",535,4 +536,"We create a [[ word-trie ]] , transform it into a minimal DFA , then identify << hubs >> .",536,3 +537,"We create a word-trie , transform it into a [[ minimal DFA ]] , then identify << hubs >> .",537,3 +538,"In << Bayesian machine learning >> , [[ conjugate priors ]] are popular , mostly due to mathematical convenience .",538,4 +539,"Specifically , we formulate the << conjugate prior >> in the form of [[ Bregman divergence ]] and show that it is the inherent geometry of conjugate priors that makes them appropriate and intuitive .",539,1 +540,We use this [[ geometric understanding of conjugate priors ]] to derive the << hyperparameters >> and expression of the prior used to couple the generative and discriminative components of a hybrid model for semi-supervised learning .,540,3 +541,We use this geometric understanding of conjugate priors to derive the hyperparameters and expression of the [[ prior ]] used to couple the << generative and discriminative components >> of a hybrid model for semi-supervised learning .,541,3 +542,We use this geometric understanding of conjugate priors to derive the hyperparameters and expression of the prior used to couple the [[ generative and discriminative components ]] of a << hybrid model >> for semi-supervised learning .,542,4 +543,We use this geometric understanding of conjugate priors to derive the hyperparameters and expression of the prior used to couple the generative and discriminative components of a [[ hybrid model ]] for << semi-supervised learning >> .,543,3 +544,"This paper defines a << generative probabilistic model of parse trees >> , which we call [[ PCFG-LA ]] .",544,2 +545,This << model >> is an extension of [[ PCFG ]] in which non-terminal symbols are augmented with latent variables .,545,3 +546,This model is an extension of << PCFG >> in which [[ non-terminal symbols ]] are augmented with latent variables .,546,4 +547,This model is an extension of PCFG in which << non-terminal symbols >> are augmented with [[ latent variables ]] .,547,3 +548,<< Finegrained CFG rules >> are automatically induced from a [[ parsed corpus ]] by training a PCFG-LA model using an EM-algorithm .,548,3 +549,Finegrained CFG rules are automatically induced from a parsed corpus by training a << PCFG-LA model >> using an [[ EM-algorithm ]] .,549,3 +550,"Because << exact parsing >> with a [[ PCFG-LA ]] is NP-hard , several approximations are described and empirically compared .",550,3 +551,"In experiments using the [[ Penn WSJ corpus ]] , our automatically trained << model >> gave a performance of 86.6 % -LRB- F1 , sentences < 40 words -RRB- , which is 
comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection .",551,6 +552,"In experiments using the [[ Penn WSJ corpus ]] , our automatically trained model gave a performance of 86.6 % -LRB- F1 , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive manual feature selection .",552,6 +553,"In experiments using the Penn WSJ corpus , our automatically trained [[ model ]] gave a performance of 86.6 % -LRB- F1 , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive manual feature selection .",553,5 +554,"In experiments using the Penn WSJ corpus , our automatically trained << model >> gave a performance of 86.6 % -LRB- [[ F1 ]] , sentences < 40 words -RRB- , which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection .",554,6 +555,"In experiments using the Penn WSJ corpus , our automatically trained model gave a performance of 86.6 % -LRB- [[ F1 ]] , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive manual feature selection .",555,6 +556,"In experiments using the Penn WSJ corpus , our automatically trained model gave a performance of 86.6 % -LRB- F1 , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive [[ manual feature selection ]] .",556,3 +557,"First , we present a new paradigm for << speaker-independent -LRB- SI -RRB- training of hidden Markov models -LRB- HMM -RRB- >> , which uses a large amount of [[ speech ]] from a few speakers instead of the traditional practice of using a little speech from many speakers .",557,3 +558,"In addition , combination of the training speakers is done by [[ averaging the statistics of independently trained models ]] rather than the usual << pooling of all the speech data >> from many speakers prior to training .",558,5 +559,"With only 12 training speakers for << SI recognition >> , we achieved a 7.5 % [[ word error rate ]] on a standard grammar and test set from the DARPA Resource Management corpus .",559,6 +560,"With only 12 training speakers for << SI recognition >> , we achieved a 7.5 % word error rate on a standard grammar and test set from the [[ DARPA Resource Management corpus ]] .",560,6 +561,"Second , we show a significant improvement for << speaker adaptation -LRB- SA -RRB- >> using the new [[ SI corpus ]] and a small amount of speech from the new -LRB- target -RRB- speaker .",561,6 +562,"Using only 40 utterances from the target speaker for << adaptation >> , the [[ error rate ]] dropped to 4.1 % -- a 45 % reduction in error compared to the SI result .",562,6 +563,"[[ Dictionary construction ]] , one of the most difficult tasks in developing a << machine translation system >> , is expensive .",563,4 +564,"To avoid this problem , we investigate how we build a << dictionary >> using existing [[ linguistic resources ]] .",564,3 +565,"Our algorithm can be applied to any language pairs , but for the present we focus on building a << Korean-to-Japanese dictionary >> using [[ English ]] as a pivot .",565,3 +566,We attempt three ways of [[ automatic construction ]] to corroborate the effect of the << directionality of dictionaries >> .,566,6 +567,"First , we introduce << `` one-time look up '' method >> using a [[ Korean-to-English and a Japanese-to-English dictionary ]] .",567,3 +568,"Second , we show a << 
method >> using [[ `` overlapping constraint '' ]] with a Korean-to-English dictionary and an English-to-Japanese dictionary .",568,3 +569,"Second , we show a << method >> using `` overlapping constraint '' with a [[ Korean-to-English dictionary ]] and an English-to-Japanese dictionary .",569,3 +570,"Second , we show a method using `` overlapping constraint '' with a [[ Korean-to-English dictionary ]] and an << English-to-Japanese dictionary >> .",570,0 +571,"Second , we show a << method >> using `` overlapping constraint '' with a Korean-to-English dictionary and an [[ English-to-Japanese dictionary ]] .",571,3 +572,"Third , we consider another alternative [[ method ]] rarely used for building a << dictionary >> : an English-to-Korean dictionary and English-to-Japanese dictionary .",572,3 +573,"Third , we consider another alternative method rarely used for building a << dictionary >> : an [[ English-to-Korean dictionary ]] and English-to-Japanese dictionary .",573,2 +574,"Third , we consider another alternative method rarely used for building a dictionary : an [[ English-to-Korean dictionary ]] and << English-to-Japanese dictionary >> .",574,0 +575,"Third , we consider another alternative method rarely used for building a << dictionary >> : an English-to-Korean dictionary and [[ English-to-Japanese dictionary ]] .",575,2 +576,An empirical comparison of [[ CFG filtering techniques ]] for << LTAG >> and HPSG is presented .,576,3 +577,An empirical comparison of [[ CFG filtering techniques ]] for LTAG and << HPSG >> is presented .,577,3 +578,An empirical comparison of CFG filtering techniques for [[ LTAG ]] and << HPSG >> is presented .,578,5 +579,We demonstrate that an [[ approximation of HPSG ]] produces a more effective << CFG filter >> than that of LTAG .,579,3 +580,We demonstrate that an approximation of HPSG produces a more effective [[ CFG filter ]] than << that >> of LTAG .,580,5 +581,We demonstrate that an approximation of HPSG produces a more effective CFG filter than [[ that ]] of << LTAG >> .,581,3 +582,<< Syntax-based statistical machine translation -LRB- MT -RRB- >> aims at applying [[ statistical models ]] to structured data .,582,3 +583,Syntax-based statistical machine translation -LRB- MT -RRB- aims at applying << statistical models >> to [[ structured data ]] .,583,3 +584,"In this paper , we present a << syntax-based statistical machine translation system >> based on a [[ probabilistic synchronous dependency insertion grammar ]] .",584,3 +585,[[ Synchronous dependency insertion grammars ]] are a version of << synchronous grammars >> defined on dependency trees .,585,2 +586,<< Synchronous dependency insertion grammars >> are a version of synchronous grammars defined on [[ dependency trees ]] .,586,1 +587,We first introduce our [[ approach ]] to inducing such a << grammar >> from parallel corpora .,587,3 +588,We first introduce our approach to inducing such a << grammar >> from [[ parallel corpora ]] .,588,3 +589,"Second , we describe the [[ graphical model ]] for the << machine translation task >> , which can also be viewed as a stochastic tree-to-tree transducer .",589,3 +590,"Second , we describe the << graphical model >> for the machine translation task , which can also be viewed as a [[ stochastic tree-to-tree transducer ]] .",590,3 +591,We introduce a [[ polynomial time decoding algorithm ]] for the << model >> .,591,3 +592,We evaluate the outputs of our << MT system >> using the [[ NIST and Bleu automatic MT evaluation software ]] .,592,3 +593,The result shows that our 
[[ system ]] outperforms the << baseline system >> based on the IBM models in both translation speed and quality .,593,5 +594,The result shows that our system outperforms the << baseline system >> based on the [[ IBM models ]] in both translation speed and quality .,594,3 +595,The result shows that our << system >> outperforms the baseline system based on the IBM models in both [[ translation speed and quality ]] .,595,6 +596,The result shows that our system outperforms the << baseline system >> based on the IBM models in both [[ translation speed and quality ]] .,596,6 +597,We propose a << framework >> to derive the distance between concepts from [[ distributional measures of word co-occurrences ]] .,597,3 +598,"We show that the newly proposed [[ concept-distance measures ]] outperform traditional distributional word-distance measures in the << tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- correcting real-word spelling errors .",598,3 +599,"We show that the newly proposed << concept-distance measures >> outperform traditional [[ distributional word-distance measures ]] in the tasks of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- correcting real-word spelling errors .",599,5 +600,"We show that the newly proposed concept-distance measures outperform traditional [[ distributional word-distance measures ]] in the << tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- correcting real-word spelling errors .",600,3 +601,"We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the << tasks >> of -LRB- 1 -RRB- [[ ranking word pairs in order of semantic distance ]] , and -LRB- 2 -RRB- correcting real-word spelling errors .",601,2 +602,"We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the << tasks >> of -LRB- 1 -RRB- ranking word pairs in order of semantic distance , and -LRB- 2 -RRB- [[ correcting real-word spelling errors ]] .",602,2 +603,"We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the tasks of -LRB- 1 -RRB- << ranking word pairs in order of semantic distance >> , and -LRB- 2 -RRB- [[ correcting real-word spelling errors ]] .",603,0 +604,"In the latter [[ task ]] , of all the << WordNet-based measures >> , only that proposed by Jiang and Conrath outperforms the best distributional concept-distance measures .",604,6 +605,"In the latter [[ task ]] , of all the WordNet-based measures , only that proposed by Jiang and Conrath outperforms the best << distributional concept-distance measures >> .",605,6 +606,"In the latter task , of all the << WordNet-based measures >> , only that proposed by Jiang and Conrath outperforms the best [[ distributional concept-distance measures ]] .",606,5 +607,One of the main results of this work is the definition of a relation between [[ broad semantic classes ]] and << LCS meaning components >> .,607,0 +608,"Our [[ acquisition program - LEXICALL - ]] takes , as input , the result of previous work on verb classification and thematic grid tagging , and outputs << LCS representations >> for different languages .",608,3 +609,"Our << acquisition program - LEXICALL - >> takes , as input , the result of previous work on [[ verb classification ]] and thematic grid tagging , and outputs LCS representations for different languages .",609,3 
+610,"Our acquisition program - LEXICALL - takes , as input , the result of previous work on [[ verb classification ]] and << thematic grid tagging >> , and outputs LCS representations for different languages .",610,0 +611,"Our << acquisition program - LEXICALL - >> takes , as input , the result of previous work on verb classification and [[ thematic grid tagging ]] , and outputs LCS representations for different languages .",611,3 +612,"These [[ representations ]] have been ported into << English , Arabic and Spanish lexicons >> , each containing approximately 9000 verbs .",612,3 +613,We are currently using these [[ lexicons ]] in an << operational foreign language tutoring >> and machine translation .,613,3 +614,We are currently using these [[ lexicons ]] in an operational foreign language tutoring and << machine translation >> .,614,3 +615,We are currently using these lexicons in an [[ operational foreign language tutoring ]] and << machine translation >> .,615,0 +616,The theoretical study of the [[ range concatenation grammar -LSB- RCG -RSB- formalism ]] has revealed many attractive properties which may be used in << NLP >> .,616,3 +617,"In particular , << range concatenation languages -LSB- RCL -RSB- >> can be parsed in [[ polynomial time ]] and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity .",617,1 +618,"In particular , range concatenation languages -LSB- RCL -RSB- can be parsed in polynomial time and many classical << grammatical formalisms >> can be translated into equivalent RCGs without increasing their [[ worst-case parsing time complexity ]] .",618,6 +619,"For example , after translation into an equivalent RCG , any << tree adjoining grammar >> can be parsed in [[ O -LRB- n6 -RRB- time ]] .",619,1 +620,"In this paper , we study a [[ parsing technique ]] whose purpose is to improve the practical efficiency of << RCL parsers >> .",620,3 +621,The non-deterministic parsing choices of the [[ main parser ]] for a << language L >> are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L .,621,3 +622,The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the << shared derivation forest >> output by a prior [[ RCL parser ]] for a suitable superset of L .,622,3 +623,The results of a practical evaluation of this << method >> on a [[ wide coverage English grammar ]] are given .,623,6 +624,"In this paper we introduce [[ Ant-Q ]] , a family of algorithms which present many similarities with Q-learning -LRB- Watkins , 1989 -RRB- , and which we apply to the solution of << symmetric and asym-metric instances of the traveling salesman problem -LRB- TSP -RRB- >> .",624,3 +625,"<< Ant-Q algorithms >> were inspired by work on the [[ ant system -LRB- AS -RRB- ]] , a distributed algorithm for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .",625,3 +626,"Ant-Q algorithms were inspired by work on the [[ ant system -LRB- AS -RRB- ]] , a << distributed algorithm >> for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .",626,2 +627,"Ant-Q algorithms were inspired by work on the ant system -LRB- AS -RRB- , a [[ distributed algorithm ]] for << combinatorial optimization >> based 
on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo , 1992 ; Dorigo , Maniezzo and Colorni , 1996 -RRB- .",627,3 +628,"We show that [[ AS ]] is a particular instance of the << Ant-Q family >> , and that there are instances of this family which perform better than AS .",628,2 +629,"We show that AS is a particular instance of the Ant-Q family , and that there are [[ instances ]] of this << family >> which perform better than AS .",629,4 +630,"We show that AS is a particular instance of the Ant-Q family , and that there are [[ instances ]] of this family which perform better than << AS >> .",630,5 +631,We experimentally investigate the functioning of Ant-Q and we show that the results obtained by [[ Ant-Q ]] on << symmetric TSP >> 's are competitive with those obtained by other heuristic approaches based on neural networks or local search .,631,3 +632,We experimentally investigate the functioning of Ant-Q and we show that the results obtained by [[ Ant-Q ]] on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on neural networks or local search .,632,5 +633,We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on [[ neural networks ]] or local search .,633,3 +634,We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other heuristic approaches based on [[ neural networks ]] or << local search >> .,634,0 +635,We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other << heuristic approaches >> based on neural networks or [[ local search ]] .,635,3 +636,"Finally , we apply [[ Ant-Q ]] to some difficult << asymmetric TSP >> 's obtaining very good results : Ant-Q was able to find solutions of a quality which usually can be found only by very specialized algorithms .",636,3 +637,"In this paper , we develop a [[ geometric framework ]] for << linear or nonlinear discriminant subspace learning and classification >> .",637,3 +638,"In our framework , the << structures of classes >> are conceptualized as a [[ semi-Riemannian manifold ]] which is considered as a submanifold embedded in an ambient semi-Riemannian space .",638,3 +639,"In our framework , the structures of classes are conceptualized as a semi-Riemannian manifold which is considered as a [[ submanifold ]] embedded in an << ambient semi-Riemannian space >> .",639,4 +640,The << class structures >> of original samples can be characterized and deformed by [[ local metrics of the semi-Riemannian space ]] .,640,3 +641,<< Semi-Riemannian metrics >> are uniquely determined by the [[ smoothing of discrete functions ]] and the nullity of the semi-Riemannian space .,641,3 +642,Semi-Riemannian metrics are uniquely determined by the [[ smoothing of discrete functions ]] and the << nullity of the semi-Riemannian space >> .,642,0 +643,<< Semi-Riemannian metrics >> are uniquely determined by the smoothing of discrete functions and the [[ nullity of the semi-Riemannian space ]] .,643,3 +644,"Based on the geometrization of class structures , optimizing << class structures >> in the [[ feature space ]] is equivalent to maximizing the quadratic quantities of metric tensors in the semi-Riemannian space .",644,1 +645,"Based on the 
geometrization of class structures , optimizing class structures in the feature space is equivalent to maximizing the << quadratic quantities of metric tensors >> in the [[ semi-Riemannian space ]] .",645,1 +646,"Based on the proposed [[ framework ]] , a novel << algorithm >> , dubbed as Semi-Riemannian Discriminant Analysis -LRB- SRDA -RRB- , is presented for subspace-based classification .",646,3 +647,"Based on the proposed framework , a novel [[ algorithm ]] , dubbed as Semi-Riemannian Discriminant Analysis -LRB- SRDA -RRB- , is presented for << subspace-based classification >> .",647,3 +648,The performance of [[ SRDA ]] is tested on face recognition -LRB- singular case -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing << algorithms >> .,648,5 +649,The performance of << SRDA >> is tested on [[ face recognition -LRB- singular case ]] -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing algorithms .,649,6 +650,The performance of SRDA is tested on [[ face recognition -LRB- singular case ]] -RRB- and << handwritten capital letter classification -LRB- nonsingular case -RRB- >> against existing algorithms .,650,0 +651,The performance of SRDA is tested on [[ face recognition -LRB- singular case ]] -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing << algorithms >> .,651,6 +652,The performance of << SRDA >> is tested on face recognition -LRB- singular case -RRB- and [[ handwritten capital letter classification -LRB- nonsingular case -RRB- ]] against existing algorithms .,652,6 +653,The performance of SRDA is tested on face recognition -LRB- singular case -RRB- and [[ handwritten capital letter classification -LRB- nonsingular case -RRB- ]] against existing << algorithms >> .,653,6 +654,"The experimental results show that [[ SRDA ]] works well on << recognition >> and classification , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .",654,3 +655,"The experimental results show that [[ SRDA ]] works well on recognition and << classification >> , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .",655,3 +656,"The experimental results show that SRDA works well on [[ recognition ]] and << classification >> , implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning .",656,0 +657,"The experimental results show that SRDA works well on recognition and classification , implying that [[ semi-Riemannian geometry ]] is a promising new tool for << pattern recognition >> and machine learning .",657,3 +658,"The experimental results show that SRDA works well on recognition and classification , implying that [[ semi-Riemannian geometry ]] is a promising new tool for pattern recognition and << machine learning >> .",658,3 +659,"The experimental results show that SRDA works well on recognition and classification , implying that semi-Riemannian geometry is a promising new tool for [[ pattern recognition ]] and << machine learning >> .",659,0 +660,A [[ deterministic parser ]] is under development which represents a departure from traditional << deterministic parsers >> in that it combines both symbolic and connectionist components .,660,5 +661,A deterministic parser is under development which represents a departure from traditional deterministic parsers in that << it >> combines both [[ symbolic and connectionist 
components ]] .,661,4 +662,The << connectionist component >> is trained either from [[ patterns ]] derived from the rules of a deterministic grammar .,662,3 +663,The connectionist component is trained either from << patterns >> derived from the [[ rules of a deterministic grammar ]] .,663,3 +664,The development and evolution of such a [[ hybrid architecture ]] has lead to a << parser >> which is superior to any known deterministic parser .,664,3 +665,The development and evolution of such a hybrid architecture has lead to a [[ parser ]] which is superior to any known << deterministic parser >> .,665,5 +666,Experiments are described and powerful [[ training techniques ]] are demonstrated that permit << decision-making >> by the connectionist component in the parsing process .,666,3 +667,Experiments are described and powerful training techniques are demonstrated that permit << decision-making >> by the [[ connectionist component ]] in the parsing process .,667,3 +668,Experiments are described and powerful training techniques are demonstrated that permit decision-making by the [[ connectionist component ]] in the << parsing process >> .,668,4 +669,Data are presented which show how a [[ connectionist -LRB- neural -RRB- network ]] trained with linguistic rules can parse both << expected -LRB- grammatical -RRB- sentences >> as well as some novel -LRB- ungrammatical or lexically ambiguous -RRB- sentences .,669,3 +670,Data are presented which show how a [[ connectionist -LRB- neural -RRB- network ]] trained with linguistic rules can parse both expected -LRB- grammatical -RRB- sentences as well as some novel << -LRB- ungrammatical or lexically ambiguous -RRB- sentences >> .,670,3 +671,Data are presented which show how a << connectionist -LRB- neural -RRB- network >> trained with [[ linguistic rules ]] can parse both expected -LRB- grammatical -RRB- sentences as well as some novel -LRB- ungrammatical or lexically ambiguous -RRB- sentences .,671,3 +672,Data are presented which show how a connectionist -LRB- neural -RRB- network trained with linguistic rules can parse both [[ expected -LRB- grammatical -RRB- sentences ]] as well as some novel << -LRB- ungrammatical or lexically ambiguous -RRB- sentences >> .,672,0 +673,"Robust << natural language interpretation >> requires strong [[ semantic domain models ]] , fail-soft recovery heuristics , and very flexible control structures .",673,3 +674,"Robust natural language interpretation requires strong [[ semantic domain models ]] , << fail-soft recovery heuristics >> , and very flexible control structures .",674,0 +675,"Robust << natural language interpretation >> requires strong semantic domain models , [[ fail-soft recovery heuristics ]] , and very flexible control structures .",675,3 +676,"Robust natural language interpretation requires strong semantic domain models , [[ fail-soft recovery heuristics ]] , and very flexible << control structures >> .",676,0 +677,"Robust << natural language interpretation >> requires strong semantic domain models , fail-soft recovery heuristics , and very flexible [[ control structures ]] .",677,3 +678,"Although [[ single-strategy parsers ]] have met with a measure of success , a << multi-strategy approach >> is shown to provide a much higher degree of flexibility , redundancy , and ability to bring task-specific domain knowledge -LRB- in addition to general linguistic knowledge -RRB- to bear on both grammatical and ungrammatical input .",678,5 +679,"Although single-strategy parsers have met with a measure of success , a 
multi-strategy approach is shown to provide a much higher degree of flexibility , redundancy , and ability to bring [[ task-specific domain knowledge ]] -LRB- in addition to << general linguistic knowledge >> -RRB- to bear on both grammatical and ungrammatical input .",679,0 +680,"A << parsing algorithm >> is presented that integrates several different [[ parsing strategies ]] , with case-frame instantiation dominating .",680,4 +681,"A parsing algorithm is presented that integrates several different << parsing strategies >> , with [[ case-frame instantiation ]] dominating .",681,2 +682,"Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process << conjunctions >> , fragmentary input , and ungrammatical structures , as well as less exotic , grammatically correct input .",682,3 +683,"Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , << fragmentary input >> , and ungrammatical structures , as well as less exotic , grammatically correct input .",683,3 +684,"Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and << ungrammatical structures >> , as well as less exotic , grammatically correct input .",684,3 +685,"Each of these [[ parsing strategies ]] exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and ungrammatical structures , as well as less << exotic , grammatically correct input >> .",685,3 +686,"Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process [[ conjunctions ]] , << fragmentary input >> , and ungrammatical structures , as well as less exotic , grammatically correct input .",686,0 +687,"Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , [[ fragmentary input ]] , and << ungrammatical structures >> , as well as less exotic , grammatically correct input .",687,0 +688,"Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , fragmentary input , and [[ ungrammatical structures ]] , as well as less << exotic , grammatically correct input >> .",688,0 +689,Several [[ specific heuristics ]] for handling << ungrammatical input >> are presented within this multi-strategy framework .,689,3 +690,Several [[ specific heuristics ]] for handling ungrammatical input are presented within this << multi-strategy framework >> .,690,4 +691,"Recently , [[ Stacked Auto-Encoders -LRB- SAE -RRB- ]] have been successfully used for << learning imbalanced datasets >> .",691,3 +692,"In this paper , for the first time , we propose to use a [[ Neural Network classifier ]] furnished by an SAE structure for detecting the errors made by a strong << Automatic Speech Recognition -LRB- ASR -RRB- system >> .",692,3 +693,"In this paper , for the first time , we propose to use a << Neural Network classifier >> furnished by an [[ SAE structure ]] for detecting the errors made by a strong Automatic Speech Recognition -LRB- ASR -RRB- system .",693,3 +694,"[[ Error detection ]] on an << automatic transcription >> 
provided by a '' strong '' ASR system , i.e. exhibiting a small word error rate , is difficult due to the limited number of '' positive '' examples -LRB- i.e. words erroneously recognized -RRB- available for training a binary classi-fier .",694,3 +695,"In this paper we investigate and compare different types of [[ classifiers ]] for << automatically detecting ASR errors >> , including the one based on a stacked auto-encoder architecture .",695,3 +696,"In this paper we investigate and compare different types of << classifiers >> for automatically detecting ASR errors , including the [[ one ]] based on a stacked auto-encoder architecture .",696,2 +697,"In this paper we investigate and compare different types of classifiers for automatically detecting ASR errors , including the << one >> based on a [[ stacked auto-encoder architecture ]] .",697,3 +698,We show the effectiveness of the latter by measuring and comparing performance on the << automatic transcriptions >> of an [[ English corpus ]] collected from TED talks .,698,1 +699,We show the effectiveness of the latter by measuring and comparing performance on the automatic transcriptions of an << English corpus >> collected from [[ TED talks ]] .,699,3 +700,"Performance of each investigated << classifier >> is evaluated both via [[ receiving operating curve ]] and via a measure , called mean absolute error , related to the quality in predicting the corresponding word error rate .",700,6 +701,"Performance of each investigated classifier is evaluated both via [[ receiving operating curve ]] and via a << measure >> , called mean absolute error , related to the quality in predicting the corresponding word error rate .",701,0 +702,"Performance of each investigated << classifier >> is evaluated both via receiving operating curve and via a [[ measure ]] , called mean absolute error , related to the quality in predicting the corresponding word error rate .",702,6 +703,The results demonstrates that the [[ classifier ]] based on SAE detects the << ASR errors >> better than the other classification methods .,703,3 +704,The results demonstrates that the [[ classifier ]] based on SAE detects the ASR errors better than the other << classification methods >> .,704,5 +705,The results demonstrates that the << classifier >> based on [[ SAE ]] detects the ASR errors better than the other classification methods .,705,3 +706,The results demonstrates that the classifier based on SAE detects the << ASR errors >> better than the other [[ classification methods ]] .,706,3 +707,"Within the EU Network of Excellence PASCAL , a challenge was organized to design a [[ statistical machine learning algorithm ]] that segments words into the << smallest meaning-bearing units of language >> , morphemes .",707,3 +708,"Within the EU Network of Excellence PASCAL , a challenge was organized to design a statistical machine learning algorithm that segments words into the << smallest meaning-bearing units of language >> , [[ morphemes ]] .",708,2 +709,"Ideally , [[ these ]] are basic vocabulary units suitable for different << tasks >> , such as speech and text understanding , machine translation , information retrieval , and statistical language modeling .",709,3 +710,"Ideally , these are basic vocabulary units suitable for different << tasks >> , such as [[ speech and text understanding ]] , machine translation , information retrieval , and statistical language modeling .",710,2 +711,"Ideally , these are basic vocabulary units suitable for different tasks , such as [[ speech and 
text understanding ]] , << machine translation >> , information retrieval , and statistical language modeling .",711,0 +712,"Ideally , these are basic vocabulary units suitable for different << tasks >> , such as speech and text understanding , [[ machine translation ]] , information retrieval , and statistical language modeling .",712,2 +713,"Ideally , these are basic vocabulary units suitable for different tasks , such as speech and text understanding , [[ machine translation ]] , << information retrieval >> , and statistical language modeling .",713,0 +714,"Ideally , these are basic vocabulary units suitable for different << tasks >> , such as speech and text understanding , machine translation , [[ information retrieval ]] , and statistical language modeling .",714,2 +715,"Ideally , these are basic vocabulary units suitable for different tasks , such as speech and text understanding , machine translation , [[ information retrieval ]] , and << statistical language modeling >> .",715,0 +716,"Ideally , these are basic vocabulary units suitable for different << tasks >> , such as speech and text understanding , machine translation , information retrieval , and [[ statistical language modeling ]] .",716,2 +717,"In this paper , we evaluate the application of these [[ segmen-tation algorithms ]] to << large vocabulary speech recognition >> using statistical n-gram language models based on the proposed word segments instead of entire words .",717,3 +718,"In this paper , we evaluate the application of these << segmen-tation algorithms >> to large vocabulary speech recognition using [[ statistical n-gram language models ]] based on the proposed word segments instead of entire words .",718,6 +719,Experiments were done for two << ag-glutinative and morphologically rich languages >> : [[ Finnish ]] and Turk-ish .,719,2 +720,Experiments were done for two ag-glutinative and morphologically rich languages : [[ Finnish ]] and << Turk-ish >> .,720,0 +721,Experiments were done for two << ag-glutinative and morphologically rich languages >> : Finnish and [[ Turk-ish ]] .,721,2 +722,This paper describes a recently collected [[ spoken language corpus ]] for the << ATIS -LRB- Air Travel Information System -RRB- domain >> .,722,3 +723,"We summarize the motivation for this effort , the goals , the implementation of a multi-site data collection paradigm , and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of [[ spontaneous speech ]] from five sites for use in a << multi-site common evaluation of speech , natural language and spoken language >> .",723,6 +724,This paper proposes the [[ Hierarchical Directed Acyclic Graph -LRB- HDAG -RRB- Kernel ]] for << structured natural language data >> .,724,3 +725,We applied the proposed [[ method ]] to << question classification and sentence alignment tasks >> to evaluate its performance as a similarity measure and a kernel function .,725,3 +726,We applied the proposed << method >> to question classification and sentence alignment tasks to evaluate its performance as a [[ similarity measure ]] and a kernel function .,726,6 +727,We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a [[ similarity measure ]] and a << kernel function >> .,727,0 +728,We applied the proposed << method >> to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a [[ kernel function ]] .,728,6 +729,The results of the experiments 
demonstrate that the [[ HDAG Kernel ]] is superior to other << kernel functions >> and baseline methods .,729,5 +730,The results of the experiments demonstrate that the [[ HDAG Kernel ]] is superior to other kernel functions and << baseline methods >> .,730,5 +731,The results of the experiments demonstrate that the HDAG Kernel is superior to other [[ kernel functions ]] and << baseline methods >> .,731,0 +732,We propose a solution to the challenge of the << CoNLL 2008 shared task >> that uses a [[ generative history-based latent variable model ]] to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies .,732,3 +733,We propose a solution to the challenge of the CoNLL 2008 shared task that uses a [[ generative history-based latent variable model ]] to predict the most likely derivation of a << synchronous dependency parser >> for both syntactic and semantic dependencies .,733,3 +734,We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a [[ synchronous dependency parser ]] for both << syntactic and semantic dependencies >> .,734,3 +735,"The submitted << model >> yields 79.1 % [[ macro-average F1 performance ]] , for the joint task , 86.9 % syntactic dependencies LAS and 71.0 % semantic dependencies F1 .",735,6 +736,"The submitted model yields 79.1 % [[ macro-average F1 performance ]] , for the joint << task >> , 86.9 % syntactic dependencies LAS and 71.0 % semantic dependencies F1 .",736,6 +737,"The submitted model yields 79.1 % macro-average F1 performance , for the joint << task >> , 86.9 % [[ syntactic dependencies LAS ]] and 71.0 % semantic dependencies F1 .",737,6 +738,"The submitted model yields 79.1 % macro-average F1 performance , for the joint task , 86.9 % [[ syntactic dependencies LAS ]] and 71.0 % << semantic dependencies F1 >> .",738,0 +739,"The submitted model yields 79.1 % macro-average F1 performance , for the joint << task >> , 86.9 % syntactic dependencies LAS and 71.0 % [[ semantic dependencies F1 ]] .",739,6 +740,"A larger << model >> trained after the deadline achieves 80.5 % [[ macro-average F1 ]] , 87.6 % syntactic dependencies LAS , and 73.1 % semantic dependencies F1 .",740,6 +741,"A larger model trained after the deadline achieves 80.5 % [[ macro-average F1 ]] , 87.6 % << syntactic dependencies LAS >> , and 73.1 % semantic dependencies F1 .",741,0 +742,"A larger << model >> trained after the deadline achieves 80.5 % macro-average F1 , 87.6 % [[ syntactic dependencies LAS ]] , and 73.1 % semantic dependencies F1 .",742,6 +743,"A larger model trained after the deadline achieves 80.5 % macro-average F1 , 87.6 % [[ syntactic dependencies LAS ]] , and 73.1 % << semantic dependencies F1 >> .",743,0 +744,"A larger << model >> trained after the deadline achieves 80.5 % macro-average F1 , 87.6 % syntactic dependencies LAS , and 73.1 % [[ semantic dependencies F1 ]] .",744,6 +745,We present an [[ approach ]] to annotating a level of << discourse structure >> that is based on identifying discourse connectives and their arguments .,745,3 +746,We present an << approach >> to annotating a level of discourse structure that is based on identifying [[ discourse connectives ]] and their arguments .,746,3 +747,"The [[ PDTB ]] is being built directly on top of the Penn TreeBank and Propbank , thus supporting the << extraction of useful syntactic and semantic features >> and providing a richer substrate for the 
development and evaluation of practical algorithms .",747,3 +748,"The [[ PDTB ]] is being built directly on top of the Penn TreeBank and Propbank , thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of << practical algorithms >> .",748,6 +749,"The << PDTB >> is being built directly on top of the [[ Penn TreeBank ]] and Propbank , thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms .",749,3 +750,"The PDTB is being built directly on top of the [[ Penn TreeBank ]] and << Propbank >> , thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms .",750,0 +751,"The << PDTB >> is being built directly on top of the Penn TreeBank and [[ Propbank ]] , thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms .",751,3 +752,We provide a detailed preliminary analysis of << inter-annotator agreement >> - both the [[ level of agreement ]] and the types of inter-annotator variation .,752,1 +753,We provide a detailed preliminary analysis of inter-annotator agreement - both the [[ level of agreement ]] and the types of << inter-annotator variation >> .,753,0 +754,We provide a detailed preliminary analysis of << inter-annotator agreement >> - both the level of agreement and the types of [[ inter-annotator variation ]] .,754,1 +755,"Currently , [[ N-gram models ]] are the most common and widely used models for << statistical language modeling >> .",755,3 +756,"In this paper , we investigated an alternative way to build language models , i.e. 
, using [[ artificial neural networks ]] to learn the << language model >> .",756,3 +757,Our experiment result shows that the [[ neural network ]] can learn a << language model >> that has performance even better than standard statistical methods .,757,3 +758,Our experiment result shows that the [[ neural network ]] can learn a language model that has performance even better than standard << statistical methods >> .,758,5 +759,Existing works in the field usually do not encode either the << temporal evolution >> or the [[ intensity of the observed facial displays ]] .,759,0 +760,"In this paper , << intrinsic topology of multidimensional continuous facial >> affect data is first modeled by an [[ ordinal man-ifold ]] .",760,3 +761,This [[ topology ]] is then incorporated into the << Hidden Conditional Ordinal Random Field -LRB- H-CORF -RRB- framework >> for dynamic ordinal regression by constraining H-CORF parameters to lie on the ordinal manifold .,761,4 +762,This topology is then incorporated into the [[ Hidden Conditional Ordinal Random Field -LRB- H-CORF -RRB- framework ]] for << dynamic ordinal regression >> by constraining H-CORF parameters to lie on the ordinal manifold .,762,3 +763,The resulting [[ model ]] attains << simultaneous dynamic recognition >> and intensity estimation of facial expressions of multiple emotions .,763,3 +764,The resulting [[ model ]] attains simultaneous dynamic recognition and << intensity estimation of facial expressions >> of multiple emotions .,764,3 +765,The resulting model attains [[ simultaneous dynamic recognition ]] and << intensity estimation of facial expressions >> of multiple emotions .,765,0 +766,"To the best of our knowledge , << the proposed method >> is the first one to achieve this on both deliberate as well as [[ spontaneous facial affect data ]] .",766,6 +767,"Recent advances in linear classification have shown that for << applications >> such as [[ document classification ]] , the training can be extremely efficient .",767,2 +768,These methods can not be easily applied to [[ data ]] larger than the << memory capacity >> due to the random access to the disk .,768,5 +769,We propose and analyze a [[ block minimization framework ]] for << data >> larger than the memory size .,769,3 +770,We propose and analyze a block minimization framework for [[ data ]] larger than the << memory size >> .,770,5 +771,"We investigate two implementations of the proposed [[ framework ]] for << primal and dual SVMs >> , respectively .",771,3 +772,This in turn affects the [[ accuracy ]] of << word sense disambiguation -LRB- WSD -RRB- systems >> trained and applied on different domains .,772,6 +773,"This paper presents a [[ method ]] to estimate the << sense priors of words >> drawn from a new domain , and highlights the importance of using well calibrated probabilities when performing these estimations .",773,3 +774,"This paper presents a method to estimate the << sense priors of words >> drawn from a [[ new domain ]] , and highlights the importance of using well calibrated probabilities when performing these estimations .",774,1 +775,"This paper presents a method to estimate the sense priors of words drawn from a new domain , and highlights the importance of using [[ well calibrated probabilities ]] when performing these << estimations >> .",775,3 +776,"By using [[ well calibrated probabilities ]] , we are able to estimate the << sense priors >> effectively to achieve significant improvements in WSD accuracy .",776,3 +777,"<< It >> was compiled from various 
resources such as [[ encyclopedias ]] and dictionaries , public databases of proper names and toponyms , collocations obtained from Czech WordNet , lists of botanical and zoological terms and others .",777,3 +778,"It was compiled from various resources such as [[ encyclopedias ]] and << dictionaries >> , public databases of proper names and toponyms , collocations obtained from Czech WordNet , lists of botanical and zoological terms and others .",778,0 +779,"<< It >> was compiled from various resources such as encyclopedias and [[ dictionaries ]] , public databases of proper names and toponyms , collocations obtained from Czech WordNet , lists of botanical and zoological terms and others .",779,3 +780,"<< It >> was compiled from various resources such as encyclopedias and dictionaries , [[ public databases of proper names and toponyms ]] , collocations obtained from Czech WordNet , lists of botanical and zoological terms and others .",780,3 +781,"<< It >> was compiled from various resources such as encyclopedias and dictionaries , public databases of proper names and toponyms , [[ collocations ]] obtained from Czech WordNet , lists of botanical and zoological terms and others .",781,3 +782,"<< It >> was compiled from various resources such as encyclopedias and dictionaries , public databases of proper names and toponyms , collocations obtained from Czech WordNet , [[ lists of botanical and zoological terms ]] and others .",782,3 +783,"It was compiled from various resources such as encyclopedias and dictionaries , public databases of proper names and toponyms , << collocations >> obtained from Czech WordNet , [[ lists of botanical and zoological terms ]] and others .",783,0 +784,We compare the built << MWEs database >> with the corpus data from [[ Czech National Corpus ]] -LRB- approx .,784,3 +785,"To obtain a more complete list of MWEs we propose and use a [[ technique ]] exploiting the << Word Sketch Engine >> , which allows us to work with statistical parameters such as frequency of MWEs and their components as well as with the salience for the whole MWEs .",785,3 +786,"To obtain a more complete list of MWEs we propose and use a technique exploiting the << Word Sketch Engine >> , which allows us to work with [[ statistical parameters ]] such as frequency of MWEs and their components as well as with the salience for the whole MWEs .",786,1 +787,We also discuss exploitation of the [[ database ]] for working out a more adequate << tagging >> and lemmatization .,787,3 +788,We also discuss exploitation of the [[ database ]] for working out a more adequate tagging and << lemmatization >> .,788,3 +789,We also discuss exploitation of the database for working out a more adequate [[ tagging ]] and << lemmatization >> .,789,0 +790,"The final goal is to be able to recognize [[ MWEs ]] in corpus text and lemmatize them as complete lexical units , i. e. to make << tagging >> and lemmatization more adequate .",790,3 +791,"The final goal is to be able to recognize [[ MWEs ]] in corpus text and lemmatize them as complete lexical units , i. e. to make tagging and << lemmatization >> more adequate .",791,3 +792,"The final goal is to be able to recognize MWEs in corpus text and lemmatize them as complete lexical units , i. e. to make [[ tagging ]] and << lemmatization >> more adequate .",792,0 +793,"We describe the ongoing construction of a large , [[ semantically annotated corpus ]] resource as reliable basis for the << large-scale acquisition of word-semantic information >> , e.g. 
the construction of domain-independent lexica .",793,3 +794,"We describe the ongoing construction of a large , semantically annotated corpus resource as reliable basis for the << large-scale acquisition of word-semantic information >> , e.g. the [[ construction of domain-independent lexica ]] .",794,2 +795,The backbone of the annotation are [[ semantic roles ]] in the << frame semantics paradigm >> .,795,4 +796,"On this basis , we discuss the problems of [[ vagueness ]] and << ambiguity >> in semantic annotation .",796,0 +797,"On this basis , we discuss the problems of [[ vagueness ]] and ambiguity in << semantic annotation >> .",797,1 +798,"On this basis , we discuss the problems of vagueness and [[ ambiguity ]] in << semantic annotation >> .",798,1 +799,[[ Statistical machine translation -LRB- SMT -RRB- ]] is currently one of the hot spots in << natural language processing >> .,799,2 +800,"Over the last few years dramatic improvements have been made , and a number of comparative evaluations have shown , that [[ SMT ]] gives competitive results to << rule-based translation systems >> , requiring significantly less development time .",800,5 +801,This is particularly important when building [[ translation systems ]] for << new language pairs >> or new domains .,801,3 +802,This is particularly important when building [[ translation systems ]] for new language pairs or << new domains >> .,802,3 +803,This is particularly important when building translation systems for [[ new language pairs ]] or << new domains >> .,803,0 +804,"[[ STTK ]] , a << statistical machine translation tool kit >> , will be introduced and used to build a working translation system .",804,2 +805,"[[ STTK ]] , a statistical machine translation tool kit , will be introduced and used to build a working << translation system >> .",805,3 +806,[[ STTK ]] has been developed by the presenter and co-workers over a number of years and is currently used as the basis of CMU 's << SMT system >> .,806,3 +807,[[ It ]] has also successfully been coupled with << rule-based and example based machine translation modules >> to build a multi engine machine translation system .,807,0 +808,[[ It ]] has also successfully been coupled with rule-based and example based machine translation modules to build a << multi engine machine translation system >> .,808,3 +809,It has also successfully been coupled with [[ rule-based and example based machine translation modules ]] to build a << multi engine machine translation system >> .,809,3 +810,This paper presents an [[ unsupervised learning approach ]] to building a << non-English -LRB- Arabic -RRB- stemmer >> .,810,3 +811,The << stemming model >> is based on [[ statistical machine translation ]] and it uses an English stemmer and a small -LRB- 10K sentences -RRB- parallel corpus as its sole training resources .,811,3 +812,The stemming model is based on statistical machine translation and << it >> uses an [[ English stemmer ]] and a small -LRB- 10K sentences -RRB- parallel corpus as its sole training resources .,812,3 +813,The stemming model is based on statistical machine translation and << it >> uses an English stemmer and a small -LRB- 10K sentences -RRB- [[ parallel corpus ]] as its sole training resources .,813,3 +814,"[[ Monolingual , unannotated text ]] can be used to further improve the << stemmer >> by allowing it to adapt to a desired domain or genre .",814,3 +815,"Our [[ resource-frugal approach ]] results in 87.5 % agreement with a state of the art , proprietary << Arabic stemmer >> built 
using rules , affix lists , and human annotated text , in addition to an unsupervised component .",815,5 +816,"Our << resource-frugal approach >> results in 87.5 % [[ agreement ]] with a state of the art , proprietary Arabic stemmer built using rules , affix lists , and human annotated text , in addition to an unsupervised component .",816,6 +817,"Our resource-frugal approach results in 87.5 % [[ agreement ]] with a state of the art , proprietary << Arabic stemmer >> built using rules , affix lists , and human annotated text , in addition to an unsupervised component .",817,6 +818,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary << Arabic stemmer >> built using [[ rules ]] , affix lists , and human annotated text , in addition to an unsupervised component .",818,3 +819,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary Arabic stemmer built using [[ rules ]] , << affix lists >> , and human annotated text , in addition to an unsupervised component .",819,0 +820,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary << Arabic stemmer >> built using rules , [[ affix lists ]] , and human annotated text , in addition to an unsupervised component .",820,3 +821,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary Arabic stemmer built using rules , [[ affix lists ]] , and << human annotated text >> , in addition to an unsupervised component .",821,0 +822,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary << Arabic stemmer >> built using rules , affix lists , and [[ human annotated text ]] , in addition to an unsupervised component .",822,3 +823,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary Arabic stemmer built using rules , affix lists , and [[ human annotated text ]] , in addition to an << unsupervised component >> .",823,0 +824,"Our resource-frugal approach results in 87.5 % agreement with a state of the art , proprietary << Arabic stemmer >> built using rules , affix lists , and human annotated text , in addition to an [[ unsupervised component ]] .",824,3 +825,"<< Task-based evaluation >> using [[ Arabic information retrieval ]] indicates an improvement of 22-38 % in average precision over unstemmed text , and 96 % of the performance of the proprietary stemmer above .",825,3 +826,"<< Task-based evaluation >> using Arabic information retrieval indicates an improvement of 22-38 % in [[ average precision ]] over unstemmed text , and 96 % of the performance of the proprietary stemmer above .",826,6 +827,"Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38 % in [[ average precision ]] over << unstemmed text >> , and 96 % of the performance of the proprietary stemmer above .",827,6 +828,The paper assesses the capability of an [[ HMM-based TTS system ]] to produce << German speech >> .,828,3 +829,"In addition , the [[ system ]] is adapted to a small set of << football announcements >> , in an exploratory attempt to synthe-sise expressive speech .",829,3 +830,"In addition , the [[ system ]] is adapted to a small set of football announcements , in an exploratory attempt to synthe-sise << expressive speech >> .",830,3 +831,"We conclude that the [[ HMMs ]] are able to produce highly << intelligible neutral German speech >> , with a stable quality , and that the expressivity is partially captured in 
spite of the small size of the football dataset .",831,3 +832,"Furthermore , in contrast to the approach of Dalrymple et al. -LSB- 1991 -RSB- , the treatment directly encodes the intuitive distinction between [[ full NPs ]] and the << referential elements >> that corefer with them through what we term role linking .",832,0 +833,"Finally , the [[ analysis ]] extends directly to other << discourse copying phenomena >> .",833,3 +834,"How to obtain [[ hierarchical relations ]] -LRB- e.g. superordinate - hyponym relation , synonym relation -RRB- is one of the most important problems for << thesaurus construction >> .",834,4 +835,"How to obtain << hierarchical relations >> -LRB- e.g. [[ superordinate - hyponym relation ]] , synonym relation -RRB- is one of the most important problems for thesaurus construction .",835,2 +836,"How to obtain hierarchical relations -LRB- e.g. [[ superordinate - hyponym relation ]] , << synonym relation >> -RRB- is one of the most important problems for thesaurus construction .",836,0 +837,"How to obtain << hierarchical relations >> -LRB- e.g. superordinate - hyponym relation , [[ synonym relation ]] -RRB- is one of the most important problems for thesaurus construction .",837,2 +838,"A pilot system for extracting these << relations >> automatically from an ordinary [[ Japanese language dictionary ]] -LRB- Shinmeikai Kokugojiten , published by Sansei-do , in machine readable form -RRB- is given .",838,3 +839,"The << features >> of the [[ definition sentences ]] in the dictionary , the mechanical extraction of the hierarchical relations and the estimation of the results are discussed .",839,3 +840,"The features of the [[ definition sentences ]] in the << dictionary >> , the mechanical extraction of the hierarchical relations and the estimation of the results are discussed .",840,4 +841,This is evident most compellingly by the very low [[ recognition rate ]] of all existing << face recognition systems >> when applied to live CCTV camera input .,841,6 +842,This is evident most compellingly by the very low recognition rate of all existing << face recognition systems >> when applied to [[ live CCTV camera input ]] .,842,3 +843,"In this paper , we present a [[ Bayesian framework ]] to perform multi-modal -LRB- such as variations in viewpoint and illumination -RRB- << face image super-resolution >> for recognition in tensor space .",843,3 +844,"In this paper , we present a Bayesian framework to perform multi-modal -LRB- such as variations in [[ viewpoint ]] and << illumination >> -RRB- face image super-resolution for recognition in tensor space .",844,0 +845,"In this paper , we present a Bayesian framework to perform multi-modal -LRB- such as variations in viewpoint and illumination -RRB- [[ face image super-resolution ]] for << recognition >> in tensor space .",845,3 +846,"In this paper , we present a Bayesian framework to perform multi-modal -LRB- such as variations in viewpoint and illumination -RRB- face image super-resolution for << recognition >> in [[ tensor space ]] .",846,1 +847,"Given a [[ single modal low-resolution face image ]] , we benefit from the multiple factor interactions of training tensor , and super-resolve its << high-resolution reconstructions >> across different modalities for face recognition .",847,3 +848,"Given a single modal low-resolution face image , we benefit from the [[ multiple factor interactions of training tensor ]] , and super-resolve its << high-resolution reconstructions >> across different modalities for face recognition .",848,3 
+849,"Given a single modal low-resolution face image , we benefit from the multiple factor interactions of training tensor , and super-resolve its [[ high-resolution reconstructions ]] across different modalities for << face recognition >> .",849,3 +850,"Given a single modal low-resolution face image , we benefit from the multiple factor interactions of training tensor , and super-resolve its << high-resolution reconstructions >> across different [[ modalities ]] for face recognition .",850,1 +851,"Instead of performing << pixel-domain super-resolution and recognition >> independently as two separate sequential processes , we integrate the tasks of [[ super-resolution ]] and recognition by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition .",851,2 +852,"Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes , we integrate the tasks of [[ super-resolution ]] and << recognition >> by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition .",852,0 +853,"Instead of performing << pixel-domain super-resolution and recognition >> independently as two separate sequential processes , we integrate the tasks of super-resolution and [[ recognition ]] by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition .",853,2 +854,"Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes , we integrate the tasks of << super-resolution >> and recognition by directly computing a [[ maximum likelihood identity parameter vector ]] in high-resolution tensor space for recognition .",854,3 +855,"Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes , we integrate the tasks of super-resolution and << recognition >> by directly computing a [[ maximum likelihood identity parameter vector ]] in high-resolution tensor space for recognition .",855,3 +856,"Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes , we integrate the tasks of super-resolution and recognition by directly computing a [[ maximum likelihood identity parameter vector ]] in high-resolution tensor space for << recognition >> .",856,3 +857,"Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes , we integrate the tasks of super-resolution and recognition by directly computing a << maximum likelihood identity parameter vector >> in [[ high-resolution tensor space ]] for recognition .",857,1 +858,"We show results from << multi-modal super-resolution and face recognition >> experiments across different imaging modalities , using [[ low-resolution images ]] as testing inputs and demonstrate improved recognition rates over standard tensorface and eigenface representations .",858,3 +859,"We show results from << multi-modal super-resolution and face recognition >> experiments across different imaging modalities , using low-resolution images as testing inputs and demonstrate improved [[ recognition rates ]] over standard tensorface and eigenface representations .",859,6 +860,"We show results from multi-modal super-resolution and face recognition experiments across different imaging modalities , using low-resolution images as testing inputs and demonstrate improved [[ 
recognition rates ]] over standard << tensorface and eigenface representations >> .",860,6 +861,"In this paper , we describe a [[ phrase-based unigram model ]] for << statistical machine translation >> that uses a much simpler set of model parameters than similar phrase-based models .",861,3 +862,"In this paper , we describe a [[ phrase-based unigram model ]] for statistical machine translation that uses a much simpler set of model parameters than similar << phrase-based models >> .",862,5 +863,"In this paper , we describe a << phrase-based unigram model >> for statistical machine translation that uses a much simpler set of [[ model parameters ]] than similar phrase-based models .",863,3 +864,"During << decoding >> , we use a [[ block unigram model ]] and a word-based trigram language model .",864,3 +865,"During << decoding >> , we use a block unigram model and a [[ word-based trigram language model ]] .",865,3 +866,"During decoding , we use a << block unigram model >> and a [[ word-based trigram language model ]] .",866,0 +867,"During training , the << blocks >> are learned from [[ source interval projections ]] using an underlying word alignment .",867,3 +868,"During training , the blocks are learned from << source interval projections >> using an underlying [[ word alignment ]] .",868,3 +869,We show experimental results on << block selection criteria >> based on [[ unigram counts ]] and phrase length .,869,3 +870,We show experimental results on block selection criteria based on [[ unigram counts ]] and << phrase length >> .,870,0 +871,We show experimental results on << block selection criteria >> based on unigram counts and [[ phrase length ]] .,871,3 +872,This paper develops a new [[ approach ]] for extremely << fast detection >> in domains where the distribution of positive and negative examples is highly skewed -LRB- e.g. face detection or database retrieval -RRB- .,872,3 +873,This paper develops a new approach for extremely fast detection in domains where the distribution of positive and negative examples is highly skewed -LRB- e.g. 
[[ face detection ]] or << database retrieval >> -RRB- .,873,0 +874,"In such domains a [[ cascade of simple classifiers ]] each trained to achieve high detection rates and modest false positive rates can yield a final << detector >> with many desirable features : including high detection rates , very low false positive rates , and fast performance .",874,3 +875,"In such domains a cascade of simple << classifiers >> each trained to achieve high [[ detection rates ]] and modest false positive rates can yield a final detector with many desirable features : including high detection rates , very low false positive rates , and fast performance .",875,6 +876,"In such domains a cascade of simple classifiers each trained to achieve high [[ detection rates ]] and << modest false positive rates >> can yield a final detector with many desirable features : including high detection rates , very low false positive rates , and fast performance .",876,0 +877,"In such domains a cascade of simple << classifiers >> each trained to achieve high detection rates and [[ modest false positive rates ]] can yield a final detector with many desirable features : including high detection rates , very low false positive rates , and fast performance .",877,6 +878,"In such domains a cascade of simple classifiers each trained to achieve high detection rates and modest false positive rates can yield a final << detector >> with many desirable [[ features ]] : including high detection rates , very low false positive rates , and fast performance .",878,1 +879,"Achieving extremely high [[ detection rates ]] , rather than << low error >> , is not a task typically addressed by machine learning algorithms .",879,5 +880,We propose a new variant of [[ AdaBoost ]] as a mechanism for training the simple << classifiers >> used in the cascade .,880,3 +881,We propose a new variant of AdaBoost as a mechanism for training the simple [[ classifiers ]] used in the << cascade >> .,881,3 +882,Experimental results in the domain of << face detection >> show the [[ training algorithm ]] yields significant improvements in performance over conventional AdaBoost .,882,3 +883,Experimental results in the domain of << face detection >> show the training algorithm yields significant improvements in performance over conventional [[ AdaBoost ]] .,883,3 +884,Experimental results in the domain of face detection show the << training algorithm >> yields significant improvements in performance over conventional [[ AdaBoost ]] .,884,5 +885,"The final face detection system can process 15 frames per second , achieves over 90 % [[ detection ]] , and a << false positive rate >> of 1 in a 1,000,000 .",885,0 +886,This paper proposes a [[ method ]] for learning << joint embed-dings of images and text >> using a two-branch neural network with multiple layers of linear projections followed by nonlinearities .,886,3 +887,This paper proposes a << method >> for learning joint embed-dings of images and text using a [[ two-branch neural network ]] with multiple layers of linear projections followed by nonlinearities .,887,3 +888,This paper proposes a method for learning joint embed-dings of images and text using a << two-branch neural network >> with [[ multiple layers of linear projections ]] followed by nonlinearities .,888,4 +889,This paper proposes a method for learning joint embed-dings of images and text using a two-branch neural network with [[ multiple layers of linear projections ]] followed by << nonlinearities >> .,889,0 +890,This paper proposes a method for 
learning joint embed-dings of images and text using a << two-branch neural network >> with multiple layers of linear projections followed by [[ nonlinearities ]] .,890,4 +891,The << network >> is trained using a [[ large-margin objective ]] that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature .,891,3 +892,The network is trained using a << large-margin objective >> that combines [[ cross-view ranking constraints ]] with within-view neighborhood structure preservation constraints inspired by metric learning literature .,892,1 +893,The network is trained using a large-margin objective that combines [[ cross-view ranking constraints ]] with << within-view neighborhood structure preservation constraints >> inspired by metric learning literature .,893,0 +894,The network is trained using a << large-margin objective >> that combines cross-view ranking constraints with [[ within-view neighborhood structure preservation constraints ]] inspired by metric learning literature .,894,1 +895,Extensive experiments show that our << approach >> gains significant improvements in [[ accuracy ]] for image-to-text and text-to-image retrieval .,895,6 +896,Extensive experiments show that our << approach >> gains significant improvements in accuracy for [[ image-to-text and text-to-image retrieval ]] .,896,6 +897,Our << method >> achieves new state-of-the-art results on the [[ Flickr30K and MSCOCO image-sentence datasets ]] and shows promise on the new task of phrase lo-calization on the Flickr30K Entities dataset .,897,6 +898,Our << method >> achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of [[ phrase lo-calization ]] on the Flickr30K Entities dataset .,898,6 +899,Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of << phrase lo-calization >> on the [[ Flickr30K Entities dataset ]] .,899,3 +900,We investigate that claim by adopting a simple [[ MT-based paraphrasing technique ]] and evaluating << QA system >> performance on paraphrased questions .,900,3 +901,We investigate that claim by adopting a simple MT-based paraphrasing technique and evaluating << QA system >> performance on [[ paraphrased questions ]] .,901,6 +902,"The << TAP-XL Automated Analyst 's Assistant >> is an application designed to help an English - speaking analyst write a topical report , culling information from a large inflow of [[ multilingual , multimedia data ]] .",902,3 +903,"<< It >> gives users the ability to spend their time finding more data relevant to their task , and gives them translingual reach into other languages by leveraging [[ human language technology ]] .",903,3 +904,This paper discusses the application of [[ Unification Categorial Grammar -LRB- UCG -RRB- ]] to the framework of << Isomorphic Grammars >> for Machine Translation pioneered by Landsbergen .,904,3 +905,This paper discusses the application of Unification Categorial Grammar -LRB- UCG -RRB- to the framework of [[ Isomorphic Grammars ]] for << Machine Translation >> pioneered by Landsbergen .,905,3 +906,"The [[ Isomorphic Grammars approach ]] to << MT >> involves developing the grammars of the Source and Target languages in parallel , in order to ensure that SL and TL expressions which stand in the translation relation have isomorphic derivations .",906,3 +907,"After introducing this [[ approach ]] to << MT system 
design >> , and the basics of monolingual UCG , we will show how the two can be integrated , and present an example from an implemented bi-directional English-Spanish fragment .",907,3 +908,"After introducing this [[ approach ]] to MT system design , and the basics of << monolingual UCG >> , we will show how the two can be integrated , and present an example from an implemented bi-directional English-Spanish fragment .",908,3 +909,"After introducing this approach to [[ MT system design ]] , and the basics of << monolingual UCG >> , we will show how the two can be integrated , and present an example from an implemented bi-directional English-Spanish fragment .",909,0 +910,"After introducing this approach to [[ MT system design ]] , and the basics of monolingual UCG , we will show how the << two >> can be integrated , and present an example from an implemented bi-directional English-Spanish fragment .",910,2 +911,"After introducing this approach to MT system design , and the basics of [[ monolingual UCG ]] , we will show how the << two >> can be integrated , and present an example from an implemented bi-directional English-Spanish fragment .",911,2 +912,In the << security domain >> a key problem is [[ identifying rare behaviours of interest ]] .,912,4 +913,"[[ Training examples ]] for these << behaviours >> may or may not exist , and if they do exist there will be few examples , quite probably one .",913,3 +914,We present a novel [[ weakly supervised algorithm ]] that can detect << behaviours >> that either have never before been seen or for which there are few examples .,914,3 +915,"[[ Global context ]] is modelled , allowing the << detection of abnormal behaviours >> that in isolation appear normal .",915,3 +916,"We have developed a [[ computational model ]] of the process of describing the layout of an apartment or house , a much-studied << discourse task >> first characterized linguistically by Linde -LRB- 1974 -RRB- .",916,3 +917,"The [[ model ]] is embodied in a << program >> , APT , that can reproduce segments of actual tape-recorded descriptions , using organizational and discourse strategies derived through analysis of our corpus .",917,4 +918,"The model is embodied in a program , << APT >> , that can reproduce segments of actual tape-recorded descriptions , using [[ organizational and discourse strategies ]] derived through analysis of our corpus .",918,3 +919,This paper proposes a practical [[ approach ]] employing n-gram models and error-correction rules for << Thai key prediction >> and Thai-English language identification .,919,3 +920,This paper proposes a practical [[ approach ]] employing n-gram models and error-correction rules for Thai key prediction and << Thai-English language identification >> .,920,3 +921,This paper proposes a practical << approach >> employing [[ n-gram models ]] and error-correction rules for Thai key prediction and Thai-English language identification .,921,3 +922,This paper proposes a practical approach employing [[ n-gram models ]] and << error-correction rules >> for Thai key prediction and Thai-English language identification .,922,0 +923,This paper proposes a practical << approach >> employing n-gram models and [[ error-correction rules ]] for Thai key prediction and Thai-English language identification .,923,3 +924,This paper proposes a practical approach employing n-gram models and error-correction rules for [[ Thai key prediction ]] and << Thai-English language identification >> .,924,0 +925,The paper also proposes << rule-reduction algorithm 
>> applying [[ mutual information ]] to reduce the error-correction rules .,925,3 +926,The paper also proposes rule-reduction algorithm applying [[ mutual information ]] to reduce the << error-correction rules >> .,926,3 +927,Our [[ algorithm ]] reported more than 99 % accuracy in both << language identification >> and key prediction .,927,3 +928,Our [[ algorithm ]] reported more than 99 % accuracy in both language identification and << key prediction >> .,928,3 +929,Our << algorithm >> reported more than 99 % [[ accuracy ]] in both language identification and key prediction .,929,6 +930,This paper concerns the [[ discourse understanding process ]] in << spoken dialogue systems >> .,930,3 +931,This process enables the [[ system ]] to understand << user utterances >> based on the context of a dialogue .,931,3 +932,This paper proposes a [[ method ]] for resolving this << ambiguity >> based on statistical information obtained from dialogue corpora .,932,3 +933,This paper proposes a << method >> for resolving this ambiguity based on [[ statistical information ]] obtained from dialogue corpora .,933,3 +934,This paper proposes a method for resolving this ambiguity based on << statistical information >> obtained from [[ dialogue corpora ]] .,934,3 +935,"Unlike conventional << methods >> that use [[ hand-crafted rules ]] , the proposed method enables easy design of the discourse understanding process .",935,3 +936,Experiment results have shown that a [[ system ]] that exploits the proposed << method >> performs sufficiently and that holding multiple candidates for understanding results is effective .,936,3 +937,We consider the problem of << question-focused sentence retrieval >> from complex [[ news articles ]] describing multi-event stories published over time .,937,3 +938,We consider the problem of question-focused sentence retrieval from complex << news articles >> describing [[ multi-event stories ]] published over time .,938,1 +939,"To address the << sentence retrieval problem >> , we apply a [[ stochastic , graph-based method ]] for comparing the relative importance of the textual units , which was previously used successfully for generic summarization .",939,3 +940,"To address the sentence retrieval problem , we apply a [[ stochastic , graph-based method ]] for comparing the relative importance of the textual units , which was previously used successfully for << generic summarization >> .",940,3 +941,"Currently , we present a topic-sensitive version of our method and hypothesize that << it >> can outperform a competitive [[ baseline ]] , which compares the similarity of each sentence to the input question via IDF-weighted word overlap .",941,5 +942,"In our experiments , the [[ method ]] achieves a TRDR score that is significantly higher than that of the << baseline >> .",942,5 +943,"In our experiments , the << method >> achieves a [[ TRDR score ]] that is significantly higher than that of the baseline .",943,6 +944,"In our experiments , the method achieves a [[ TRDR score ]] that is significantly higher than that of the << baseline >> .",944,6 +945,"This paper proposes that << sentence analysis >> should be treated as [[ defeasible reasoning ]] , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a formalization of defeasible reasoning , that includes arguments and defeat rules that capture defeasibility .",945,3 +946,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a [[ 
treatment ]] for << Japanese sentence analyses >> using an argumentation system by Konolige , which is a formalization of defeasible reasoning , that includes arguments and defeat rules that capture defeasibility .",946,3 +947,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for << Japanese sentence analyses >> using an [[ argumentation system ]] by Konolige , which is a formalization of defeasible reasoning , that includes arguments and defeat rules that capture defeasibility .",947,3 +948,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a << formalization of defeasible reasoning >> , that includes [[ arguments ]] and defeat rules that capture defeasibility .",948,4 +949,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a formalization of defeasible reasoning , that includes [[ arguments ]] and << defeat rules >> that capture defeasibility .",949,0 +950,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a << formalization of defeasible reasoning >> , that includes arguments and [[ defeat rules ]] that capture defeasibility .",950,4 +951,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a formalization of defeasible reasoning , that includes << arguments >> and defeat rules that capture [[ defeasibility ]] .",951,1 +952,"This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a formalization of defeasible reasoning , that includes arguments and << defeat rules >> that capture [[ defeasibility ]] .",952,1 +953,"It gives an overview of [[ methods ]] used for << visual speech animation >> , parameterization of a human face and a tongue , necessary data sources and a synthesis method .",953,3 +954,A [[ 3D animation model ]] is used for a << pseudo-muscular animation schema >> to create such animation of visual speech which is usable for a lipreading .,954,3 +955,A 3D animation model is used for a [[ pseudo-muscular animation schema ]] to create such << animation of visual speech >> which is usable for a lipreading .,955,3 +956,A 3D animation model is used for a pseudo-muscular animation schema to create such [[ animation of visual speech ]] which is usable for a << lipreading >> .,956,3 +957,"Furthermore , a problem of [[ forming articulatory trajectories ]] is formulated to solve << labial coarticulation effects >> .",957,3 +958,[[ It ]] is used for the << synthesis method >> based on a selection of articulatory targets and interpolation technique .,958,3 +959,<< It >> is used for the synthesis method based on a [[ selection of articulatory targets ]] and interpolation technique .,959,3 +960,It is used for the synthesis method based on a [[ selection of articulatory targets ]] and << interpolation technique >> .,960,0 +961,<< It >> is used for the synthesis method based on a selection 
of articulatory targets and [[ interpolation technique ]] .,961,3 +962,"However , our experience with TACITUS ; especially in the MUC-3 evaluation , has shown that principled [[ techniques ]] for << syntactic and pragmatic analysis >> can be bolstered with methods for achieving robustness .",962,3 +963,"However , our experience with TACITUS ; especially in the MUC-3 evaluation , has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with << methods >> for achieving [[ robustness ]] .",963,6 +964,"We describe [[ three techniques ]] for making << syntactic analysis >> more robust -- an agenda-based scheduling parser , a recovery technique for failed parses , and a new technique called terminal substring parsing .",964,3 +965,"We describe << three techniques >> for making syntactic analysis more robust -- an [[ agenda-based scheduling parser ]] , a recovery technique for failed parses , and a new technique called terminal substring parsing .",965,2 +966,"We describe three techniques for making syntactic analysis more robust -- an [[ agenda-based scheduling parser ]] , a << recovery technique >> for failed parses , and a new technique called terminal substring parsing .",966,0 +967,"We describe << three techniques >> for making syntactic analysis more robust -- an agenda-based scheduling parser , a [[ recovery technique ]] for failed parses , and a new technique called terminal substring parsing .",967,2 +968,"We describe three techniques for making << syntactic analysis >> more robust -- an agenda-based scheduling parser , a [[ recovery technique ]] for failed parses , and a new technique called terminal substring parsing .",968,3 +969,"We describe three techniques for making syntactic analysis more robust -- an agenda-based scheduling parser , a [[ recovery technique ]] for << failed parses >> , and a new technique called terminal substring parsing .",969,3 +970,"We describe three techniques for making syntactic analysis more robust -- an agenda-based scheduling parser , a [[ recovery technique ]] for failed parses , and a new << technique >> called terminal substring parsing .",970,0 +971,"We describe << three techniques >> for making syntactic analysis more robust -- an agenda-based scheduling parser , a recovery technique for failed parses , and a new [[ technique ]] called terminal substring parsing .",971,2 +972,"For << pragmatics processing >> , we describe how the method of [[ abductive inference ]] is inherently robust , in that an interpretation is always possible , so that in the absence of the required world knowledge , performance degrades gracefully .",972,3 +973,"This paper proposes a [[ Hidden Markov Model -LRB- HMM -RRB- ]] and an << HMM-based chunk tagger >> , from which a named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system is built to recognize and classify names , times and numerical quantities .",973,0 +974,"This paper proposes a [[ Hidden Markov Model -LRB- HMM -RRB- ]] and an HMM-based chunk tagger , from which a << named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system >> is built to recognize and classify names , times and numerical quantities .",974,3 +975,"This paper proposes a Hidden Markov Model -LRB- HMM -RRB- and an [[ HMM-based chunk tagger ]] , from which a << named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system >> is built to recognize and classify names , times and numerical quantities .",975,3 +976,"This paper proposes a Hidden Markov Model -LRB- HMM -RRB- and an HMM-based chunk tagger , from 
which a [[ named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system ]] is built to recognize and classify << names >> , times and numerical quantities .",976,3 +977,"This paper proposes a Hidden Markov Model -LRB- HMM -RRB- and an HMM-based chunk tagger , from which a [[ named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system ]] is built to recognize and classify names , << times and numerical quantities >> .",977,3 +978,"This paper proposes a Hidden Markov Model -LRB- HMM -RRB- and an HMM-based chunk tagger , from which a named entity -LRB- NE -RRB- recognition -LRB- NER -RRB- system is built to recognize and classify [[ names ]] , << times and numerical quantities >> .",978,0 +979,"Through the HMM , our system is able to apply and integrate four types of internal and external evidences : 1 -RRB- simple << deterministic internal feature of the words >> , such as [[ capitalization ]] and digitalization ; 2 -RRB- internal semantic feature of important triggers ; 3 -RRB- internal gazetteer feature ; 4 -RRB- external macro context feature .",979,2 +980,"Through the HMM , our system is able to apply and integrate four types of internal and external evidences : 1 -RRB- simple deterministic internal feature of the words , such as [[ capitalization ]] and << digitalization >> ; 2 -RRB- internal semantic feature of important triggers ; 3 -RRB- internal gazetteer feature ; 4 -RRB- external macro context feature .",980,0 +981,"Through the HMM , our system is able to apply and integrate four types of internal and external evidences : 1 -RRB- simple << deterministic internal feature of the words >> , such as capitalization and [[ digitalization ]] ; 2 -RRB- internal semantic feature of important triggers ; 3 -RRB- internal gazetteer feature ; 4 -RRB- external macro context feature .",981,2 +982,Evaluation of our << system >> on [[ MUC-6 and MUC-7 English NE tasks ]] achieves F-measures of 96.6 % and 94.1 % respectively .,982,6 +983,Evaluation of our << system >> on MUC-6 and MUC-7 English NE tasks achieves [[ F-measures ]] of 96.6 % and 94.1 % respectively .,983,6 +984,Two [[ themes ]] have evolved in << speech and text image processing >> work at Xerox PARC that expand and redefine the role of recognition technology in document-oriented applications .,984,4 +985,Two themes have evolved in speech and text image processing work at Xerox PARC that expand and redefine the role of [[ recognition technology ]] in << document-oriented applications >> .,985,3 +986,One is the development of [[ systems ]] that provide functionality similar to that of << text processors >> but operate directly on audio and scanned image data .,986,0 +987,One is the development of << systems >> that provide functionality similar to that of text processors but operate directly on [[ audio and scanned image data ]] .,987,3 +988,"A second , related << theme >> is the use of [[ speech and text-image recognition ]] to retrieve arbitrary , user-specified information from documents with signal content .",988,3 +989,"A second , related theme is the use of << speech and text-image recognition >> to retrieve arbitrary , user-specified information from [[ documents with signal content ]] .",989,3 +990,"This paper discusses three << research >> initiatives at PARC that exemplify these themes : a [[ text-image editor ]] -LSB- 1 -RSB- , a wordspotter for voice editing and indexing -LSB- 12 -RSB- , and a decoding framework for scanned-document content retrieval -LSB- 4 -RSB- .",990,2 +991,"This paper discusses three research 
initiatives at PARC that exemplify these themes : a [[ text-image editor ]] -LSB- 1 -RSB- , a << wordspotter >> for voice editing and indexing -LSB- 12 -RSB- , and a decoding framework for scanned-document content retrieval -LSB- 4 -RSB- .",991,0 +992,"This paper discusses three << research >> initiatives at PARC that exemplify these themes : a text-image editor -LSB- 1 -RSB- , a [[ wordspotter ]] for voice editing and indexing -LSB- 12 -RSB- , and a decoding framework for scanned-document content retrieval -LSB- 4 -RSB- .",992,2 +993,"This paper discusses three research initiatives at PARC that exemplify these themes : a text-image editor -LSB- 1 -RSB- , a [[ wordspotter ]] for << voice editing and indexing >> -LSB- 12 -RSB- , and a decoding framework for scanned-document content retrieval -LSB- 4 -RSB- .",993,0 +994,"This paper discusses three research initiatives at PARC that exemplify these themes : a text-image editor -LSB- 1 -RSB- , a wordspotter for [[ voice editing and indexing ]] -LSB- 12 -RSB- , and a << decoding framework >> for scanned-document content retrieval -LSB- 4 -RSB- .",994,0 +995,"This paper discusses three << research >> initiatives at PARC that exemplify these themes : a text-image editor -LSB- 1 -RSB- , a wordspotter for voice editing and indexing -LSB- 12 -RSB- , and a [[ decoding framework ]] for scanned-document content retrieval -LSB- 4 -RSB- .",995,2 +996,"This paper discusses three research initiatives at PARC that exemplify these themes : a text-image editor -LSB- 1 -RSB- , a wordspotter for voice editing and indexing -LSB- 12 -RSB- , and a [[ decoding framework ]] for << scanned-document content retrieval >> -LSB- 4 -RSB- .",996,3 +997,The problem of << predicting image or video interestingness >> from their [[ low-level feature representations ]] has received increasing interest .,997,3 +998,"To make the annotation less subjective and more reliable , recent studies employ [[ crowdsourcing tools ]] to collect << pairwise comparisons >> -- relying on majority voting to prune the annotation outliers/errors .",998,3 +999,"To make the annotation less subjective and more reliable , recent studies employ << crowdsourcing tools >> to collect pairwise comparisons -- relying on [[ majority voting ]] to prune the annotation outliers/errors .",999,3 +1000,"To make the annotation less subjective and more reliable , recent studies employ crowdsourcing tools to collect pairwise comparisons -- relying on [[ majority voting ]] to prune the << annotation outliers/errors >> .",1000,3 +1001,"In this paper , we propose a more principled [[ way ]] to identify << annotation outliers >> by formulating the interestingness prediction task as a unified robust learning to rank problem , tackling both the outlier detection and interestingness prediction tasks jointly .",1001,3 +1002,"In this paper , we propose a more principled [[ way ]] to identify annotation outliers by formulating the interestingness prediction task as a unified robust learning to rank problem , tackling both the << outlier detection >> and interestingness prediction tasks jointly .",1002,3 +1003,"In this paper , we propose a more principled [[ way ]] to identify annotation outliers by formulating the interestingness prediction task as a unified robust learning to rank problem , tackling both the outlier detection and << interestingness prediction tasks >> jointly .",1003,3 +1004,"In this paper , we propose a more principled way to identify << annotation outliers >> by formulating the [[ interestingness prediction 
task ]] as a unified robust learning to rank problem , tackling both the outlier detection and interestingness prediction tasks jointly .",1004,3 +1005,"In this paper , we propose a more principled way to identify annotation outliers by formulating the << interestingness prediction task >> as a [[ unified robust learning ]] to rank problem , tackling both the outlier detection and interestingness prediction tasks jointly .",1005,3 +1006,"In this paper , we propose a more principled way to identify annotation outliers by formulating the interestingness prediction task as a [[ unified robust learning ]] to << rank problem >> , tackling both the outlier detection and interestingness prediction tasks jointly .",1006,3 +1007,"In this paper , we propose a more principled way to identify annotation outliers by formulating the interestingness prediction task as a unified robust learning to rank problem , tackling both the [[ outlier detection ]] and << interestingness prediction tasks >> jointly .",1007,0 +1008,Extensive experiments on both [[ image and video interestingness benchmark datasets ]] demonstrate that our new << approach >> significantly outperforms state-of-the-art alternatives .,1008,6 +1009,Extensive experiments on both image and video interestingness benchmark datasets demonstrate that our new [[ approach ]] significantly outperforms << state-of-the-art alternatives >> .,1009,5 +1010,"Many << description logics -LRB- DLs -RRB- >> combine [[ knowledge representation ]] on an abstract , logical level with an interface to `` concrete '' domains such as numbers and strings .",1010,0 +1011,We describe an implementation of [[ data-driven selection ]] of emphatic facial displays for an << embodied conversational agent >> in a dialogue system .,1011,3 +1012,We describe an implementation of << data-driven selection >> of [[ emphatic facial displays ]] for an embodied conversational agent in a dialogue system .,1012,3 +1013,We describe an implementation of data-driven selection of emphatic facial displays for an [[ embodied conversational agent ]] in a << dialogue system >> .,1013,4 +1014,"The [[ data ]] from those recordings was used in a range of << models >> for generating facial displays , each model making use of a different amount of context or choosing displays differently within a context .",1014,3 +1015,"The data from those recordings was used in a range of [[ models ]] for generating << facial displays >> , each model making use of a different amount of context or choosing displays differently within a context .",1015,3 +1016,"The << models >> were evaluated in two ways : by [[ cross-validation ]] against the corpus , and by asking users to rate the output .",1016,6 +1017,"When << classifying high-dimensional sequence data >> , traditional methods -LRB- e.g. , [[ HMMs ]] , CRFs -RRB- may require large amounts of training data to avoid overfitting .",1017,3 +1018,"When classifying high-dimensional sequence data , traditional methods -LRB- e.g. , [[ HMMs ]] , << CRFs >> -RRB- may require large amounts of training data to avoid overfitting .",1018,0 +1019,"When << classifying high-dimensional sequence data >> , traditional methods -LRB- e.g. 
, HMMs , [[ CRFs ]] -RRB- may require large amounts of training data to avoid overfitting .",1019,3 +1020,In such cases [[ dimensionality reduction ]] can be employed to find a << low-dimensional representation >> on which classification can be done more efficiently .,1020,3 +1021,In such cases dimensionality reduction can be employed to find a [[ low-dimensional representation ]] on which << classification >> can be done more efficiently .,1021,3 +1022,"[[ Existing methods ]] for << supervised dimensionality reduction >> often presume that the data is densely sampled so that a neighborhood graph structure can be formed , or that the data arises from a known distribution .",1022,3 +1023,[[ Sufficient dimension reduction techniques ]] aim to find a << low dimensional representation >> such that the remaining degrees of freedom become conditionally independent of the output values .,1023,3 +1024,"Spatial , temporal and periodic information is combined in a principled manner , and an optimal [[ manifold ]] is learned for the << end-task >> .",1024,3 +1025,"We demonstrate the effectiveness of our << approach >> on several tasks involving the [[ discrimination of human gesture and motion categories ]] , as well as on a database of dynamic textures .",1025,6 +1026,"We demonstrate the effectiveness of our << approach >> on several tasks involving the discrimination of human gesture and motion categories , as well as on a [[ database of dynamic textures ]] .",1026,6 +1027,We present an efficient [[ algorithm ]] for << chart-based phrase structure parsing >> of natural language that is tailored to the problem of extracting specific information from unrestricted texts where many of the words are unknown and much of the text is irrelevant to the task .,1027,3 +1028,We present an efficient algorithm for << chart-based phrase structure parsing >> of [[ natural language ]] that is tailored to the problem of extracting specific information from unrestricted texts where many of the words are unknown and much of the text is irrelevant to the task .,1028,3 +1029,"This is facilitated through the use of << phrase boundary heuristics >> based on the placement of [[ function words ]] , and by heuristic rules that permit certain kinds of phrases to be deduced despite the presence of unknown words .",1029,3 +1030,"A further << reduction in the search space >> is achieved by using [[ semantic ]] rather than syntactic categories on the terminal and non-terminal edges , thereby reducing the amount of ambiguity and thus the number of edges , since only edges with a valid semantic interpretation are ever introduced .",1030,3 +1031,"A further reduction in the search space is achieved by using [[ semantic ]] rather than << syntactic categories >> on the terminal and non-terminal edges , thereby reducing the amount of ambiguity and thus the number of edges , since only edges with a valid semantic interpretation are ever introduced .",1031,5 +1032,"A further reduction in the search space is achieved by using [[ semantic ]] rather than syntactic categories on the << terminal and non-terminal edges >> , thereby reducing the amount of ambiguity and thus the number of edges , since only edges with a valid semantic interpretation are ever introduced .",1032,1 +1033,"A further reduction in the search space is achieved by using semantic rather than [[ syntactic categories ]] on the << terminal and non-terminal edges >> , thereby reducing the amount of ambiguity and thus the number of edges , since only edges with a valid 
semantic interpretation are ever introduced .",1033,1 +1034,[[ Automatic estimation of word significance ]] oriented for << speech-based Information Retrieval -LRB- IR -RRB- >> is addressed .,1034,3 +1035,"Since the significance of words differs in IR , << automatic speech recognition -LRB- ASR -RRB- >> performance has been evaluated based on [[ weighted word error rate -LRB- WWER -RRB- ]] , which gives a weight on errors from the viewpoint of IR , instead of word error rate -LRB- WER -RRB- , which treats all words uniformly .",1035,6 +1036,"Since the significance of words differs in IR , automatic speech recognition -LRB- ASR -RRB- performance has been evaluated based on << weighted word error rate -LRB- WWER -RRB- >> , which gives a weight on errors from the viewpoint of IR , instead of [[ word error rate -LRB- WER -RRB- ]] , which treats all words uniformly .",1036,5 +1037,"A [[ decoding strategy ]] that minimizes << WWER >> based on a Minimum Bayes-Risk framework has been shown , and the reduction of errors on both ASR and IR has been reported .",1037,3 +1038,"A << decoding strategy >> that minimizes WWER based on a [[ Minimum Bayes-Risk framework ]] has been shown , and the reduction of errors on both ASR and IR has been reported .",1038,3 +1039,"A decoding strategy that minimizes WWER based on a Minimum Bayes-Risk framework has been shown , and the reduction of errors on both [[ ASR ]] and << IR >> has been reported .",1039,0 +1040,"In this paper , we propose an [[ automatic estimation method ]] for << word significance -LRB- weights -RRB- >> based on its influence on IR .",1040,6 +1041,"Specifically , weights are estimated so that [[ evaluation measures ]] of << ASR >> and IR are equivalent .",1041,6 +1042,"Specifically , weights are estimated so that [[ evaluation measures ]] of ASR and << IR >> are equivalent .",1042,6 +1043,"Specifically , weights are estimated so that evaluation measures of [[ ASR ]] and << IR >> are equivalent .",1043,0 +1044,"We apply the proposed [[ method ]] to a << speech-based information retrieval system >> , which is a typical IR system , and show that the method works well .",1044,3 +1045,"We apply the proposed method to a [[ speech-based information retrieval system ]] , which is a typical << IR system >> , and show that the method works well .",1045,2 +1046,"[[ Methods ]] developed for << spelling correction >> for languages like English -LRB- see the review by Kukich -LRB- Kukich , 1992 -RRB- -RRB- are not readily applicable to agglutinative languages .",1046,3 +1047,"Methods developed for [[ spelling correction ]] for << languages >> like English -LRB- see the review by Kukich -LRB- Kukich , 1992 -RRB- -RRB- are not readily applicable to agglutinative languages .",1047,3 +1048,"Methods developed for spelling correction for << languages >> like [[ English ]] -LRB- see the review by Kukich -LRB- Kukich , 1992 -RRB- -RRB- are not readily applicable to agglutinative languages .",1048,2 +1049,This poster presents an [[ approach ]] to << spelling correction >> in agglutinative languages that is based on two-level morphology and a dynamic-programming based search algorithm .,1049,3 +1050,This poster presents an approach to << spelling correction >> in [[ agglutinative languages ]] that is based on two-level morphology and a dynamic-programming based search algorithm .,1050,3 +1051,This poster presents an approach to << spelling correction >> in agglutinative languages that is based on [[ two-level morphology ]] and a dynamic-programming based search 
algorithm .,1051,3 +1052,This poster presents an approach to spelling correction in agglutinative languages that is based on [[ two-level morphology ]] and a << dynamic-programming based search algorithm >> .,1052,0 +1053,This poster presents an approach to << spelling correction >> in agglutinative languages that is based on two-level morphology and a [[ dynamic-programming based search algorithm ]] .,1053,3 +1054,"After an overview of our approach , we present results from experiments with << spelling correction >> in [[ Turkish ]] .",1054,3 +1055,"In this paper , we present a novel [[ training method ]] for a << localized phrase-based prediction model >> for statistical machine translation -LRB- SMT -RRB- .",1055,3 +1056,"In this paper , we present a novel training method for a [[ localized phrase-based prediction model ]] for << statistical machine translation -LRB- SMT -RRB- >> .",1056,3 +1057,The [[ model ]] predicts blocks with orientation to handle << local phrase re-ordering >> .,1057,3 +1058,"We use a [[ maximum likelihood criterion ]] to train a << log-linear block bigram model >> which uses real-valued features -LRB- e.g. a language model score -RRB- as well as binary features based on the block identities themselves , e.g. block bigram features .",1058,3 +1059,"We use a maximum likelihood criterion to train a << log-linear block bigram model >> which uses [[ real-valued features ]] -LRB- e.g. a language model score -RRB- as well as binary features based on the block identities themselves , e.g. block bigram features .",1059,3 +1060,"We use a maximum likelihood criterion to train a log-linear block bigram model which uses [[ real-valued features ]] -LRB- e.g. a language model score -RRB- as well as << binary features >> based on the block identities themselves , e.g. block bigram features .",1060,0 +1061,"We use a maximum likelihood criterion to train a log-linear block bigram model which uses << real-valued features >> -LRB- e.g. a [[ language model score ]] -RRB- as well as binary features based on the block identities themselves , e.g. block bigram features .",1061,2 +1062,"We use a maximum likelihood criterion to train a << log-linear block bigram model >> which uses real-valued features -LRB- e.g. a language model score -RRB- as well as [[ binary features ]] based on the block identities themselves , e.g. 
block bigram features .",1062,3 +1063,Our << training algorithm >> can easily handle millions of [[ features ]] .,1063,3 +1064,The best [[ system ]] obtains a 18.6 % improvement over the << baseline >> on a standard Arabic-English translation task .,1064,5 +1065,The best << system >> obtains a 18.6 % improvement over the baseline on a standard [[ Arabic-English translation task ]] .,1065,6 +1066,The best system obtains a 18.6 % improvement over the << baseline >> on a standard [[ Arabic-English translation task ]] .,1066,6 +1067,In this paper we describe a novel [[ data structure ]] for << phrase-based statistical machine translation >> which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations .,1067,3 +1068,In this paper we describe a novel [[ data structure ]] for phrase-based statistical machine translation which allows for the << retrieval of arbitrarily long phrases >> while simultaneously using less memory than is required by current decoder implementations .,1068,3 +1069,We detail the [[ computational complexity ]] and << average retrieval times >> for looking up phrase translations in our suffix array-based data structure .,1069,0 +1070,We detail the computational complexity and average retrieval times for looking up [[ phrase translations ]] in our << suffix array-based data structure >> .,1070,4 +1071,We show how << sampling >> can be used to reduce the [[ retrieval time ]] by orders of magnitude with no loss in translation quality .,1071,6 +1072,We show how << sampling >> can be used to reduce the retrieval time by orders of magnitude with no loss in [[ translation quality ]] .,1072,6 +1073,"The major objective of this program is to develop and demonstrate robust , high performance [[ continuous speech recognition -LRB- CSR -RRB- techniques ]] focussed on application in << Spoken Language Systems -LRB- SLS -RRB- >> which will enhance the effectiveness of military and civilian computer-based systems .",1073,3 +1074,"The major objective of this program is to develop and demonstrate robust , high performance continuous speech recognition -LRB- CSR -RRB- techniques focussed on application in [[ Spoken Language Systems -LRB- SLS -RRB- ]] which will enhance the effectiveness of << military and civilian computer-based systems >> .",1074,3 +1075,"A key complementary objective is to define and develop applications of robust speech recognition and understanding systems , and to help catalyze the transition of [[ spoken language technology ]] into << military and civilian systems >> , with particular focus on application of robust CSR to mobile military command and control .",1075,3 +1076,"A key complementary objective is to define and develop applications of robust speech recognition and understanding systems , and to help catalyze the transition of spoken language technology into military and civilian systems , with particular focus on application of robust [[ CSR ]] to << mobile military command and control >> .",1076,3 +1077,"The research effort focusses on developing advanced [[ acoustic modelling ]] , << rapid search >> , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1077,0 +1078,"The research effort focusses on developing advanced [[ acoustic modelling ]] , rapid search , and recognition-time adaptation techniques for robust << large-vocabulary CSR 
>> , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1078,3 +1079,"The research effort focusses on developing advanced [[ acoustic modelling ]] , rapid search , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these << techniques >> to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1079,2 +1080,"The research effort focusses on developing advanced [[ acoustic modelling ]] , rapid search , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to << military application tasks >> .",1080,3 +1081,"The research effort focusses on developing advanced acoustic modelling , [[ rapid search ]] , and << recognition-time adaptation techniques >> for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1081,0 +1082,"The research effort focusses on developing advanced acoustic modelling , [[ rapid search ]] , and recognition-time adaptation techniques for robust << large-vocabulary CSR >> , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1082,3 +1083,"The research effort focusses on developing advanced acoustic modelling , [[ rapid search ]] , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these << techniques >> to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1083,2 +1084,"The research effort focusses on developing advanced acoustic modelling , [[ rapid search ]] , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to << military application tasks >> .",1084,3 +1085,"The research effort focusses on developing advanced acoustic modelling , rapid search , and [[ recognition-time adaptation techniques ]] for robust << large-vocabulary CSR >> , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1085,3 +1086,"The research effort focusses on developing advanced acoustic modelling , rapid search , and [[ recognition-time adaptation techniques ]] for robust large-vocabulary CSR , and on applying these << techniques >> to the new ARPA large-vocabulary CSR corpora and to military application tasks .",1086,2 +1087,"The research effort focusses on developing advanced acoustic modelling , rapid search , and [[ recognition-time adaptation techniques ]] for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to << military application tasks >> .",1087,3 +1088,"The research effort focusses on developing advanced acoustic modelling , rapid search , and recognition-time adaptation techniques for robust [[ large-vocabulary CSR ]] , and on applying these techniques to the new << ARPA large-vocabulary CSR corpora >> and to military application tasks .",1088,3 +1089,"The research effort focusses on developing advanced acoustic modelling , rapid search , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these [[ techniques ]] to the new ARPA large-vocabulary CSR corpora and to << military application tasks >> .",1089,3 +1090,"The research effort focusses on developing advanced 
acoustic modelling , rapid search , and recognition-time adaptation techniques for robust large-vocabulary CSR , and on applying these << techniques >> to the new [[ ARPA large-vocabulary CSR corpora ]] and to military application tasks .",1090,6 +1091,This paper examines what kind of << similarity between words >> can be represented by what kind of [[ word vectors ]] in the vector space model .,1091,3 +1092,This paper examines what kind of similarity between words can be represented by what kind of << word vectors >> in the [[ vector space model ]] .,1092,3 +1093,"Through two experiments , three [[ methods ]] for << constructing word vectors >> , i.e. , LSA-based , cooccurrence-based and dictionary-based methods , were compared in terms of the ability to represent two kinds of similarity , i.e. , taxonomic similarity and associative similarity .",1093,3 +1094,"Through two experiments , three methods for constructing word vectors , i.e. , [[ LSA-based , cooccurrence-based and dictionary-based methods ]] , were compared in terms of the ability to represent two kinds of << similarity >> , i.e. , taxonomic similarity and associative similarity .",1094,3 +1095,"Through two experiments , three methods for constructing word vectors , i.e. , LSA-based , cooccurrence-based and dictionary-based methods , were compared in terms of the ability to represent two kinds of << similarity >> , i.e. , [[ taxonomic similarity ]] and associative similarity .",1095,2 +1096,"Through two experiments , three methods for constructing word vectors , i.e. , LSA-based , cooccurrence-based and dictionary-based methods , were compared in terms of the ability to represent two kinds of similarity , i.e. , [[ taxonomic similarity ]] and << associative similarity >> .",1096,0 +1097,"Through two experiments , three methods for constructing word vectors , i.e. , LSA-based , cooccurrence-based and dictionary-based methods , were compared in terms of the ability to represent two kinds of << similarity >> , i.e. 
, taxonomic similarity and [[ associative similarity ]] .",1097,2 +1098,"The result of the comparison was that the [[ dictionary-based word vectors ]] better reflect << taxonomic similarity >> , while the LSA-based and the cooccurrence-based word vectors better reflect associative similarity .",1098,3 +1099,"The result of the comparison was that the dictionary-based word vectors better reflect taxonomic similarity , while the [[ LSA-based and the cooccurrence-based word vectors ]] better reflect << associative similarity >> .",1099,3 +1100,This paper presents a << maximum entropy word alignment algorithm >> for [[ Arabic-English ]] based on supervised training data .,1100,3 +1101,This paper presents a << maximum entropy word alignment algorithm >> for Arabic-English based on [[ supervised training data ]] .,1101,3 +1102,We demonstrate that it is feasible to create [[ training material ]] for problems in << machine translation >> and that a mixture of supervised and unsupervised methods yields superior performance .,1102,3 +1103,The [[ probabilistic model ]] used in the << alignment >> directly models the link decisions .,1103,3 +1104,The [[ probabilistic model ]] used in the alignment directly models the << link decisions >> .,1104,3 +1105,Significant improvement over traditional [[ word alignment techniques ]] is shown as well as improvement on several << machine translation tests >> .,1105,3 +1106,Performance of the [[ algorithm ]] is contrasted with << human annotation >> performance .,1106,5 +1107,"In this paper , we propose a novel [[ Cooperative Model ]] for << natural language understanding >> in a dialogue system .",1107,3 +1108,"In this paper , we propose a novel Cooperative Model for [[ natural language understanding ]] in a << dialogue system >> .",1108,3 +1109,We build << this >> based on both [[ Finite State Model -LRB- FSM -RRB- ]] and Statistical Learning Model -LRB- SLM -RRB- .,1109,3 +1110,We build this based on both [[ Finite State Model -LRB- FSM -RRB- ]] and << Statistical Learning Model -LRB- SLM -RRB- >> .,1110,0 +1111,We build << this >> based on both Finite State Model -LRB- FSM -RRB- and [[ Statistical Learning Model -LRB- SLM -RRB- ]] .,1111,3 +1112,[[ FSM ]] provides two strategies for << language understanding >> and have a high accuracy but little robustness and flexibility .,1112,3 +1113,The [[ ambiguity resolution of right-side dependencies ]] is essential for << dependency parsing >> of sentences with two or more verbs .,1113,3 +1114,Previous works on shift-reduce dependency parsers may not guarantee the [[ connectivity ]] of a << dependency tree >> due to their weakness at resolving the right-side dependencies .,1114,6 +1115,This paper proposes a << two-phase shift-reduce dependency parser >> based on [[ SVM learning ]] .,1115,3 +1116,"The [[ left-side dependents ]] and << right-side nominal dependents >> are detected in Phase I , and right-side verbal dependents are decided in Phase II .",1116,0 +1117,"The left-side dependents and << right-side nominal dependents >> are detected in Phase I , and [[ right-side verbal dependents ]] are decided in Phase II .",1117,0 +1118,"In experimental evaluation , our proposed [[ method ]] outperforms previous << shift-reduce dependency parsers >> for the Chine language , showing improvement of dependency accuracy by 10.08 % .",1118,5 +1119,"In experimental evaluation , our proposed << method >> outperforms previous shift-reduce dependency parsers for the [[ Chine language ]] , showing improvement of dependency accuracy 
by 10.08 % .",1119,6 +1120,"In experimental evaluation , our proposed method outperforms previous << shift-reduce dependency parsers >> for the [[ Chine language ]] , showing improvement of dependency accuracy by 10.08 % .",1120,6 +1121,"In experimental evaluation , our proposed << method >> outperforms previous shift-reduce dependency parsers for the Chine language , showing improvement of [[ dependency accuracy ]] by 10.08 % .",1121,6 +1122,"In experimental evaluation , our proposed method outperforms previous << shift-reduce dependency parsers >> for the Chine language , showing improvement of [[ dependency accuracy ]] by 10.08 % .",1122,6 +1123,"By using [[ commands ]] or << rules >> which are defined to facilitate the construction of format expected or some mathematical expressions , elaborate and pretty documents can be successfully obtained .",1123,0 +1124,"By using [[ commands ]] or rules which are defined to facilitate the construction of format expected or some << mathematical expressions >> , elaborate and pretty documents can be successfully obtained .",1124,3 +1125,"By using commands or [[ rules ]] which are defined to facilitate the construction of format expected or some << mathematical expressions >> , elaborate and pretty documents can be successfully obtained .",1125,3 +1126,This paper presents an [[ evaluation method ]] employing a latent variable model for << paraphrases >> with their contexts .,1126,6 +1127,This paper presents an << evaluation method >> employing a [[ latent variable model ]] for paraphrases with their contexts .,1127,3 +1128,The results also revealed an upper bound of [[ accuracy ]] of 77 % with the << method >> when using only topic information .,1128,6 +1129,The results also revealed an upper bound of accuracy of 77 % with the << method >> when using only [[ topic information ]] .,1129,3 +1130,We describe the [[ methods ]] and << hardware >> that we are using to produce a real-time demonstration of an integrated Spoken Language System .,1130,0 +1131,We describe the [[ methods ]] and hardware that we are using to produce a real-time demonstration of an << integrated Spoken Language System >> .,1131,3 +1132,We describe the methods and [[ hardware ]] that we are using to produce a real-time demonstration of an << integrated Spoken Language System >> .,1132,3 +1133,We describe [[ algorithms ]] that greatly reduce the computation needed to compute the << N-Best sentence hypotheses >> .,1133,3 +1134,To avoid << grammar coverage problems >> we use a [[ fully-connected first-order statistical class grammar ]] .,1134,3 +1135,"The << speech-search algorithm >> is implemented on a [[ board ]] with a single Intel i860 chip , which provides a factor of 5 speed-up over a SUN 4 for straight C code .",1135,3 +1136,"The speech-search algorithm is implemented on a << board >> with a single [[ Intel i860 chip ]] , which provides a factor of 5 speed-up over a SUN 4 for straight C code .",1136,4 +1137,"The speech-search algorithm is implemented on a board with a single [[ Intel i860 chip ]] , which provides a factor of 5 speed-up over a << SUN 4 >> for straight C code .",1137,5 +1138,"The speech-search algorithm is implemented on a board with a single [[ Intel i860 chip ]] , which provides a factor of 5 speed-up over a SUN 4 for << straight C code >> .",1138,3 +1139,"The speech-search algorithm is implemented on a board with a single Intel i860 chip , which provides a factor of 5 speed-up over a [[ SUN 4 ]] for << straight C code >> .",1139,3 +1140,"The [[ board ]] 
plugs directly into the VME bus of the SUN4 , which controls the << system >> and contains the natural language system and application back end .",1140,3 +1141,"The board plugs directly into the [[ VME bus ]] of the << SUN4 >> , which controls the system and contains the natural language system and application back end .",1141,4 +1142,"The << board >> plugs directly into the VME bus of the SUN4 , which controls the system and contains the [[ natural language system ]] and application back end .",1142,4 +1143,"The board plugs directly into the VME bus of the SUN4 , which controls the system and contains the [[ natural language system ]] and << application back end >> .",1143,0 +1144,"The << board >> plugs directly into the VME bus of the SUN4 , which controls the system and contains the natural language system and [[ application back end ]] .",1144,4 +1145,We address the problem of << estimating location information >> of an [[ image ]] using principles from automated representation learning .,1145,3 +1146,We address the problem of << estimating location information >> of an image using principles from [[ automated representation learning ]] .,1146,3 +1147,"We pursue a hierarchical sparse coding approach that learns features useful in discriminating images across locations , by initializing << it >> with a [[ geometric prior ]] corresponding to transformations between image appearance space and their corresponding location grouping space using the notion of parallel transport on manifolds .",1147,3 +1148,"We pursue a hierarchical sparse coding approach that learns features useful in discriminating images across locations , by initializing it with a << geometric prior >> corresponding to transformations between image appearance space and their corresponding location grouping space using the notion of [[ parallel transport on manifolds ]] .",1148,3 +1149,"We then extend this [[ approach ]] to account for the availability of << heterogeneous data modalities >> such as geo-tags and videos pertaining to different locations , and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations .",1149,3 +1150,"We then extend this approach to account for the availability of << heterogeneous data modalities >> such as [[ geo-tags ]] and videos pertaining to different locations , and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations .",1150,2 +1151,"We then extend this approach to account for the availability of heterogeneous data modalities such as [[ geo-tags ]] and << videos >> pertaining to different locations , and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations .",1151,0 +1152,"We then extend this approach to account for the availability of << heterogeneous data modalities >> such as geo-tags and [[ videos ]] pertaining to different locations , and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations .",1152,2 +1153,"We then extend this approach to account for the availability of heterogeneous data modalities such as geo-tags and videos pertaining to different locations , and also study a relatively under-addressed problem of [[ transferring knowledge ]] available from certain 
locations to infer the << grouping of data >> from novel locations .",1153,3 +1154,"We evaluate our << approach >> on several standard [[ datasets ]] such as im2gps , San Francisco and MediaEval2010 , and obtain state-of-the-art results .",1154,6 +1155,"We evaluate our approach on several standard << datasets >> such as [[ im2gps ]] , San Francisco and MediaEval2010 , and obtain state-of-the-art results .",1155,2 +1156,"We evaluate our approach on several standard datasets such as [[ im2gps ]] , << San Francisco >> and MediaEval2010 , and obtain state-of-the-art results .",1156,0 +1157,"We evaluate our approach on several standard << datasets >> such as im2gps , [[ San Francisco ]] and MediaEval2010 , and obtain state-of-the-art results .",1157,2 +1158,"We evaluate our approach on several standard datasets such as im2gps , [[ San Francisco ]] and << MediaEval2010 >> , and obtain state-of-the-art results .",1158,0 +1159,"We evaluate our approach on several standard << datasets >> such as im2gps , San Francisco and [[ MediaEval2010 ]] , and obtain state-of-the-art results .",1159,2 +1160,Conventional << HMMs >> have [[ weak duration constraints ]] .,1160,1 +1161,"In noisy conditions , the mismatch between corrupted speech signals and << models >> trained on [[ clean speech ]] may cause the decoder to produce word matches with unrealistic durations .",1161,3 +1162,"In noisy conditions , the mismatch between corrupted speech signals and models trained on clean speech may cause the [[ decoder ]] to produce << word matches >> with unrealistic durations .",1162,3 +1163,"In noisy conditions , the mismatch between corrupted speech signals and models trained on clean speech may cause the decoder to produce << word matches >> with [[ unrealistic durations ]] .",1163,1 +1164,This paper presents a simple way to incorporate << word duration constraints >> by [[ unrolling HMMs ]] to form a lattice where word duration probabilities can be applied directly to state transitions .,1164,3 +1165,This paper presents a simple way to incorporate word duration constraints by [[ unrolling HMMs ]] to form a << lattice >> where word duration probabilities can be applied directly to state transitions .,1165,3 +1166,This paper presents a simple way to incorporate word duration constraints by unrolling HMMs to form a lattice where [[ word duration probabilities ]] can be applied directly to << state transitions >> .,1166,3 +1167,The expanded << HMMs >> are compatible with conventional [[ Viterbi decoding ]] .,1167,0 +1168,"Experiments on [[ connected-digit recognition ]] show that when using explicit duration constraints the << decoder >> generates word matches with more reasonable durations , and word error rates are significantly reduced across a broad range of noise conditions .",1168,3 +1169,"Experiments on connected-digit recognition show that when using explicit [[ duration constraints ]] the << decoder >> generates word matches with more reasonable durations , and word error rates are significantly reduced across a broad range of noise conditions .",1169,3 +1170,"Experiments on connected-digit recognition show that when using explicit duration constraints the [[ decoder ]] generates << word matches >> with more reasonable durations , and word error rates are significantly reduced across a broad range of noise conditions .",1170,3 +1171,One of the claimed benefits of [[ Tree Adjoining Grammars ]] is that they have an << extended domain of locality -LRB- EDOL -RRB- >> .,1171,1 +1172,We consider how this can be 
exploited to limit the need for [[ feature structure unification ]] during << parsing >> .,1172,3 +1173,"We compare two wide-coverage << lexicalized grammars of English >> , [[ LEXSYS ]] and XTAG , finding that the two grammars exploit EDOL in different ways .",1173,2 +1174,"We compare two wide-coverage lexicalized grammars of English , [[ LEXSYS ]] and << XTAG >> , finding that the two grammars exploit EDOL in different ways .",1174,5 +1175,"We compare two wide-coverage << lexicalized grammars of English >> , LEXSYS and [[ XTAG ]] , finding that the two grammars exploit EDOL in different ways .",1175,2 +1176,"We compare two wide-coverage lexicalized grammars of English , LEXSYS and XTAG , finding that the two [[ grammars ]] exploit << EDOL >> in different ways .",1176,3 +1177,[[ Identity uncertainty ]] is a pervasive problem in << real-world data analysis >> .,1177,2 +1178,"Our << approach >> is based on the use of a [[ relational probability model ]] to define a generative model for the domain , including models of author and title corruption and a probabilistic citation grammar .",1178,3 +1179,"Our approach is based on the use of a [[ relational probability model ]] to define a << generative model >> for the domain , including models of author and title corruption and a probabilistic citation grammar .",1179,3 +1180,"Our approach is based on the use of a relational probability model to define a [[ generative model ]] for the << domain >> , including models of author and title corruption and a probabilistic citation grammar .",1180,3 +1181,"Our approach is based on the use of a << relational probability model >> to define a generative model for the domain , including [[ models of author and title corruption ]] and a probabilistic citation grammar .",1181,4 +1182,"Our approach is based on the use of a relational probability model to define a generative model for the domain , including [[ models of author and title corruption ]] and a << probabilistic citation grammar >> .",1182,0 +1183,"Our approach is based on the use of a << relational probability model >> to define a generative model for the domain , including models of author and title corruption and a [[ probabilistic citation grammar ]] .",1183,4 +1184,<< Identity uncertainty >> is handled by extending standard [[ models ]] to incorporate probabilities over the possible mappings between terms in the language and objects in the domain .,1184,3 +1185,"<< Inference >> is based on [[ Markov chain Monte Carlo ]] , augmented with specific methods for generating efficient proposals when the domain contains many objects .",1185,3 +1186,"<< Inference >> is based on Markov chain Monte Carlo , augmented with specific [[ methods ]] for generating efficient proposals when the domain contains many objects .",1186,3 +1187,Results on several [[ citation data sets ]] show that the << method >> outperforms current algorithms for citation matching .,1187,6 +1188,Results on several citation data sets show that the [[ method ]] outperforms << current algorithms >> for citation matching .,1188,5 +1189,Results on several citation data sets show that the [[ method ]] outperforms current algorithms for << citation matching >> .,1189,3 +1190,Results on several citation data sets show that the method outperforms [[ current algorithms ]] for << citation matching >> .,1190,3 +1191,"The declarative , relational nature of the model also means that our [[ algorithm ]] can determine << object characteristics >> such as author names by combining multiple citations 
of multiple papers .",1191,3 +1192,"The declarative , relational nature of the model also means that our algorithm can determine << object characteristics >> such as [[ author names ]] by combining multiple citations of multiple papers .",1192,2 +1193,The paper proposes and empirically motivates an integration of [[ supervised learning ]] with unsupervised learning to deal with << human biases in summarization >> .,1193,3 +1194,The paper proposes and empirically motivates an integration of << supervised learning >> with [[ unsupervised learning ]] to deal with human biases in summarization .,1194,0 +1195,The paper proposes and empirically motivates an integration of supervised learning with [[ unsupervised learning ]] to deal with << human biases in summarization >> .,1195,3 +1196,"In particular , we explore the use of << probabilistic decision tree >> within the [[ clustering framework ]] to account for the variation as well as regularity in human created summaries .",1196,1 +1197,The << corpus of human created extracts >> is created from a [[ newspaper corpus ]] and used as a test set .,1197,3 +1198,We build probabilistic decision trees of different flavors and integrate each of << them >> with the [[ clustering framework ]] .,1198,0 +1199,"In this study , we propose a knowledge-independent method for aligning terms and thus extracting translations from a << small , domain-specific corpus >> consisting of [[ parallel English and Chinese court judgments ]] from Hong Kong .",1199,4 +1200,"With a [[ sentence-aligned corpus ]] , << translation equivalences >> are suggested by analysing the frequency profiles of parallel concordances .",1200,3 +1201,"With a sentence-aligned corpus , translation equivalences are suggested by analysing the [[ frequency profiles ]] of << parallel concordances >> .",1201,4 +1202,"The [[ method ]] overcomes the limitations of conventional << statistical methods >> which require large corpora to be effective , and lexical approaches which depend on existing bilingual dictionaries .",1202,5 +1203,"The [[ method ]] overcomes the limitations of conventional statistical methods which require large corpora to be effective , and << lexical approaches >> which depend on existing bilingual dictionaries .",1203,5 +1204,"The method overcomes the limitations of conventional << statistical methods >> which require [[ large corpora ]] to be effective , and lexical approaches which depend on existing bilingual dictionaries .",1204,3 +1205,"The method overcomes the limitations of conventional statistical methods which require large corpora to be effective , and << lexical approaches >> which depend on existing [[ bilingual dictionaries ]] .",1205,3 +1206,Pilot testing on a parallel corpus of about 113K Chinese words and 120K English words gives an encouraging 85 % [[ precision ]] and 45 % << recall >> .,1206,0 +1207,"Future work includes fine-tuning the algorithm upon the analysis of the errors , and acquiring a [[ translation lexicon ]] for << legal terminology >> by filtering out general terms .",1207,3 +1208,"Traditional [[ machine learning techniques ]] have been applied to this << problem >> with reasonable success , but they have been shown to work well only when there is a good match between the training and test data with respect to topic .",1208,3 +1209,"This paper demonstrates that match with respect to domain and time is also important , and presents preliminary experiments with << training data >> labeled with [[ emoticons ]] , which has the potential of being 
independent of domain , topic and time .",1209,1 +1210,We present a novel [[ algorithm ]] for estimating the broad << 3D geometric structure of outdoor video scenes >> .,1210,3 +1211,"Leveraging [[ spatio-temporal video segmentation ]] , we decompose a << dynamic scene >> captured by a video into geometric classes , based on predictions made by region-classifiers that are trained on appearance and motion features .",1211,3 +1212,"Leveraging spatio-temporal video segmentation , we decompose a << dynamic scene >> captured by a video into [[ geometric classes ]] , based on predictions made by region-classifiers that are trained on appearance and motion features .",1212,4 +1213,"Leveraging spatio-temporal video segmentation , we decompose a dynamic scene captured by a video into << geometric classes >> , based on predictions made by [[ region-classifiers ]] that are trained on appearance and motion features .",1213,3 +1214,"Leveraging spatio-temporal video segmentation , we decompose a dynamic scene captured by a video into geometric classes , based on predictions made by << region-classifiers >> that are trained on [[ appearance and motion features ]] .",1214,3 +1215,"We built a novel , extensive [[ dataset ]] on geometric context of video to evaluate our << method >> , consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames .",1215,6 +1216,"We built a novel , extensive << dataset >> on [[ geometric context of video ]] to evaluate our method , consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames .",1216,1 +1217,"We built a novel , extensive << dataset >> on geometric context of video to evaluate our method , consisting of over 100 ground-truth [[ annotated outdoor videos ]] with over 20,000 frames .",1217,4 +1218,"To further scale beyond this dataset , we propose a [[ semi-supervised learning framework ]] to expand the pool of << labeled data >> with high confidence predictions obtained from unlabeled data .",1218,3 +1219,"To further scale beyond this dataset , we propose a << semi-supervised learning framework >> to expand the pool of labeled data with [[ high confidence predictions ]] obtained from unlabeled data .",1219,3 +1220,"To further scale beyond this dataset , we propose a semi-supervised learning framework to expand the pool of labeled data with << high confidence predictions >> obtained from [[ unlabeled data ]] .",1220,3 +1221,Our [[ system ]] produces an accurate prediction of << geometric context of video >> achieving 96 % accuracy across main geometric classes .,1221,3 +1222,Our << system >> produces an accurate prediction of geometric context of video achieving 96 % [[ accuracy ]] across main geometric classes .,1222,6 +1223,This paper describes a [[ system ]] -LRB- RAREAS -RRB- which synthesizes << marine weather forecasts >> directly from formatted weather data .,1223,3 +1224,This paper describes a << system >> -LRB- RAREAS -RRB- which synthesizes marine weather forecasts directly from [[ formatted weather data ]] .,1224,3 +1225,Such << synthesis >> appears feasible in certain [[ natural sublanguages with stereotyped text structure ]] .,1225,3 +1226,<< RAREAS >> draws on several kinds of [[ linguistic and non-linguistic knowledge ]] and mirrors a forecaster 's apparent tendency to ascribe less precise temporal adverbs to more remote meteorological events .,1226,3 +1227,The << approach >> can easily be adapted to synthesize [[ bilingual or multi-lingual texts ]] .,1227,3 +1228,"We go , on to describe [[ FlexP ]] , 
a << bottom-up pattern-matching parser >> that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system .",1228,2 +1229,"We go , on to describe FlexP , a [[ bottom-up pattern-matching parser ]] that we have designed and implemented to provide these << flexibilities >> for restricted natural language input to a limited-domain computer system .",1229,3 +1230,"We go , on to describe FlexP , a bottom-up pattern-matching parser that we have designed and implemented to provide these [[ flexibilities ]] for << restricted natural language >> input to a limited-domain computer system .",1230,1 +1231,"We go , on to describe FlexP , a bottom-up pattern-matching parser that we have designed and implemented to provide these [[ flexibilities ]] for restricted natural language input to a << limited-domain computer system >> .",1231,4 +1232,"We go , on to describe FlexP , a << bottom-up pattern-matching parser >> that we have designed and implemented to provide these flexibilities for [[ restricted natural language ]] input to a limited-domain computer system .",1232,3 +1233,Traditional << information retrieval techniques >> use a [[ histogram of keywords ]] as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance .,1233,3 +1234,Traditional information retrieval techniques use a [[ histogram of keywords ]] as the << document representation >> but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance .,1234,3 +1235,"An alternative index could be the << activity >> such as [[ discussing ]] , planning , informing , story-telling , etc. .",1235,2 +1236,"An alternative index could be the activity such as [[ discussing ]] , << planning >> , informing , story-telling , etc. .",1236,0 +1237,"An alternative index could be the << activity >> such as discussing , [[ planning ]] , informing , story-telling , etc. .",1237,2 +1238,"An alternative index could be the activity such as discussing , [[ planning ]] , << informing >> , story-telling , etc. .",1238,0 +1239,"An alternative index could be the << activity >> such as discussing , planning , [[ informing ]] , story-telling , etc. .",1239,2 +1240,"An alternative index could be the activity such as discussing , planning , [[ informing ]] , << story-telling >> , etc. .",1240,0 +1241,"An alternative index could be the << activity >> such as discussing , planning , informing , [[ story-telling ]] , etc. 
.",1241,2 +1242,This paper addresses the problem of the << automatic detection >> of those [[ activities ]] in meeting situation and everyday rejoinders .,1242,3 +1243,The format of the << corpus >> adopts the [[ Child Language Data Exchange System -LRB- CHILDES -RRB- ]] .,1243,1 +1244,"In this paper , we describe [[ data collection ]] , << transcription >> , word segmentation , and part-of-speech annotation of this corpus .",1244,0 +1245,"In this paper , we describe [[ data collection ]] , transcription , word segmentation , and part-of-speech annotation of this << corpus >> .",1245,3 +1246,"In this paper , we describe data collection , [[ transcription ]] , << word segmentation >> , and part-of-speech annotation of this corpus .",1246,0 +1247,"In this paper , we describe data collection , [[ transcription ]] , word segmentation , and part-of-speech annotation of this << corpus >> .",1247,3 +1248,"In this paper , we describe data collection , transcription , [[ word segmentation ]] , and << part-of-speech annotation >> of this corpus .",1248,0 +1249,"In this paper , we describe data collection , transcription , [[ word segmentation ]] , and part-of-speech annotation of this << corpus >> .",1249,3 +1250,"In this paper , we describe data collection , transcription , word segmentation , and [[ part-of-speech annotation ]] of this << corpus >> .",1250,3 +1251,This paper shows how << dictionary word sense definitions >> can be analysed by applying a [[ hierarchy of phrasal patterns ]] .,1251,3 +1252,An experimental << system >> embodying this [[ mechanism ]] has been implemented for processing definitions from the Longman Dictionary of Contemporary English .,1252,4 +1253,"A property of this dictionary , exploited by the system , is that << it >> uses a [[ restricted vocabulary ]] in its word sense definitions .",1253,3 +1254,"A property of this dictionary , exploited by the system , is that it uses a [[ restricted vocabulary ]] in its << word sense definitions >> .",1254,3 +1255,The structures generated by the experimental system are intended to be used for the << classification of new word senses >> in terms of the senses of words in the [[ restricted vocabulary ]] .,1255,3 +1256,Thus the work reported addresses two [[ robustness problems ]] faced by current experimental << natural language processing systems >> : coping with an incomplete lexicon and with incomplete knowledge of phrasal constructions .,1256,1 +1257,Thus the work reported addresses two << robustness problems >> faced by current experimental natural language processing systems : coping with an [[ incomplete lexicon ]] and with incomplete knowledge of phrasal constructions .,1257,2 +1258,Thus the work reported addresses two << robustness problems >> faced by current experimental natural language processing systems : coping with an incomplete lexicon and with [[ incomplete knowledge of phrasal constructions ]] .,1258,2 +1259,"This paper presents a << word segmentation system >> in France Telecom R&D Beijing , which uses a unified [[ approach ]] to word breaking and OOV identification .",1259,3 +1260,"This paper presents a word segmentation system in France Telecom R&D Beijing , which uses a unified [[ approach ]] to << word breaking >> and OOV identification .",1260,3 +1261,"This paper presents a word segmentation system in France Telecom R&D Beijing , which uses a unified [[ approach ]] to word breaking and << OOV identification >> .",1261,3 +1262,"This paper presents a word segmentation system in France Telecom R&D Beijing , 
which uses a unified approach to [[ word breaking ]] and << OOV identification >> .",1262,0 +1263,"The system participated in all the tracks of the << segmentation bakeoff >> -- [[ PK-open ]] , PK-closed , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1263,2 +1264,"The system participated in all the tracks of the segmentation bakeoff -- [[ PK-open ]] , << PK-closed >> , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1264,0 +1265,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , [[ PK-closed ]] , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1265,2 +1266,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , [[ PK-closed ]] , << AS-open >> , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1266,0 +1267,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , [[ AS-open ]] , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1267,2 +1268,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , [[ AS-open ]] , << AS-closed >> , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1268,0 +1269,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , AS-open , [[ AS-closed ]] , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1269,2 +1270,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , [[ AS-closed ]] , << HK-open >> , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1270,0 +1271,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , AS-open , AS-closed , [[ HK-open ]] , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1271,2 +1272,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , AS-closed , [[ HK-open ]] , << HK-closed >> , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1272,0 +1273,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , AS-open , AS-closed , HK-open , [[ HK-closed ]] , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1273,2 +1274,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , AS-closed , HK-open , [[ HK-closed ]] , << MSR-open >> and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1274,0 +1275,"The system participated 
in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , AS-open , AS-closed , HK-open , HK-closed , [[ MSR-open ]] and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1275,2 +1276,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , AS-closed , HK-open , HK-closed , [[ MSR-open ]] and << MSR - closed >> -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1276,0 +1277,"The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , PK-closed , AS-open , AS-closed , HK-open , HK-closed , MSR-open and [[ MSR - closed ]] -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .",1277,2 +1278,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in [[ MSR-open ]] , << MSR-close >> and PK-open tracks .",1278,0 +1279,"The system participated in all the tracks of the segmentation bakeoff -- PK-open , PK-closed , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , [[ MSR-close ]] and << PK-open >> tracks .",1279,0 +1280,This paper describes a [[ method ]] of << interactively visualizing and directing the process of translating >> a sentence .,1280,3 +1281,"The [[ method ]] allows a user to explore a << model >> of syntax-based statistical machine translation -LRB- MT -RRB- , to understand the model 's strengths and weaknesses , and to compare it to other MT systems .",1281,3 +1282,"The method allows a user to explore a [[ model ]] of << syntax-based statistical machine translation -LRB- MT -RRB- >> , to understand the model 's strengths and weaknesses , and to compare it to other MT systems .",1282,3 +1283,"The method allows a user to explore a model of syntax-based statistical machine translation -LRB- MT -RRB- , to understand the model 's strengths and weaknesses , and to compare [[ it ]] to other << MT systems >> .",1283,5 +1284,"Using this [[ visualization method ]] , we can find and address conceptual and practical problems in an << MT system >> .",1284,3 +1285,"A [[ method ]] of << sense resolution >> is proposed that is based on WordNet , an on-line lexical database that incorporates semantic relations -LRB- synonymy , antonymy , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1285,3 +1286,"A << method >> of sense resolution is proposed that is based on [[ WordNet ]] , an on-line lexical database that incorporates semantic relations -LRB- synonymy , antonymy , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1286,3 +1287,"A method of sense resolution is proposed that is based on [[ WordNet ]] , an << on-line lexical database >> that incorporates semantic relations -LRB- synonymy , antonymy , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1287,2 +1288,"A method of sense resolution is proposed that is based on << WordNet >> , an on-line lexical database that incorporates [[ semantic relations ]] -LRB- synonymy , antonymy , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1288,4 +1289,"A method of sense 
resolution is proposed that is based on WordNet , an on-line lexical database that incorporates << semantic relations >> -LRB- [[ synonymy ]] , antonymy , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1289,2 +1290,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates semantic relations -LRB- [[ synonymy ]] , << antonymy >> , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1290,0 +1291,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates << semantic relations >> -LRB- synonymy , [[ antonymy ]] , hyponymy , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1291,2 +1292,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates semantic relations -LRB- synonymy , [[ antonymy ]] , << hyponymy >> , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1292,0 +1293,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates << semantic relations >> -LRB- synonymy , antonymy , [[ hyponymy ]] , meronymy , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1293,2 +1294,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates semantic relations -LRB- synonymy , antonymy , [[ hyponymy ]] , << meronymy >> , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1294,0 +1295,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates << semantic relations >> -LRB- synonymy , antonymy , hyponymy , [[ meronymy ]] , causal and troponymic entailment -RRB- as labeled pointers between word senses .",1295,2 +1296,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates semantic relations -LRB- synonymy , antonymy , hyponymy , [[ meronymy ]] , << causal and troponymic entailment >> -RRB- as labeled pointers between word senses .",1296,0 +1297,"A method of sense resolution is proposed that is based on WordNet , an on-line lexical database that incorporates << semantic relations >> -LRB- synonymy , antonymy , hyponymy , meronymy , [[ causal and troponymic entailment ]] -RRB- as labeled pointers between word senses .",1297,2 +1298,"With [[ WordNet ]] , it is easy to retrieve sets of << semantically related words >> , a facility that will be used for sense resolution during text processing , as follows .",1298,3 +1299,"With WordNet , it is easy to retrieve sets of [[ semantically related words ]] , a facility that will be used for << sense resolution >> during text processing , as follows .",1299,3 +1300,"With WordNet , it is easy to retrieve sets of semantically related words , a facility that will be used for [[ sense resolution ]] during << text processing >> , as follows .",1300,3 +1301,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; [[ WordNet ]] will then be used to estimate the << semantic distance >> from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring 
in the same context If successful , this procedure could have practical applications to problems of information retrieval , mechanical translation , intelligent tutoring systems , and elsewhere .",1301,3 +1302,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring in the same context If successful , this [[ procedure ]] could have practical applications to problems of << information retrieval >> , mechanical translation , intelligent tutoring systems , and elsewhere .",1302,3 +1303,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring in the same context If successful , this [[ procedure ]] could have practical applications to problems of information retrieval , << mechanical translation >> , intelligent tutoring systems , and elsewhere .",1303,3 +1304,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring in the same context If successful , this [[ procedure ]] could have practical applications to problems of information retrieval , mechanical translation , << intelligent tutoring systems >> , and elsewhere .",1304,3 +1305,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring in the same context If successful , this procedure could have practical applications to problems of [[ information retrieval ]] , << mechanical translation >> , intelligent tutoring systems , and elsewhere .",1305,0 +1306,"Or , -LRB- 2 -RRB- the context of the polysemous word will be used as a key to search a large corpus ; all words found to occur in that context will be noted ; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word ; and that sense will be chosen that is closest in meaning to other words occurring in the same context If successful , this procedure could have practical applications to problems of information retrieval , [[ mechanical translation ]] , << intelligent tutoring systems >> , and elsewhere .",1306,0 +1307,The [[ interlingual approach ]] to << MT >> has been repeatedly advocated by researchers originally interested in natural language understanding who take machine translation to be one possible application .,1307,3 +1308,"In contrast , our project , the [[ Mu-project ]] , adopts the transfer approach as the basic framework of << MT >> .",1308,3 +1309,"In contrast , our project 
, the << Mu-project >> , adopts the [[ transfer approach ]] as the basic framework of MT .",1309,3 +1310,"This paper describes the detailed construction of the [[ transfer phase ]] of our << system >> from Japanese to English , and gives some examples of problems which seem difficult to treat in the interlingual approach .",1310,4 +1311,The basic design principles of the [[ transfer phase ]] of our << system >> have already been mentioned in -LRB- 1 -RRB- -LRB- 2 -RRB- .,1311,4 +1312,Some of the << principles >> which are relevant to the topic of this paper are : -LRB- a -RRB- [[ Multiple Layer of Grammars ]] -LRB- b -RRB- Multiple Layer Presentation -LRB- c -RRB- Lexicon Driven Processing -LRB- d -RRB- Form-Oriented Dictionary Description .,1312,4 +1313,Some of the principles which are relevant to the topic of this paper are : -LRB- a -RRB- [[ Multiple Layer of Grammars ]] -LRB- b -RRB- << Multiple Layer Presentation >> -LRB- c -RRB- Lexicon Driven Processing -LRB- d -RRB- Form-Oriented Dictionary Description .,1313,0 +1314,Some of the << principles >> which are relevant to the topic of this paper are : -LRB- a -RRB- Multiple Layer of Grammars -LRB- b -RRB- [[ Multiple Layer Presentation ]] -LRB- c -RRB- Lexicon Driven Processing -LRB- d -RRB- Form-Oriented Dictionary Description .,1314,4 +1315,Some of the principles which are relevant to the topic of this paper are : -LRB- a -RRB- Multiple Layer of Grammars -LRB- b -RRB- [[ Multiple Layer Presentation ]] -LRB- c -RRB- << Lexicon Driven Processing >> -LRB- d -RRB- Form-Oriented Dictionary Description .,1315,0 +1316,Some of the << principles >> which are relevant to the topic of this paper are : -LRB- a -RRB- Multiple Layer of Grammars -LRB- b -RRB- Multiple Layer Presentation -LRB- c -RRB- [[ Lexicon Driven Processing ]] -LRB- d -RRB- Form-Oriented Dictionary Description .,1316,4 +1317,Some of the principles which are relevant to the topic of this paper are : -LRB- a -RRB- Multiple Layer of Grammars -LRB- b -RRB- Multiple Layer Presentation -LRB- c -RRB- [[ Lexicon Driven Processing ]] -LRB- d -RRB- << Form-Oriented Dictionary Description >> .,1317,0 +1318,Some of the << principles >> which are relevant to the topic of this paper are : -LRB- a -RRB- Multiple Layer of Grammars -LRB- b -RRB- Multiple Layer Presentation -LRB- c -RRB- Lexicon Driven Processing -LRB- d -RRB- [[ Form-Oriented Dictionary Description ]] .,1318,4 +1319,This paper also shows how these [[ principles ]] are realized in the current << system >> .,1319,4 +1320,In this paper discourse segments are defined and a [[ method ]] for << discourse segmentation >> primarily based on abduction of temporal relations between segments is proposed .,1320,3 +1321,In this paper discourse segments are defined and a method for << discourse segmentation >> primarily based on [[ abduction of temporal relations ]] between segments is proposed .,1321,3 +1322,This << method >> is precise and computationally feasible and is supported by previous work in the area of [[ temporal anaphora resolution ]] .,1322,3 +1323,This paper describes to what extent << deep processing >> may benefit from [[ shallow techniques ]] and it presents a NLP system which integrates a linguistic PoS tagger and chunker as a preprocessing module of a broad coverage unification based grammar of Spanish .,1323,3 +1324,This paper describes to what extent deep processing may benefit from shallow techniques and it presents a NLP system which integrates a [[ linguistic PoS tagger and chunker ]] as a preprocessing module of a 
<< broad coverage unification based grammar of Spanish >> .,1324,4 +1325,This paper describes to what extent deep processing may benefit from shallow techniques and it presents a << NLP system >> which integrates a linguistic PoS tagger and chunker as a preprocessing module of a [[ broad coverage unification based grammar of Spanish ]] .,1325,3 +1326,Experiments show that the efficiency of the overall analysis improves significantly and that our [[ system ]] also provides robustness to the << linguistic processing >> while maintaining both the accuracy and the precision of the grammar .,1326,3 +1327,Experiments show that the efficiency of the overall analysis improves significantly and that our << system >> also provides [[ robustness ]] to the linguistic processing while maintaining both the accuracy and the precision of the grammar .,1327,6 +1328,Experiments show that the efficiency of the overall analysis improves significantly and that our << system >> also provides robustness to the linguistic processing while maintaining both the [[ accuracy ]] and the precision of the grammar .,1328,6 +1329,Experiments show that the efficiency of the overall analysis improves significantly and that our system also provides robustness to the linguistic processing while maintaining both the [[ accuracy ]] and the << precision >> of the grammar .,1329,0 +1330,Experiments show that the efficiency of the overall analysis improves significantly and that our << system >> also provides robustness to the linguistic processing while maintaining both the accuracy and the [[ precision ]] of the grammar .,1330,6 +1331,[[ Joint image filters ]] can leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for << suppressing noise >> or enhancing spatial resolution .,1331,3 +1332,[[ Joint image filters ]] can leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or << enhancing spatial resolution >> .,1332,3 +1333,<< Joint image filters >> can leverage the [[ guidance image ]] as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution .,1333,3 +1334,Joint image filters can leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for [[ suppressing noise ]] or << enhancing spatial resolution >> .,1334,0 +1335,<< Existing methods >> rely on various kinds of [[ explicit filter construction ]] or hand-designed objective functions .,1335,3 +1336,Existing methods rely on various kinds of [[ explicit filter construction ]] or << hand-designed objective functions >> .,1336,0 +1337,<< Existing methods >> rely on various kinds of explicit filter construction or [[ hand-designed objective functions ]] .,1337,3 +1338,"It is thus difficult to understand , improve , and accelerate << them >> in a [[ coherent framework ]] .",1338,3 +1339,"In this paper , we propose a [[ learning-based approach ]] to construct a << joint filter >> based on Convolution-al Neural Networks .",1339,3 +1340,"In this paper , we propose a << learning-based approach >> to construct a joint filter based on [[ Convolution-al Neural Networks ]] .",1340,3 +1341,"In contrast to existing [[ methods ]] that consider only the guidance image , our << method >> can selectively transfer salient structures that are consistent in both guidance and target images .",1341,5 
+1342,"In contrast to existing << methods >> that consider only the [[ guidance image ]] , our method can selectively transfer salient structures that are consistent in both guidance and target images .",1342,3 +1343,"In contrast to existing methods that consider only the guidance image , our [[ method ]] can selectively << transfer salient structures >> that are consistent in both guidance and target images .",1343,3 +1344,"We show that the [[ model ]] trained on a certain type of data , e.g. , RGB and depth images , generalizes well for other << modalities >> , e.g. , Flash/Non-Flash and RGB/NIR images .",1344,3 +1345,"We show that the << model >> trained on a certain type of [[ data ]] , e.g. , RGB and depth images , generalizes well for other modalities , e.g. , Flash/Non-Flash and RGB/NIR images .",1345,3 +1346,"We show that the model trained on a certain type of << data >> , e.g. , [[ RGB and depth images ]] , generalizes well for other modalities , e.g. , Flash/Non-Flash and RGB/NIR images .",1346,2 +1347,"We show that the model trained on a certain type of data , e.g. , RGB and depth images , generalizes well for other << modalities >> , e.g. , [[ Flash/Non-Flash and RGB/NIR images ]] .",1347,2 +1348,We validate the effectiveness of the proposed [[ joint filter ]] through extensive comparisons with << state-of-the-art methods >> .,1348,5 +1349,"In our current research into the design of << cognitively well-motivated interfaces >> relying primarily on the [[ display of graphical information ]] , we have observed that graphical information alone does not provide sufficient support to users - particularly when situations arise that do not simply conform to the users ' expectations .",1349,3 +1350,"To solve this problem , we are working towards the integration of [[ natural language generation ]] to augment the << interaction >>",1350,3 +1351,A central problem of word sense disambiguation -LRB- WSD -RRB- is the lack of [[ manually sense-tagged data ]] required for << supervised learning >> .,1351,3 +1352,"In this paper , we evaluate an [[ approach ]] to automatically acquire << sense-tagged training data >> from English-Chinese parallel corpora , which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task .",1352,3 +1353,"In this paper , we evaluate an [[ approach ]] to automatically acquire sense-tagged training data from English-Chinese parallel corpora , which are then used for disambiguating the << nouns >> in the SENSEVAL-2 English lexical sample task .",1353,3 +1354,"In this paper , we evaluate an approach to automatically acquire << sense-tagged training data >> from [[ English-Chinese parallel corpora ]] , which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task .",1354,3 +1355,"In this paper , we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora , which are then used for disambiguating the [[ nouns ]] in the << SENSEVAL-2 English lexical sample task >> .",1355,4 +1356,Our investigation reveals that this [[ method ]] of << acquiring sense-tagged data >> is promising .,1356,3 +1357,"On a subset of the most difficult SENSEVAL-2 nouns , the accuracy difference between the two approaches is only 14.0 % , and the difference could narrow further to 6.5 % if we disregard the advantage that << manually sense-tagged data >> have in their [[ sense coverage ]] .",1357,1 +1358,Our analysis also highlights the importance of the issue of [[ domain dependence 
]] in << evaluating WSD programs >> .,1358,1 +1359,"This paper presents an analysis of << temporal anaphora >> in sentences which contain [[ quantification over events ]] , within the framework of Discourse Representation Theory .",1359,4 +1360,"This paper presents an analysis of << temporal anaphora >> in sentences which contain quantification over events , within the framework of [[ Discourse Representation Theory ]] .",1360,3 +1361,"The analysis in -LRB- Partee , 1984 -RRB- of quantified sentences , introduced by a temporal connective , gives the wrong truth-conditions when the [[ temporal connective ]] in the << subordinate clause >> is before or after .",1361,4 +1362,"This << problem >> has been previously analyzed in -LRB- de Swart , 1991 -RRB- as an instance of the proportion problem and given a solution from a [[ Generalized Quantifier approach ]] .",1362,3 +1363,"By using a careful distinction between the different notions of reference time based on -LRB- Kamp and Reyle , 1993 -RRB- , we propose a [[ solution ]] to this << problem >> , within the framework of DRT .",1363,3 +1364,"By using a careful distinction between the different notions of reference time based on -LRB- Kamp and Reyle , 1993 -RRB- , we propose a solution to this << problem >> , within the framework of [[ DRT ]] .",1364,3 +1365,We show some applications of this [[ solution ]] to additional << temporal anaphora phenomena >> in quantified sentences .,1365,3 +1366,We show some applications of this << solution >> to additional temporal anaphora phenomena in [[ quantified sentences ]] .,1366,3 +1367,"In this paper , we explore [[ correlation of dependency relation paths ]] to rank candidate answers in << answer extraction >> .",1367,3 +1368,"Using the [[ correlation measure ]] , we compare << dependency relations >> of a candidate answer and mapped question phrases in sentence with the corresponding relations in question .",1368,3 +1369,"Different from previous studies , we propose an approximate phrase mapping algorithm and incorporate the [[ mapping score ]] into the << correlation measure >> .",1369,4 +1370,The [[ correlations ]] are further incorporated into a << Maximum Entropy-based ranking model >> which estimates path weights from training .,1370,4 +1371,Experimental results show that our [[ method ]] significantly outperforms state-of-the-art << syntactic relation-based methods >> by up to 20 % in MRR .,1371,5 +1372,Experimental results show that our << method >> significantly outperforms state-of-the-art syntactic relation-based methods by up to 20 % in [[ MRR ]] .,1372,6 +1373,Experimental results show that our method significantly outperforms state-of-the-art << syntactic relation-based methods >> by up to 20 % in [[ MRR ]] .,1373,6 +1374,[[ Evaluation ]] is also crucial to assessing competing claims and identifying promising technical << approaches >> .,1374,3 +1375,"Recently considerable progress has been made by a number of groups involved in the DARPA Spoken Language Systems -LRB- SLS -RRB- program to agree on a [[ methodology ]] for comparative << evaluation of SLS systems >> , and that methodology has been put into practice several times in comparative tests of several SLS systems .",1375,6 +1376,"Recently considerable progress has been made by a number of groups involved in the DARPA Spoken Language Systems -LRB- SLS -RRB- program to agree on a methodology for comparative evaluation of SLS systems , and that [[ methodology ]] has been put into practice several times in comparative tests of several 
<< SLS systems >> .",1376,6 +1377,"These [[ evaluations ]] are probably the only << NL evaluations >> other than the series of Message Understanding Conferences -LRB- Sundheim , 1989 ; Sundheim , 1991 -RRB- to have been developed and used by a group of researchers at different sites , although several excellent workshops have been held to study some of these problems -LRB- Palmer et al. , 1989 ; Neal et al. , 1991 -RRB- .",1377,2 +1378,"These [[ evaluations ]] are probably the only NL evaluations other than the series of << Message Understanding Conferences >> -LRB- Sundheim , 1989 ; Sundheim , 1991 -RRB- to have been developed and used by a group of researchers at different sites , although several excellent workshops have been held to study some of these problems -LRB- Palmer et al. , 1989 ; Neal et al. , 1991 -RRB- .",1378,0 +1379,This paper describes a practical [[ `` black-box '' methodology ]] for << automatic evaluation of question-answering NL systems >> .,1379,6 +1380,"While each new application domain will require some development of special resources , the heart of the methodology is domain-independent , and << it >> can be used with either [[ speech or text input ]] .",1380,3 +1381,In this paper we present a novel [[ autonomous pipeline ]] to build a << personalized parametric model -LRB- pose-driven avatar -RRB- >> using a single depth sensor .,1381,3 +1382,In this paper we present a novel << autonomous pipeline >> to build a personalized parametric model -LRB- pose-driven avatar -RRB- using a [[ single depth sensor ]] .,1382,3 +1383,"We fit each incomplete scan using << template fitting techniques >> with a generic [[ human template ]] , and register all scans to every pose using global consistency constraints .",1383,3 +1384,"After registration , these [[ watertight models ]] with different poses are used to train a << parametric model >> in a fashion similar to the SCAPE method .",1384,3 +1385,"After registration , these watertight models with different poses are used to train a << parametric model >> in a fashion similar to the [[ SCAPE method ]] .",1385,3 +1386,"Once the parametric model is built , [[ it ]] can be used as an << anim-itable avatar >> or more interestingly synthesizing dynamic 3D models from single-view depth videos .",1386,3 +1387,"Once the parametric model is built , [[ it ]] can be used as an anim-itable avatar or more interestingly synthesizing << dynamic 3D models >> from single-view depth videos .",1387,3 +1388,"Once the parametric model is built , it can be used as an anim-itable avatar or more interestingly synthesizing << dynamic 3D models >> from [[ single-view depth videos ]] .",1388,3 +1389,Experimental results demonstrate the effectiveness of our [[ system ]] to produce << dynamic models >> .,1389,3 +1390,"In this paper , we propose a novel [[ algorithm ]] to detect/compensate << on-line interference effects >> when integrating Global Navigation Satellite System -LRB- GNSS -RRB- and Inertial Navigation System -LRB- INS -RRB- .",1390,3 +1391,"In this paper , we propose a novel algorithm to detect/compensate on-line interference effects when integrating [[ Global Navigation Satellite System -LRB- GNSS -RRB- ]] and << Inertial Navigation System -LRB- INS -RRB- >> .",1391,0 +1392,The << GNSS/INS coupling >> is usually performed by an [[ Extended Kalman Filter -LRB- EKF -RRB- ]] which yields an accurate and robust localization .,1392,3 +1393,The GNSS/INS coupling is usually performed by an [[ Extended Kalman Filter -LRB- EKF -RRB- ]] which yields an 
<< accurate and robust localization >> .,1393,3 +1394,We first study the impact of the GNSS noise inflation on the << covariance >> of the [[ EKF outputs ]] so as to compute a least square estimate of the potential variance jumps .,1394,1 +1395,We first study the impact of the GNSS noise inflation on the covariance of the EKF outputs so as to compute a [[ least square estimate ]] of the potential << variance jumps >> .,1395,3 +1396,"Then , this [[ estimation ]] is used in a << Bayesian test >> which decides whether interference are corrupting the GNSS signal or not .",1396,3 +1397,The results show the performance of the proposed << approach >> on [[ simulated data ]] .,1397,6 +1398,We propose a [[ unified variational formulation ]] for << joint motion estimation and segmentation >> with explicit occlusion handling .,1398,3 +1399,We propose a << unified variational formulation >> for joint motion estimation and segmentation with [[ explicit occlusion handling ]] .,1399,3 +1400,We use a [[ convex formulation ]] of the << multi-label Potts model >> with label costs and show that the asymmetric map-uniqueness criterion can be integrated into our formulation by means of convex constraints .,1400,3 +1401,We use a convex formulation of the multi-label Potts model with label costs and show that the [[ asymmetric map-uniqueness criterion ]] can be integrated into our << formulation >> by means of convex constraints .,1401,4 +1402,We use a convex formulation of the multi-label Potts model with label costs and show that the asymmetric map-uniqueness criterion can be integrated into our << formulation >> by means of [[ convex constraints ]] .,1402,3 +1403,By using a fast [[ primal-dual algorithm ]] we are able to handle several hundred << motion segments >> .,1403,3 +1404,Two main classes of [[ approaches ]] have been studied to perform << monocular nonrigid 3D reconstruction >> : Template-based methods and Non-rigid Structure from Motion techniques .,1404,3 +1405,Two main classes of << approaches >> have been studied to perform monocular nonrigid 3D reconstruction : [[ Template-based methods ]] and Non-rigid Structure from Motion techniques .,1405,2 +1406,Two main classes of approaches have been studied to perform monocular nonrigid 3D reconstruction : [[ Template-based methods ]] and << Non-rigid Structure from Motion techniques >> .,1406,0 +1407,Two main classes of << approaches >> have been studied to perform monocular nonrigid 3D reconstruction : Template-based methods and [[ Non-rigid Structure from Motion techniques ]] .,1407,2 +1408,"While the first [[ ones ]] have been applied to reconstruct << poorly-textured surfaces >> , they assume the availability of a 3D shape model prior to reconstruction .",1408,3 +1409,"While the first ones have been applied to reconstruct poorly-textured surfaces , << they >> assume the availability of a [[ 3D shape model ]] prior to reconstruction .",1409,3 +1410,"In this paper , we introduce a [[ template-free approach ]] to reconstructing a << poorly-textured , deformable surface >> .",1410,3 +1411,"To this end , we leverage [[ surface isometry ]] and formulate << 3D reconstruction >> as the joint problem of non-rigid image registration and depth estimation .",1411,3 +1412,"To this end , we leverage surface isometry and formulate << 3D reconstruction >> as the [[ joint problem of non-rigid image registration and depth estimation ]] .",1412,3 +1413,Our experiments demonstrate that our [[ approach ]] yields much more accurate 3D reconstructions than << 
state-of-the-art techniques >> .,1413,5 +1414,Our experiments demonstrate that our << approach >> yields much more accurate [[ 3D reconstructions ]] than state-of-the-art techniques .,1414,6 +1415,Our experiments demonstrate that our approach yields much more accurate [[ 3D reconstructions ]] than << state-of-the-art techniques >> .,1415,6 +1416,"Many << computer vision applications >> , such as [[ image classification ]] and video indexing , are usually multi-label classification problems in which an instance can be assigned to more than one category .",1416,2 +1417,"Many computer vision applications , such as [[ image classification ]] and << video indexing >> , are usually multi-label classification problems in which an instance can be assigned to more than one category .",1417,0 +1418,"Many << computer vision applications >> , such as image classification and [[ video indexing ]] , are usually multi-label classification problems in which an instance can be assigned to more than one category .",1418,2 +1419,"Many << computer vision applications >> , such as image classification and video indexing , are usually [[ multi-label classification problems ]] in which an instance can be assigned to more than one category .",1419,3 +1420,"In this paper , we present a novel << multi-label classification approach >> with [[ hypergraph regu-larization ]] that addresses the correlations among different categories .",1420,1 +1421,"Then , an improved [[ SVM like learning system ]] incorporating the hypergraph regularization , called Rank-HLapSVM , is proposed to handle the << multi-label classification problems >> .",1421,3 +1422,"Then , an improved << SVM like learning system >> incorporating the [[ hypergraph regularization ]] , called Rank-HLapSVM , is proposed to handle the multi-label classification problems .",1422,4 +1423,"Then , an improved << SVM like learning system >> incorporating the hypergraph regularization , called [[ Rank-HLapSVM ]] , is proposed to handle the multi-label classification problems .",1423,2 +1424,We find that the corresponding << optimization problem >> can be efficiently solved by the [[ dual coordinate descent method ]] .,1424,3 +1425,Many promising experimental results on the [[ real datasets ]] including ImageCLEF and Me-diaMill demonstrate the effectiveness and efficiency of the proposed << algorithm >> .,1425,6 +1426,Many promising experimental results on the << real datasets >> including [[ ImageCLEF ]] and Me-diaMill demonstrate the effectiveness and efficiency of the proposed algorithm .,1426,2 +1427,Many promising experimental results on the real datasets including [[ ImageCLEF ]] and << Me-diaMill >> demonstrate the effectiveness and efficiency of the proposed algorithm .,1427,0 +1428,Many promising experimental results on the << real datasets >> including ImageCLEF and [[ Me-diaMill ]] demonstrate the effectiveness and efficiency of the proposed algorithm .,1428,2 +1429,"We derive a [[ convex optimization problem ]] for the task of << segmenting sequential data >> , which explicitly treats presence of outliers .",1429,3 +1430,"We derive a convex optimization problem for the task of [[ segmenting sequential data ]] , which explicitly treats presence of << outliers >> .",1430,3 +1431,"We describe two [[ algorithms ]] for solving this << problem >> , one exact and one a top-down novel approach , and we derive a consistency results for the case of two segments and no outliers .",1431,3 +1432,<< Robustness >> to [[ outliers ]] is evaluated on two real-world tasks 
related to speech segmentation .,1432,1 +1433,<< Robustness >> to outliers is evaluated on two [[ real-world tasks ]] related to speech segmentation .,1433,6 +1434,<< Robustness >> to outliers is evaluated on two real-world tasks related to [[ speech segmentation ]] .,1434,6 +1435,Robustness to outliers is evaluated on two << real-world tasks >> related to [[ speech segmentation ]] .,1435,1 +1436,Our [[ algorithms ]] outperform << baseline seg-mentation algorithms >> .,1436,5 +1437,This paper examines the properties of << feature-based partial descriptions >> built on top of [[ Halliday 's systemic networks ]] .,1437,3 +1438,"We show that the crucial operation of [[ consistency checking ]] for such << descriptions >> is NP-complete , and therefore probably intractable , but proceed to develop algorithms which can sometimes alleviate the unpleasant consequences of this intractability .",1438,3 +1439,"We describe [[ Yoopick ]] , a << combinatorial sports prediction market >> that implements a flexible betting language , and in turn facilitates fine-grained probabilistic estimation of outcomes .",1439,2 +1440,"We describe [[ Yoopick ]] , a combinatorial sports prediction market that implements a flexible betting language , and in turn facilitates << fine-grained probabilistic estimation of outcomes >> .",1440,3 +1441,"We describe << Yoopick >> , a combinatorial sports prediction market that implements a [[ flexible betting language ]] , and in turn facilitates fine-grained probabilistic estimation of outcomes .",1441,3 +1442,The goal of this paper is to discover a set of [[ discriminative patches ]] which can serve as a fully << unsupervised mid-level visual representation >> .,1442,3 +1443,We pose this as an << unsupervised discriminative clustering problem >> on a huge dataset of [[ image patches ]] .,1443,3 +1444,"We use an iterative procedure which alternates between clustering and training discriminative classifiers , while applying careful [[ cross-validation ]] at each step to prevent << overfitting >> .",1444,3 +1445,"The paper experimentally demonstrates the effectiveness of [[ discriminative patches ]] as an << unsupervised mid-level visual representation >> , suggesting that it could be used in place of visual words for many tasks .",1445,3 +1446,"The paper experimentally demonstrates the effectiveness of << discriminative patches >> as an unsupervised mid-level visual representation , suggesting that [[ it ]] could be used in place of visual words for many tasks .",1446,3 +1447,"Furthermore , [[ discrim-inative patches ]] can also be used in a << supervised regime >> , such as scene classification , where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset .",1447,3 +1448,"Furthermore , discrim-inative patches can also be used in a << supervised regime >> , such as [[ scene classification ]] , where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset .",1448,2 +1449,"Furthermore , discrim-inative patches can also be used in a supervised regime , such as scene classification , where << they >> demonstrate state-of-the-art performance on the [[ MIT Indoor-67 dataset ]] .",1449,6 +1450,"We investigate the utility of an [[ algorithm ]] for << translation lexicon acquisition -LRB- SABLE -RRB- >> , used previously on a very large corpus to acquire general translation lexicons , when that algorithm is applied to a much smaller corpus to produce candidates for domain-specific translation lexicons .",1450,3 +1451,"We investigate the utility of 
an [[ algorithm ]] for translation lexicon acquisition -LRB- SABLE -RRB- , used previously on a very large corpus to acquire << general translation lexicons >> , when that algorithm is applied to a much smaller corpus to produce candidates for domain-specific translation lexicons .",1451,3 +1452,"We investigate the utility of an algorithm for translation lexicon acquisition -LRB- SABLE -RRB- , used previously on a very large corpus to acquire general translation lexicons , when that [[ algorithm ]] is applied to a much smaller corpus to produce candidates for << domain-specific translation lexicons >> .",1452,3 +1453,This paper describes a [[ computational model ]] of << word segmentation >> and presents simulation results on realistic acquisition .,1453,3 +1454,This paper describes a [[ computational model ]] of word segmentation and presents simulation results on << realistic acquisition >> .,1454,3 +1455,"In particular , we explore the capacity and limitations of << statistical learning mechanisms >> that have recently gained prominence in [[ cognitive psychology ]] and linguistics .",1455,3 +1456,"In particular , we explore the capacity and limitations of statistical learning mechanisms that have recently gained prominence in [[ cognitive psychology ]] and << linguistics >> .",1456,0 +1457,"In particular , we explore the capacity and limitations of << statistical learning mechanisms >> that have recently gained prominence in cognitive psychology and [[ linguistics ]] .",1457,3 +1458,"In the [[ model-based policy search approach ]] to << reinforcement learning -LRB- RL -RRB- >> , policies are found using a model -LRB- or `` simulator '' -RRB- of the Markov decision process .",1458,3 +1459,"In the model-based policy search approach to reinforcement learning -LRB- RL -RRB- , << policies >> are found using a model -LRB- or `` simulator '' -RRB- of the [[ Markov decision process ]] .",1459,3 +1460,"However , for << high-dimensional continuous-state tasks >> , it can be extremely difficult to build an accurate [[ model ]] , and thus often the algorithm returns a policy that works in simulation but not in real-life .",1460,3 +1461,"However , for high-dimensional continuous-state tasks , it can be extremely difficult to build an accurate model , and thus often the [[ algorithm ]] returns a << policy >> that works in simulation but not in real-life .",1461,3 +1462,"The other extreme , << model-free RL >> , tends to require infeasibly large numbers of [[ real-life trials ]] .",1462,3 +1463,"In this paper , we present a << hybrid algorithm >> that requires only an [[ approximate model ]] , and only a small number of real-life trials .",1463,3 +1464,"In this paper , we present a hybrid algorithm that requires only an << approximate model >> , and only a small number of [[ real-life trials ]] .",1464,3 +1465,"The key idea is to successively `` ground '' the << policy evaluations >> using [[ real-life trials ]] , but to rely on the approximate model to suggest local changes .",1465,3 +1466,Empirical results also demonstrate that -- when given only a [[ crude model ]] and a small number of << real-life trials >> -- our algorithm can obtain near-optimal performance in the real system .,1466,0 +1467,Empirical results also demonstrate that -- when given only a [[ crude model ]] and a small number of real-life trials -- our << algorithm >> can obtain near-optimal performance in the real system .,1467,3 +1468,Empirical results also demonstrate that -- when given only a crude model and a small number of [[ 
real-life trials ]] -- our << algorithm >> can obtain near-optimal performance in the real system .,1468,3 +1469,"Although every << natural language system >> needs a [[ computational lexicon ]] , each system puts different amounts and types of information into its lexicon according to its individual needs .",1469,3 +1470,"This paper presents our experience in planning and building [[ COMPLEX ]] , a << computational lexicon >> designed to be a repository of shared lexical information for use by Natural Language Processing -LRB- NLP -RRB- systems .",1470,2 +1471,"This paper presents our experience in planning and building [[ COMPLEX ]] , a computational lexicon designed to be a repository of shared lexical information for use by << Natural Language Processing -LRB- NLP -RRB- systems >> .",1471,3 +1472,"Sentence planning is a set of inter-related but distinct << tasks >> , one of which is [[ sentence scoping ]] , i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences .",1472,4 +1473,"Sentence planning is a set of inter-related but distinct tasks , one of which is sentence scoping , i.e. the choice of [[ syntactic structure ]] for elementary << speech acts >> and the decision of how to combine them into one or more sentences .",1473,3 +1474,"In this paper , we present [[ SPoT ]] , a << sentence planner >> , and a new methodology for automatically training SPoT on the basis of feedback provided by human judges .",1474,2 +1475,"In this paper , we present SPoT , a sentence planner , and a new [[ methodology ]] for automatically training << SPoT >> on the basis of feedback provided by human judges .",1475,3 +1476,"First , a very simple , [[ randomized sentence-plan-generator -LRB- SPG -RRB- ]] generates a potentially large list of possible << sentence plans >> for a given text-plan input .",1476,3 +1477,"First , a very simple , << randomized sentence-plan-generator -LRB- SPG -RRB- >> generates a potentially large list of possible sentence plans for a given [[ text-plan input ]] .",1477,3 +1478,"Second , the [[ sentence-plan-ranker -LRB- SPR -RRB- ]] ranks the list of output << sentence plans >> , and then selects the top-ranked plan .",1478,3 +1479,The << SPR >> uses [[ ranking rules ]] automatically learned from training data .,1479,3 +1480,We show that the trained [[ SPR ]] learns to select a << sentence plan >> whose rating on average is only 5 % worse than the top human-ranked sentence plan .,1480,3 +1481,We show that the trained SPR learns to select a [[ sentence plan ]] whose rating on average is only 5 % worse than the << top human-ranked sentence plan >> .,1481,5 +1482,We discuss [[ maximum a posteriori estimation ]] of << continuous density hidden Markov models -LRB- CDHMM -RRB- >> .,1482,3 +1483,"The classical << MLE reestimation algorithms >> , namely the [[ forward-backward algorithm ]] and the segmental k-means algorithm , are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities .",1483,2 +1484,"The classical << MLE reestimation algorithms >> , namely the forward-backward algorithm and the [[ segmental k-means algorithm ]] , are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities .",1484,2 +1485,"The classical MLE reestimation algorithms , namely the << forward-backward algorithm >> and the [[ segmental k-means algorithm ]] , are expanded and reestimation formulas are given for HMM with Gaussian mixture observation 
densities .",1485,0 +1486,"The classical MLE reestimation algorithms , namely the forward-backward algorithm and the segmental k-means algorithm , are expanded and [[ reestimation formulas ]] are given for << HMM with Gaussian mixture observation densities >> .",1486,3 +1487,"Because of its adaptive nature , [[ Bayesian learning ]] serves as a unified approach for the following four << speech recognition applications >> , namely parameter smoothing , speaker adaptation , speaker group modeling and corrective training .",1487,3 +1488,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four << speech recognition applications >> , namely [[ parameter smoothing ]] , speaker adaptation , speaker group modeling and corrective training .",1488,2 +1489,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four speech recognition applications , namely [[ parameter smoothing ]] , << speaker adaptation >> , speaker group modeling and corrective training .",1489,0 +1490,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four << speech recognition applications >> , namely parameter smoothing , [[ speaker adaptation ]] , speaker group modeling and corrective training .",1490,2 +1491,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four speech recognition applications , namely parameter smoothing , [[ speaker adaptation ]] , << speaker group modeling >> and corrective training .",1491,0 +1492,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four << speech recognition applications >> , namely parameter smoothing , speaker adaptation , [[ speaker group modeling ]] and corrective training .",1492,2 +1493,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four speech recognition applications , namely parameter smoothing , speaker adaptation , [[ speaker group modeling ]] and << corrective training >> .",1493,0 +1494,"Because of its adaptive nature , Bayesian learning serves as a unified approach for the following four << speech recognition applications >> , namely parameter smoothing , speaker adaptation , speaker group modeling and [[ corrective training ]] .",1494,2 +1495,New experimental results on all four [[ applications ]] are provided to show the effectiveness of the << MAP estimation approach >> .,1495,6 +1496,This paper describes a characters-based Chinese collocation system and discusses the advantages of [[ it ]] over a traditional << word-based system >> .,1496,5 +1497,"Since wordbreaks are not conventionally marked in Chinese text corpora , a << character-based collocation system >> has the dual advantages of [[ avoiding pre-processing distortion ]] and directly accessing sub-lexical information .",1497,1 +1498,"Since wordbreaks are not conventionally marked in Chinese text corpora , a character-based collocation system has the dual advantages of [[ avoiding pre-processing distortion ]] and directly << accessing sub-lexical information >> .",1498,0 +1499,"Since wordbreaks are not conventionally marked in Chinese text corpora , a << character-based collocation system >> has the dual advantages of avoiding pre-processing distortion and directly [[ accessing sub-lexical information ]] .",1499,1 +1500,"Furthermore , << word-based collocational properties >> can be obtained through an [[ auxiliary module of automatic 
segmentation ]] .",1500,3 +1501,This paper describes a [[ method ]] for << utterance classification >> that does not require manual transcription of training data .,1501,3 +1502,The [[ method ]] combines domain independent acoustic models with off-the-shelf classifiers to give << utterance classification >> performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription .,1502,3 +1503,The << method >> combines [[ domain independent acoustic models ]] with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription .,1503,4 +1504,The << method >> combines domain independent acoustic models with off-the-shelf [[ classifiers ]] to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription .,1504,4 +1505,The method combines << domain independent acoustic models >> with off-the-shelf [[ classifiers ]] to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription .,1505,0 +1506,The << method >> combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional [[ word-trigram recognition ]] requiring manual transcription .,1506,3 +1507,The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional << word-trigram recognition >> requiring [[ manual transcription ]] .,1507,3 +1508,"In our << method >> , [[ unsupervised training ]] is first used to train a phone n-gram model for a particular domain ; the output of recognition with this model is then passed to a phone-string classifier .",1508,4 +1509,"In our method , [[ unsupervised training ]] is first used to train a << phone n-gram model >> for a particular domain ; the output of recognition with this model is then passed to a phone-string classifier .",1509,3 +1510,"In our method , [[ unsupervised training ]] is first used to train a phone n-gram model for a particular << domain >> ; the output of recognition with this model is then passed to a phone-string classifier .",1510,3 +1511,The [[ classification accuracy ]] of the << method >> is evaluated on three different spoken language system domains .,1511,6 +1512,The classification accuracy of the << method >> is evaluated on three different [[ spoken language system domains ]] .,1512,6 +1513,"The [[ Interval Algebra -LRB- IA -RRB- ]] and a subset of the << Region Connection Calculus -LRB- RCC -RRB- >> , namely RCC-8 , are the dominant Artificial Intelligence approaches for representing and reasoning about qualitative temporal and topological relations respectively .",1513,0 +1514,"The [[ Interval Algebra -LRB- IA -RRB- ]] and a subset of the Region Connection Calculus -LRB- RCC -RRB- , namely RCC-8 , are the dominant << Artificial Intelligence approaches >> for representing and reasoning about qualitative temporal and topological relations respectively .",1514,2 +1515,"The Interval Algebra -LRB- IA -RRB- and a subset of the [[ Region Connection Calculus -LRB- RCC -RRB- ]] , namely RCC-8 , are the dominant << 
Artificial Intelligence approaches >> for representing and reasoning about qualitative temporal and topological relations respectively .",1515,2 +1516,"The Interval Algebra -LRB- IA -RRB- and a subset of the << Region Connection Calculus -LRB- RCC -RRB- >> , namely [[ RCC-8 ]] , are the dominant Artificial Intelligence approaches for representing and reasoning about qualitative temporal and topological relations respectively .",1516,2 +1517,"The Interval Algebra -LRB- IA -RRB- and a subset of the Region Connection Calculus -LRB- RCC -RRB- , namely RCC-8 , are the dominant [[ Artificial Intelligence approaches ]] for << representing and reasoning about qualitative temporal and topological relations >> respectively .",1517,3 +1518,Such << qualitative information >> can be formulated as a [[ Qualitative Constraint Network -LRB- QCN -RRB- ]] .,1518,3 +1519,"In this paper , we focus on the minimal labeling problem -LRB- MLP -RRB- and we propose an [[ algorithm ]] to efficiently derive all the feasible base relations of a << QCN >> .",1519,3 +1520,Our << algorithm >> considers [[ chordal QCNs ]] and a new form of partial consistency which we define as ◆ G-consistency .,1520,4 +1521,Our algorithm considers [[ chordal QCNs ]] and a new form of << partial consistency >> which we define as ◆ G-consistency .,1521,0 +1522,Our << algorithm >> considers chordal QCNs and a new form of [[ partial consistency ]] which we define as ◆ G-consistency .,1522,4 +1523,Our algorithm considers chordal QCNs and a new form of << partial consistency >> which we define as [[ ◆ G-consistency ]] .,1523,2 +1524,Experi-mentations with [[ QCNs of IA and RCC-8 ]] show the importance and efficiency of this new << approach >> .,1524,6 +1525,In this paper a [[ morphological component ]] with a limited capability to automatically interpret -LRB- and generate -RRB- << derived words >> is presented .,1525,3 +1526,"The << system >> combines an extended [[ two-level morphology ]] -LSB- Trost , 1991a ; Trost , 1991b -RSB- with a feature-based word grammar building on a hierarchical lexicon .",1526,3 +1527,"The system combines an extended [[ two-level morphology ]] -LSB- Trost , 1991a ; Trost , 1991b -RSB- with a << feature-based word grammar >> building on a hierarchical lexicon .",1527,0 +1528,"The system combines an extended two-level morphology -LSB- Trost , 1991a ; Trost , 1991b -RSB- with a << feature-based word grammar >> building on a [[ hierarchical lexicon ]] .",1528,3 +1529,<< Polymorphemic stems >> not explicitly stored in the lexicon are given a [[ compositional interpretation ]] .,1529,1 +1530,The << system >> is implemented in [[ CommonLisp ]] and has been tested on examples from German derivation .,1530,3 +1531,The << system >> is implemented in CommonLisp and has been tested on examples from [[ German derivation ]] .,1531,6 +1532,Four problems render vector space model -LRB- VSM -RRB- - based text classification approach ineffective : 1 -RRB- Many words within song lyrics actually contribute little to sentiment ; 2 -RRB- Nouns and verbs used to express sentiment are ambiguous ; 3 -RRB- [[ Negations ]] and << modifiers >> around the sentiment keywords make particular contributions to sentiment ; 4 -RRB- Song lyric is usually very short .,1532,0 +1533,Four problems render vector space model -LRB- VSM -RRB- - based text classification approach ineffective : 1 -RRB- Many words within song lyrics actually contribute little to sentiment ; 2 -RRB- Nouns and verbs used to express sentiment are ambiguous ; 3 -RRB- [[ Negations 
]] and modifiers around the sentiment keywords make particular contributions to << sentiment >> ; 4 -RRB- Song lyric is usually very short .,1533,3 +1534,Four problems render vector space model -LRB- VSM -RRB- - based text classification approach ineffective : 1 -RRB- Many words within song lyrics actually contribute little to sentiment ; 2 -RRB- Nouns and verbs used to express sentiment are ambiguous ; 3 -RRB- Negations and [[ modifiers ]] around the sentiment keywords make particular contributions to << sentiment >> ; 4 -RRB- Song lyric is usually very short .,1534,3 +1535,"To address these problems , the [[ sentiment vector space model -LRB- s-VSM -RRB- ]] is proposed to represent << song lyric document >> .",1535,3 +1536,The preliminary experiments prove that the [[ s-VSM model ]] outperforms the << VSM model >> in the lyric-based song sentiment classification task .,1536,5 +1537,The preliminary experiments prove that the << s-VSM model >> outperforms the VSM model in the [[ lyric-based song sentiment classification task ]] .,1537,6 +1538,The preliminary experiments prove that the s-VSM model outperforms the << VSM model >> in the [[ lyric-based song sentiment classification task ]] .,1538,6 +1539,"We present an efficient [[ algorithm ]] for the << redundancy elimination problem >> : Given an underspecified semantic representation -LRB- USR -RRB- of a scope ambiguity , compute an USR with fewer mutually equivalent readings .",1539,3 +1540,"We present an efficient algorithm for the redundancy elimination problem : Given an [[ underspecified semantic representation -LRB- USR -RRB- ]] of a << scope ambiguity >> , compute an USR with fewer mutually equivalent readings .",1540,3 +1541,"We present an efficient algorithm for the redundancy elimination problem : Given an underspecified semantic representation -LRB- USR -RRB- of a scope ambiguity , compute an << USR >> with fewer mutually [[ equivalent readings ]] .",1541,3 +1542,The [[ algorithm ]] operates on << underspecified chart representations >> which are derived from dominance graphs ; it can be applied to the USRs computed by large-scale grammars .,1542,3 +1543,The algorithm operates on << underspecified chart representations >> which are derived from [[ dominance graphs ]] ; it can be applied to the USRs computed by large-scale grammars .,1543,3 +1544,The algorithm operates on underspecified chart representations which are derived from dominance graphs ; [[ it ]] can be applied to the << USRs >> computed by large-scale grammars .,1544,3 +1545,The algorithm operates on underspecified chart representations which are derived from dominance graphs ; it can be applied to the << USRs >> computed by [[ large-scale grammars ]] .,1545,3 +1546,"We evaluate the algorithm on a corpus , and show that [[ it ]] reduces the << degree of ambiguity >> significantly while taking negligible runtime .",1546,3 +1547,"Currently several << grammatical formalisms >> converge towards being declarative and towards utilizing [[ context-free phrase-structure grammar ]] as a backbone , e.g. LFG and PATR-II .",1547,3 +1548,"Currently several << grammatical formalisms >> converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone , e.g. [[ LFG ]] and PATR-II .",1548,2 +1549,"Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone , e.g. 
[[ LFG ]] and << PATR-II >> .",1549,0 +1550,"Currently several << grammatical formalisms >> converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone , e.g. LFG and [[ PATR-II ]] .",1550,2 +1551,Typically the processing of these << formalisms >> is organized within a [[ chart-parsing framework ]] .,1551,1 +1552,The aim of this paper is to provide a survey and a practical comparison of fundamental [[ rule-invocation strategies ]] within << context-free chart parsing >> .,1552,4 +1553,"The present paper focusses on << terminology structuring >> by [[ lexical methods ]] , which match terms on the basis on their content words , taking morphological variants into account .",1553,3 +1554,Experiments are done on a ` flat ' list of terms obtained from an originally << hierarchically-structured terminology >> : the French version of the [[ US National Library of Medicine MeSH thesaurus ]] .,1554,2 +1555,"We compare the [[ lexically-induced relations ]] with the original << MeSH relations >> : after a quantitative evaluation of their congruence through recall and precision metrics , we perform a qualitative , human analysis ofthe ` new ' relations not present in the MeSH .",1555,5 +1556,"We compare the lexically-induced relations with the original << MeSH relations >> : after a quantitative evaluation of their congruence through [[ recall and precision metrics ]] , we perform a qualitative , human analysis ofthe ` new ' relations not present in the MeSH .",1556,6 +1557,"In order to boost the translation quality of << EBMT >> based on a [[ small-sized bilingual corpus ]] , we use an out-of-domain bilingual corpus and , in addition , the language model of an in-domain monolingual corpus .",1557,3 +1558,"In order to boost the translation quality of << EBMT >> based on a small-sized bilingual corpus , we use an [[ out-of-domain bilingual corpus ]] and , in addition , the language model of an in-domain monolingual corpus .",1558,3 +1559,"In order to boost the translation quality of << EBMT >> based on a small-sized bilingual corpus , we use an out-of-domain bilingual corpus and , in addition , the [[ language model ]] of an in-domain monolingual corpus .",1559,3 +1560,"In order to boost the translation quality of EBMT based on a small-sized bilingual corpus , we use an out-of-domain bilingual corpus and , in addition , the << language model >> of an [[ in-domain monolingual corpus ]] .",1560,3 +1561,The two [[ evaluation measures ]] of the BLEU score and the NIST score demonstrated the effect of using an << out-of-domain bilingual corpus >> and the possibility of using the language model .,1561,3 +1562,The two [[ evaluation measures ]] of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the << language model >> .,1562,6 +1563,The two << evaluation measures >> of the [[ BLEU score ]] and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model .,1563,2 +1564,The two evaluation measures of the [[ BLEU score ]] and the << NIST score >> demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model .,1564,0 +1565,The two << evaluation measures >> of the BLEU score and the [[ NIST score ]] demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model .,1565,2 +1566,"[[ Diagrams ]] are common tools for 
representing << complex concepts >> , relationships and events , often when it would be difficult to portray the same information with natural images .",1566,3 +1567,"[[ Diagrams ]] are common tools for representing complex concepts , << relationships >> and events , often when it would be difficult to portray the same information with natural images .",1567,3 +1568,"[[ Diagrams ]] are common tools for representing complex concepts , relationships and << events >> , often when it would be difficult to portray the same information with natural images .",1568,3 +1569,"Diagrams are common tools for representing [[ complex concepts ]] , << relationships >> and events , often when it would be difficult to portray the same information with natural images .",1569,0 +1570,"Diagrams are common tools for representing complex concepts , [[ relationships ]] and << events >> , often when it would be difficult to portray the same information with natural images .",1570,0 +1571,"[[ Understanding natural images ]] has been extensively studied in << computer vision >> , while diagram understanding has received little attention .",1571,4 +1572,"[[ Understanding natural images ]] has been extensively studied in computer vision , while << diagram understanding >> has received little attention .",1572,5 +1573,"In this paper , we study the problem of diagram interpretation and reasoning , the challenging [[ task ]] of identifying the << structure of a diagram >> and the semantics of its constituents and their relationships .",1573,3 +1574,We introduce [[ Diagram Parse Graphs -LRB- DPG -RRB- ]] as our representation to model the << structure of diagrams >> .,1574,3 +1575,We define [[ syntactic parsing of diagrams ]] as learning to infer << DPGs >> for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering .,1575,3 +1576,We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study [[ semantic interpretation and reasoning of diagrams ]] in the context of << diagram question answering >> .,1576,3 +1577,We devise an [[ LSTM-based method ]] for << syntactic parsing of diagrams >> and introduce a DPG-based attention model for diagram question answering .,1577,3 +1578,We devise an LSTM-based method for syntactic parsing of diagrams and introduce a [[ DPG-based attention model ]] for << diagram question answering >> .,1578,3 +1579,"We compile a new << dataset >> of [[ diagrams ]] with exhaustive annotations of constituents and relationships for over 5,000 diagrams and 15,000 questions and answers .",1579,1 +1580,Our results show the significance of our [[ models ]] for << syntactic parsing and question answering in diagrams >> using DPGs .,1580,3 +1581,Our results show the significance of our << models >> for syntactic parsing and question answering in diagrams using [[ DPGs ]] .,1581,3 +1582,"Previous [[ change detection methods ]] , focusing on << detecting large-scale significant changes >> , can not do this well .",1582,3 +1583,This paper proposes a feasible [[ end-to-end approach ]] to this challenging << problem >> .,1583,3 +1584,"Given two times observations , we formulate << fine-grained change detection >> as a [[ joint optimization problem ]] of three related factors , i.e. 
, normal-aware lighting difference , camera geometry correction flow , and real scene change mask .",1584,3 +1585,"Given two times observations , we formulate fine-grained change detection as a << joint optimization problem >> of three related [[ factors ]] , i.e. , normal-aware lighting difference , camera geometry correction flow , and real scene change mask .",1585,1 +1586,"Given two times observations , we formulate fine-grained change detection as a joint optimization problem of three related << factors >> , i.e. , [[ normal-aware lighting difference ]] , camera geometry correction flow , and real scene change mask .",1586,2 +1587,"Given two times observations , we formulate fine-grained change detection as a joint optimization problem of three related factors , i.e. , [[ normal-aware lighting difference ]] , << camera geometry correction flow >> , and real scene change mask .",1587,0 +1588,"Given two times observations , we formulate fine-grained change detection as a joint optimization problem of three related << factors >> , i.e. , normal-aware lighting difference , [[ camera geometry correction flow ]] , and real scene change mask .",1588,2 +1589,"Given two times observations , we formulate fine-grained change detection as a joint optimization problem of three related factors , i.e. , normal-aware lighting difference , [[ camera geometry correction flow ]] , and << real scene change mask >> .",1589,0 +1590,"Given two times observations , we formulate fine-grained change detection as a joint optimization problem of three related << factors >> , i.e. , normal-aware lighting difference , camera geometry correction flow , and [[ real scene change mask ]] .",1590,2 +1591,We solve the three << factors >> in a [[ coarse-to-fine manner ]] and achieve reliable change decision by rank minimization .,1591,3 +1592,We solve the three factors in a coarse-to-fine manner and achieve reliable << change decision >> by [[ rank minimization ]] .,1592,3 +1593,We build three [[ real-world datasets ]] to benchmark << fine-grained change detection of misaligned scenes >> under varied multiple lighting conditions .,1593,6 +1594,We build three real-world datasets to benchmark << fine-grained change detection of misaligned scenes >> under [[ varied multiple lighting conditions ]] .,1594,1 +1595,Extensive experiments show the superior performance of our [[ approach ]] over state-of-the-art << change detection methods >> and its ability to distinguish real scene changes from false ones caused by lighting variations .,1595,5 +1596,Extensive experiments show the superior performance of our [[ approach ]] over state-of-the-art change detection methods and its ability to distinguish << real scene changes >> from false ones caused by lighting variations .,1596,3 +1597,"[[ Automatic evaluation metrics ]] for << Machine Translation -LRB- MT -RRB- systems >> , such as BLEU or NIST , are now well established .",1597,6 +1598,"<< Automatic evaluation metrics >> for Machine Translation -LRB- MT -RRB- systems , such as [[ BLEU ]] or NIST , are now well established .",1598,2 +1599,"Automatic evaluation metrics for Machine Translation -LRB- MT -RRB- systems , such as [[ BLEU ]] or << NIST >> , are now well established .",1599,0 +1600,"<< Automatic evaluation metrics >> for Machine Translation -LRB- MT -RRB- systems , such as BLEU or [[ NIST ]] , are now well established .",1600,2 +1601,"Yet , [[ they ]] are scarcely used for the << assessment of language pairs >> like English-Chinese or English-Japanese , because of the 
word segmentation problem .",1601,3 +1602,"Yet , they are scarcely used for the assessment of << language pairs >> like [[ English-Chinese ]] or English-Japanese , because of the word segmentation problem .",1602,2 +1603,"Yet , they are scarcely used for the assessment of language pairs like [[ English-Chinese ]] or << English-Japanese >> , because of the word segmentation problem .",1603,0 +1604,"Yet , they are scarcely used for the assessment of << language pairs >> like English-Chinese or [[ English-Japanese ]] , because of the word segmentation problem .",1604,2 +1605,This study establishes the equivalence between the standard use of [[ BLEU ]] in << word n-grams >> and its application at the character level .,1605,3 +1606,This study establishes the equivalence between the standard use of [[ BLEU ]] in word n-grams and its application at the << character level >> .,1606,3 +1607,This study establishes the equivalence between the standard use of BLEU in [[ word n-grams ]] and its application at the << character level >> .,1607,0 +1608,"The use of [[ BLEU ]] at the << character level >> eliminates the word segmentation problem : it makes it possible to directly compare commercial systems outputting unsegmented texts with , for instance , statistical MT systems which usually segment their outputs .",1608,3 +1609,"The use of [[ BLEU ]] at the character level eliminates the << word segmentation problem >> : it makes it possible to directly compare commercial systems outputting unsegmented texts with , for instance , statistical MT systems which usually segment their outputs .",1609,3 +1610,"The use of BLEU at the character level eliminates the word segmentation problem : [[ it ]] makes it possible to directly compare << commercial systems >> outputting unsegmented texts with , for instance , statistical MT systems which usually segment their outputs .",1610,6 +1611,"The use of BLEU at the character level eliminates the word segmentation problem : [[ it ]] makes it possible to directly compare commercial systems outputting unsegmented texts with , for instance , << statistical MT systems >> which usually segment their outputs .",1611,6 +1612,"The use of BLEU at the character level eliminates the word segmentation problem : it makes it possible to directly compare [[ commercial systems ]] outputting unsegmented texts with , for instance , << statistical MT systems >> which usually segment their outputs .",1612,5 +1613,This paper proposes a series of modifications to the [[ left corner parsing algorithm ]] for << context-free grammars >> .,1613,3 +1614,"It is argued that the resulting [[ algorithm ]] is both efficient and flexible and is , therefore , a good choice for the << parser >> used in a natural language interface .",1614,3 +1615,"It is argued that the resulting algorithm is both efficient and flexible and is , therefore , a good choice for the [[ parser ]] used in a << natural language interface >> .",1615,3 +1616,This paper presents a novel << statistical singing voice conversion -LRB- SVC -RRB- technique >> with [[ direct waveform modification ]] based on the spectrum differential that can convert voice timbre of a source singer into that of a target singer without using a vocoder to generate converted singing voice waveforms .,1616,3 +1617,This paper presents a novel statistical singing voice conversion -LRB- SVC -RRB- technique with << direct waveform modification >> based on the [[ spectrum differential ]] that can convert voice timbre of a source singer into that of a target 
singer without using a vocoder to generate converted singing voice waveforms .,1617,3 +1618,This paper presents a novel statistical singing voice conversion -LRB- SVC -RRB- technique with direct waveform modification based on the [[ spectrum differential ]] that can convert << voice timbre >> of a source singer into that of a target singer without using a vocoder to generate converted singing voice waveforms .,1618,3 +1619,This paper presents a novel statistical singing voice conversion -LRB- SVC -RRB- technique with direct waveform modification based on the spectrum differential that can convert voice timbre of a source singer into that of a target singer without using a [[ vocoder ]] to generate << converted singing voice waveforms >> .,1619,3 +1620,[[ SVC ]] makes it possible to convert << singing voice characteristics >> of an arbitrary source singer into those of an arbitrary target singer .,1620,3 +1621,"However , [[ speech quality ]] of the << converted singing voice >> is significantly degraded compared to that of a natural singing voice due to various factors , such as analysis and modeling errors in the vocoder-based framework .",1621,6 +1622,"However , [[ speech quality ]] of the converted singing voice is significantly degraded compared to that of a << natural singing voice >> due to various factors , such as analysis and modeling errors in the vocoder-based framework .",1622,6 +1623,"However , speech quality of the [[ converted singing voice ]] is significantly degraded compared to that of a << natural singing voice >> due to various factors , such as analysis and modeling errors in the vocoder-based framework .",1623,5 +1624,The << differential spectral feature >> is directly estimated using a [[ differential Gaussian mixture model -LRB- GMM -RRB- ]] that is analytically derived from the traditional GMM used as a conversion model in the conventional SVC .,1624,3 +1625,The differential spectral feature is directly estimated using a << differential Gaussian mixture model -LRB- GMM -RRB- >> that is analytically derived from the traditional [[ GMM ]] used as a conversion model in the conventional SVC .,1625,3 +1626,The differential spectral feature is directly estimated using a differential Gaussian mixture model -LRB- GMM -RRB- that is analytically derived from the traditional [[ GMM ]] used as a << conversion model >> in the conventional SVC .,1626,3 +1627,The differential spectral feature is directly estimated using a differential Gaussian mixture model -LRB- GMM -RRB- that is analytically derived from the traditional GMM used as a [[ conversion model ]] in the conventional << SVC >> .,1627,3 +1628,The experimental results demonstrate that the proposed [[ method ]] makes it possible to significantly improve speech quality in the converted singing voice while preserving the conversion accuracy of singer identity compared to the conventional << SVC >> .,1628,5 +1629,The experimental results demonstrate that the proposed << method >> makes it possible to significantly improve [[ speech quality ]] in the converted singing voice while preserving the conversion accuracy of singer identity compared to the conventional SVC .,1629,6 +1630,The experimental results demonstrate that the proposed method makes it possible to significantly improve [[ speech quality ]] in the converted singing voice while preserving the conversion accuracy of singer identity compared to the conventional << SVC >> .,1630,6 +1631,The experimental results demonstrate that the proposed << method >> makes it 
possible to significantly improve speech quality in the converted singing voice while preserving the [[ conversion accuracy of singer identity ]] compared to the conventional SVC .,1631,6 +1632,The experimental results demonstrate that the proposed method makes it possible to significantly improve speech quality in the converted singing voice while preserving the [[ conversion accuracy of singer identity ]] compared to the conventional << SVC >> .,1632,3 +1633,During late-2013 through early-2014 NIST coordinated a special << i-vector challenge >> based on data used in previous [[ NIST Speaker Recognition Evaluations -LRB- SREs -RRB- ]] .,1633,3 +1634,"Unlike evaluations in the SRE series , the i-vector challenge was run entirely online and used [[ fixed-length feature vectors ]] projected into a << low-dimensional space -LRB- i-vectors -RRB- >> rather than audio recordings .",1634,3 +1635,"Unlike evaluations in the SRE series , the << i-vector challenge >> was run entirely online and used fixed-length feature vectors projected into a [[ low-dimensional space -LRB- i-vectors -RRB- ]] rather than audio recordings .",1635,3 +1636,"Unlike evaluations in the SRE series , the i-vector challenge was run entirely online and used fixed-length feature vectors projected into a << low-dimensional space -LRB- i-vectors -RRB- >> rather than [[ audio recordings ]] .",1636,5 +1637,"Compared to the 2012 [[ SRE ]] , the << i-vector challenge >> saw an increase in the number of participants by nearly a factor of two , and a two orders of magnitude increase in the number of systems submitted for evaluation .",1637,5 +1638,Initial results indicate the [[ leading system ]] achieved an approximate 37 % improvement relative to the << baseline system >> .,1638,5 +1639,Theoretical research in the area of << machine translation >> usually involves the search for and creation of an appropriate [[ formalism ]] .,1639,3 +1640,"In this paper , we will introduce the [[ anaphoric component ]] of the << Mimo formalism >> .",1640,4 +1641,"In [[ Mimo ]] , the << translation of anaphoric relations >> is compositional .",1641,3 +1642,"The [[ anaphoric component ]] is used to define << linguistic phenomena >> such as wh-movement , the passive and the binding of reflexives and pronouns mono-lingually .",1642,3 +1643,"The anaphoric component is used to define << linguistic phenomena >> such as [[ wh-movement ]] , the passive and the binding of reflexives and pronouns mono-lingually .",1643,2 +1644,"The anaphoric component is used to define linguistic phenomena such as [[ wh-movement ]] , << the passive and the binding of reflexives and pronouns >> mono-lingually .",1644,0 +1645,"The anaphoric component is used to define << linguistic phenomena >> such as wh-movement , [[ the passive and the binding of reflexives and pronouns ]] mono-lingually .",1645,2 +1646,The [[ efficiency ]] and << quality >> is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD 's .,1646,0 +1647,The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a << database >> of 40000 [[ images of popular music CD 's ]] .,1647,1 +1648,"The [[ scheme ]] builds upon popular techniques of indexing descriptors extracted from local regions , and is robust to << background clutter >> and occlusion .",1648,3 +1649,"The [[ scheme ]] builds upon popular techniques of indexing descriptors extracted from local regions , and is robust to background clutter and << occlusion >> 
.",1649,3 +1650,"The << scheme >> builds upon popular techniques of [[ indexing descriptors ]] extracted from local regions , and is robust to background clutter and occlusion .",1650,3 +1651,"The scheme builds upon popular techniques of << indexing descriptors >> extracted from [[ local regions ]] , and is robust to background clutter and occlusion .",1651,3 +1652,"The scheme builds upon popular techniques of indexing descriptors extracted from local regions , and is robust to [[ background clutter ]] and << occlusion >> .",1652,0 +1653,The << local region descriptors >> are hierarchically quantized in a [[ vocabulary tree ]] .,1653,3 +1654,"The [[ quantization ]] and the << indexing >> are therefore fully integrated , essentially being one and the same .",1654,0 +1655,"The [[ recognition quality ]] is evaluated through retrieval on a database with ground truth , showing the power of the << vocabulary tree approach >> , going as high as 1 million images .",1655,6 +1656,"The << recognition quality >> is evaluated through [[ retrieval ]] on a database with ground truth , showing the power of the vocabulary tree approach , going as high as 1 million images .",1656,6 +1657,"The recognition quality is evaluated through << retrieval >> on a [[ database with ground truth ]] , showing the power of the vocabulary tree approach , going as high as 1 million images .",1657,3 +1658,This paper presents a [[ method ]] for << blind estimation of reverberation times >> in reverberant enclosures .,1658,3 +1659,This paper presents a method for << blind estimation of reverberation times >> in [[ reverberant enclosures ]] .,1659,1 +1660,The proposed << algorithm >> is based on a [[ statistical model of short-term log-energy sequences ]] for echo-free speech .,1660,3 +1661,The proposed algorithm is based on a [[ statistical model of short-term log-energy sequences ]] for << echo-free speech >> .,1661,3 +1662,The [[ method ]] has been successfully applied to << robust automatic speech recognition >> in reverberant environments by model selection .,1662,3 +1663,The method has been successfully applied to << robust automatic speech recognition >> in [[ reverberant environments ]] by model selection .,1663,1 +1664,The << method >> has been successfully applied to robust automatic speech recognition in reverberant environments by [[ model selection ]] .,1664,3 +1665,"For this application , the << reverberation time >> is first estimated from the [[ reverberated speech utterance ]] to be recognized .",1665,3 +1666,The [[ estimation ]] is then used to select the best << acoustic model >> out of a library of models trained in various artificial re-verberant conditions .,1666,3 +1667,The estimation is then used to select the best [[ acoustic model ]] out of a library of << models >> trained in various artificial re-verberant conditions .,1667,4 +1668,The estimation is then used to select the best acoustic model out of a library of << models >> trained in various [[ artificial re-verberant conditions ]] .,1668,1 +1669,[[ Speech recognition ]] experiments in simulated and real reverberant environments show the efficiency of our << approach >> which outperforms standard channel normaliza-tion techniques .,1669,6 +1670,[[ Speech recognition ]] experiments in simulated and real reverberant environments show the efficiency of our approach which outperforms standard << channel normaliza-tion techniques >> .,1670,6 +1671,<< Speech recognition >> experiments in [[ simulated and real reverberant environments ]] show the 
efficiency of our approach which outperforms standard channel normaliza-tion techniques .,1671,1 +1672,Speech recognition experiments in simulated and real reverberant environments show the efficiency of our << approach >> which outperforms standard [[ channel normaliza-tion techniques ]] .,1672,5 +1673,"For one thing , [[ learning methodology ]] applicable in << general domains >> does not readily lend itself in the linguistic domain .",1673,3 +1674,"For one thing , learning methodology applicable in [[ general domains ]] does not readily lend itself in the << linguistic domain >> .",1674,5 +1675,"For another , [[ linguistic representation ]] used by << language processing systems >> is not geared to learning .",1675,3 +1676,"We introduced a new [[ linguistic representation ]] , the Dynamic Hierarchical Phrasal Lexicon -LRB- DHPL -RRB- -LSB- Zernik88 -RSB- , to facilitate << language acquisition >> .",1676,3 +1677,"We introduced a new << linguistic representation >> , the [[ Dynamic Hierarchical Phrasal Lexicon -LRB- DHPL -RRB- ]] -LSB- Zernik88 -RSB- , to facilitate language acquisition .",1677,2 +1678,"We introduced a new linguistic representation , the [[ Dynamic Hierarchical Phrasal Lexicon -LRB- DHPL -RRB- ]] -LSB- Zernik88 -RSB- , to facilitate << language acquisition >> .",1678,3 +1679,"From this , a [[ language learning model ]] was implemented in the program << RINA >> , which enhances its own lexical hierarchy by processing examples in context .",1679,4 +1680,"We identified two tasks : First , how [[ linguistic concepts ]] are acquired from training examples and organized in a << hierarchy >> ; this task was discussed in previous papers -LSB- Zernik87 -RSB- .",1680,4 +1681,"Second , we show in this paper how a [[ lexical hierarchy ]] is used in predicting new << linguistic concepts >> .",1681,3 +1682,This paper presents a novel [[ ensemble learning approach ]] to resolving << German pronouns >> .,1682,3 +1683,Experiments show that this [[ approach ]] is superior to a single << decision-tree classifier >> .,1683,5 +1684,"Furthermore , we present a [[ standalone system ]] that resolves << pronouns >> in unannotated text by using a fully automatic sequence of preprocessing modules that mimics the manual annotation process .",1684,3 +1685,"Furthermore , we present a [[ standalone system ]] that resolves pronouns in << unannotated text >> by using a fully automatic sequence of preprocessing modules that mimics the manual annotation process .",1685,3 +1686,"Furthermore , we present a standalone system that resolves [[ pronouns ]] in << unannotated text >> by using a fully automatic sequence of preprocessing modules that mimics the manual annotation process .",1686,4 +1687,"Furthermore , we present a << standalone system >> that resolves pronouns in unannotated text by using a fully automatic sequence of [[ preprocessing modules ]] that mimics the manual annotation process .",1687,3 +1688,"Furthermore , we present a standalone system that resolves pronouns in unannotated text by using a fully automatic sequence of [[ preprocessing modules ]] that mimics the << manual annotation process >> .",1688,3 +1689,"Although the << system >> performs well within a limited [[ textual domain ]] , further research is needed to make it effective for open-domain question answering and text summarisation .",1689,6 +1690,"Although the system performs well within a limited [[ textual domain ]] , further research is needed to make it effective for << open-domain question answering >> and text 
summarisation .",1690,5 +1691,"Although the system performs well within a limited textual domain , further research is needed to make [[ it ]] effective for << open-domain question answering >> and text summarisation .",1691,3 +1692,"Although the system performs well within a limited textual domain , further research is needed to make [[ it ]] effective for open-domain question answering and << text summarisation >> .",1692,3 +1693,"Although the system performs well within a limited textual domain , further research is needed to make it effective for [[ open-domain question answering ]] and << text summarisation >> .",1693,0 +1694,"In this paper , we compare the performance of a state-of-the-art [[ statistical parser ]] -LRB- Bikel , 2004 -RRB- in << parsing written and spoken language >> and in generating sub-categorization cues from written and spoken language .",1694,3 +1695,"In this paper , we compare the performance of a state-of-the-art [[ statistical parser ]] -LRB- Bikel , 2004 -RRB- in parsing written and spoken language and in << generating sub-categorization cues >> from written and spoken language .",1695,3 +1696,"In this paper , we compare the performance of a state-of-the-art statistical parser -LRB- Bikel , 2004 -RRB- in [[ parsing written and spoken language ]] and in << generating sub-categorization cues >> from written and spoken language .",1696,0 +1697,"In this paper , we compare the performance of a state-of-the-art statistical parser -LRB- Bikel , 2004 -RRB- in parsing written and spoken language and in << generating sub-categorization cues >> from [[ written and spoken language ]] .",1697,3 +1698,"Although [[ Bikel 's parser ]] achieves a higher accuracy for << parsing written language >> , it achieves a higher accuracy when extracting subcategorization cues from spoken language .",1698,3 +1699,"Although << Bikel 's parser >> achieves a higher [[ accuracy ]] for parsing written language , it achieves a higher accuracy when extracting subcategorization cues from spoken language .",1699,6 +1700,"Although Bikel 's parser achieves a higher accuracy for parsing written language , [[ it ]] achieves a higher accuracy when extracting << subcategorization cues >> from spoken language .",1700,3 +1701,"Although Bikel 's parser achieves a higher accuracy for parsing written language , << it >> achieves a higher [[ accuracy ]] when extracting subcategorization cues from spoken language .",1701,6 +1702,"Although Bikel 's parser achieves a higher accuracy for parsing written language , it achieves a higher accuracy when extracting [[ subcategorization cues ]] from << spoken language >> .",1702,4 +1703,Our experiments also show that current [[ technology ]] for << extracting subcategorization frames >> initially designed for written texts works equally well for spoken language .,1703,3 +1704,Our experiments also show that current [[ technology ]] for extracting subcategorization frames initially designed for written texts works equally well for << spoken language >> .,1704,3 +1705,Our experiments also show that current technology for [[ extracting subcategorization frames ]] initially designed for << written texts >> works equally well for spoken language .,1705,3 +1706,Our experiments also show that current technology for extracting subcategorization frames initially designed for [[ written texts ]] works equally well for << spoken language >> .,1706,5 +1707,"Additionally , we explore the utility of [[ punctuation ]] in helping << parsing >> and extraction of subcategorization cues 
.",1707,3 +1708,"Additionally , we explore the utility of [[ punctuation ]] in helping parsing and << extraction of subcategorization cues >> .",1708,3 +1709,Our experiments show that punctuation is of little help in [[ parsing spoken language ]] and << extracting subcategorization cues >> from spoken language .,1709,0 +1710,Our experiments show that punctuation is of little help in parsing spoken language and extracting [[ subcategorization cues ]] from << spoken language >> .,1710,4 +1711,Our experiments show that punctuation is of little help in parsing spoken language and << extracting subcategorization cues >> from [[ spoken language ]] .,1711,3 +1712,This paper proposes an [[ alignment adaptation approach ]] to improve << domain-specific -LRB- in-domain -RRB- word alignment >> .,1712,3 +1713,The basic idea of [[ alignment adaptation ]] is to use out-of-domain corpus to improve << in-domain word alignment >> results .,1713,3 +1714,The basic idea of << alignment adaptation >> is to use [[ out-of-domain corpus ]] to improve in-domain word alignment results .,1714,3 +1715,"In this paper , we first train two << statistical word alignment models >> with the [[ large-scale out-of-domain corpus ]] and the small-scale in-domain corpus respectively , and then interpolate these two models to improve the domain-specific word alignment .",1715,3 +1716,"In this paper , we first train two statistical word alignment models with the [[ large-scale out-of-domain corpus ]] and the << small-scale in-domain corpus >> respectively , and then interpolate these two models to improve the domain-specific word alignment .",1716,0 +1717,"In this paper , we first train two << statistical word alignment models >> with the large-scale out-of-domain corpus and the [[ small-scale in-domain corpus ]] respectively , and then interpolate these two models to improve the domain-specific word alignment .",1717,3 +1718,"In this paper , we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively , and then interpolate these two [[ models ]] to improve the << domain-specific word alignment >> .",1718,3 +1719,"Experimental results show that our [[ approach ]] improves << domain-specific word alignment >> in terms of both precision and recall , achieving a relative error rate reduction of 6.56 % as compared with the state-of-the-art technologies .",1719,3 +1720,"Experimental results show that our [[ approach ]] improves domain-specific word alignment in terms of both precision and recall , achieving a relative error rate reduction of 6.56 % as compared with the << state-of-the-art technologies >> .",1720,5 +1721,"Experimental results show that our << approach >> improves domain-specific word alignment in terms of both [[ precision ]] and recall , achieving a relative error rate reduction of 6.56 % as compared with the state-of-the-art technologies .",1721,6 +1722,"Experimental results show that our approach improves domain-specific word alignment in terms of both [[ precision ]] and << recall >> , achieving a relative error rate reduction of 6.56 % as compared with the state-of-the-art technologies .",1722,0 +1723,"Experimental results show that our << approach >> improves domain-specific word alignment in terms of both precision and [[ recall ]] , achieving a relative error rate reduction of 6.56 % as compared with the state-of-the-art technologies .",1723,6 +1724,"Experimental results show that our << approach >> improves domain-specific 
word alignment in terms of both precision and recall , achieving a [[ relative error rate reduction ]] of 6.56 % as compared with the state-of-the-art technologies .",1724,6 +1725,"Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall , achieving a [[ relative error rate reduction ]] of 6.56 % as compared with the << state-of-the-art technologies >> .",1725,6 +1726,"With performance above 97 % [[ accuracy ]] for newspaper text , << part of speech -LRB- pos -RRB- tagging >> might be considered a solved problem .",1726,6 +1727,"With performance above 97 % accuracy for [[ newspaper text ]] , << part of speech -LRB- pos -RRB- tagging >> might be considered a solved problem .",1727,6 +1728,Previous studies have shown that allowing the [[ parser ]] to resolve << pos tag ambiguity >> does not improve performance .,1728,3 +1729,"However , for << grammar formalisms >> which use more [[ fine-grained grammatical categories ]] , for example tag and ccg , tagging accuracy is much lower .",1729,3 +1730,"However , for grammar formalisms which use more << fine-grained grammatical categories >> , for example [[ tag ]] and ccg , tagging accuracy is much lower .",1730,2 +1731,"However , for grammar formalisms which use more << fine-grained grammatical categories >> , for example tag and [[ ccg ]] , tagging accuracy is much lower .",1731,2 +1732,"However , for << grammar formalisms >> which use more fine-grained grammatical categories , for example tag and ccg , [[ tagging accuracy ]] is much lower .",1732,6 +1733,"In fact , for these << formalisms >> , premature ambiguity resolution makes [[ parsing ]] infeasible .",1733,3 +1734,We describe a [[ multi-tagging approach ]] which maintains a suitable level of lexical category ambiguity for accurate and efficient << ccg parsing >> .,1734,3 +1735,We describe a << multi-tagging approach >> which maintains a suitable level of [[ lexical category ambiguity ]] for accurate and efficient ccg parsing .,1735,1 +1736,We extend this [[ multi-tagging approach ]] to the << pos level >> to overcome errors introduced by automatically assigned pos tags .,1736,3 +1737,"Although pos tagging accuracy seems high , maintaining some [[ pos tag ambiguity ]] in the << language processing pipeline >> results in more accurate ccg supertagging .",1737,1 +1738,"Although pos tagging accuracy seems high , maintaining some [[ pos tag ambiguity ]] in the language processing pipeline results in more accurate << ccg supertagging >> .",1738,3 +1739,We previously presented a [[ framework ]] for << segmentation of complex scenes >> using multiple physical hypotheses for simple image regions .,1739,3 +1740,We previously presented a << framework >> for segmentation of complex scenes using multiple [[ physical hypotheses ]] for simple image regions .,1740,3 +1741,We previously presented a framework for segmentation of complex scenes using multiple [[ physical hypotheses ]] for << simple image regions >> .,1741,3 +1742,A consequence of that [[ framework ]] was a proposal for a new << approach >> to the segmentation of complex scenes into regions corresponding to coherent surfaces rather than merely regions of similar color .,1742,3 +1743,A consequence of that framework was a proposal for a new [[ approach ]] to the << segmentation of complex scenes >> into regions corresponding to coherent surfaces rather than merely regions of similar color .,1743,3 +1744,A consequence of that framework was a proposal for a new approach to the 
segmentation of complex scenes into regions corresponding to [[ coherent surfaces ]] rather than merely << regions of similar color >> .,1744,5 +1745,Herein we present an implementation of this new approach and show example [[ segmentations ]] for << scenes >> containing multi-colored piece-wise uniform objects .,1745,3 +1746,Herein we present an implementation of this new approach and show example segmentations for << scenes >> containing [[ multi-colored piece-wise uniform objects ]] .,1746,1 +1747,Using our [[ approach ]] we are able to intelligently segment scenes with objects of greater complexity than previous << physics-based segmentation algorithms >> .,1747,5 +1748,"[[ SmartKom ]] is a << multimodal dialog system >> that combines speech , gesture , and mimics input and output .",1748,2 +1749,"SmartKom is a << multimodal dialog system >> that combines [[ speech ]] , gesture , and mimics input and output .",1749,3 +1750,"SmartKom is a multimodal dialog system that combines [[ speech ]] , << gesture >> , and mimics input and output .",1750,0 +1751,"SmartKom is a << multimodal dialog system >> that combines speech , [[ gesture ]] , and mimics input and output .",1751,3 +1752,[[ Spontaneous speech understanding ]] is combined with the << video-based recognition of natural gestures >> .,1752,0 +1753,One of the major scientific goals of [[ SmartKom ]] is to design new << computational methods >> for the seamless integration and mutual disambiguation of multimodal input and output on a semantic and pragmatic level .,1753,3 +1754,One of the major scientific goals of SmartKom is to design new [[ computational methods ]] for the seamless << integration and mutual disambiguation of multimodal input and output >> on a semantic and pragmatic level .,1754,3 +1755,One of the major scientific goals of SmartKom is to design new computational methods for the seamless << integration and mutual disambiguation of multimodal input and output >> on a [[ semantic and pragmatic level ]] .,1755,1 +1756,"<< SmartKom >> is based on the [[ situated delegation-oriented dialog paradigm ]] , in which the user delegates a task to a virtual communication assistant , visualized as a lifelike character on a graphical display .",1756,3 +1757,"We describe the SmartKom architecture , the use of an [[ XML-based markup language ]] for << multimodal content >> , and some of the distinguishing features of the first fully operational SmartKom demonstrator .",1757,3 +1758,We present a [[ single-image highlight removal method ]] that incorporates illumination-based constraints into << image in-painting >> .,1758,3 +1759,We present a single-image highlight removal method that incorporates [[ illumination-based constraints ]] into << image in-painting >> .,1759,4 +1760,"[[ Constraints ]] provided by observed pixel colors , highlight color analysis and illumination color uniformity are employed in our << method >> to improve estimation of the underlying diffuse color .",1760,3 +1761,"Constraints provided by observed [[ pixel colors ]] , << highlight color analysis >> and illumination color uniformity are employed in our method to improve estimation of the underlying diffuse color .",1761,0 +1762,"Constraints provided by observed pixel colors , [[ highlight color analysis ]] and << illumination color uniformity >> are employed in our method to improve estimation of the underlying diffuse color .",1762,0 +1763,"Constraints provided by observed pixel colors , highlight color analysis and illumination color uniformity are employed in 
our [[ method ]] to improve << estimation of the underlying diffuse color >> .",1763,3 +1764,The inclusion of these [[ illumination constraints ]] allows for better << recovery of shading and textures >> by inpainting .,1764,3 +1765,The inclusion of these illumination constraints allows for better << recovery of shading and textures >> by [[ inpainting ]] .,1765,3 +1766,"In this paper , we propose a novel [[ method ]] , called local non-negative matrix factorization -LRB- LNMF -RRB- , for learning << spatially localized , parts-based subspace representation of visual patterns >> .",1766,3 +1767,"An [[ objective function ]] is defined to impose << lo-calization constraint >> , in addition to the non-negativity constraint in the standard NMF -LSB- 1 -RSB- .",1767,3 +1768,"An objective function is defined to impose lo-calization constraint , in addition to the [[ non-negativity constraint ]] in the standard << NMF >> -LSB- 1 -RSB- .",1768,4 +1769,An [[ algorithm ]] is presented for the << learning >> of such basis components .,1769,3 +1770,"Experimental results are presented to compare [[ LNMF ]] with the << NMF and PCA methods >> for face representation and recognition , which demonstrates advantages of LNMF .",1770,5 +1771,"Experimental results are presented to compare [[ LNMF ]] with the NMF and PCA methods for << face representation and recognition >> , which demonstrates advantages of LNMF .",1771,3 +1772,"Experimental results are presented to compare LNMF with the [[ NMF and PCA methods ]] for << face representation and recognition >> , which demonstrates advantages of LNMF .",1772,3 +1773,"Experimental results are presented to compare LNMF with the NMF and PCA methods for [[ face representation and recognition ]] , which demonstrates advantages of << LNMF >> .",1773,6 +1774,"Many AI researchers have investigated useful ways of verifying and validating knowledge bases for [[ ontologies ]] and << rules >> , but it is not easy to directly apply them to checking process models .",1774,0 +1775,Other techniques developed for [[ checking and refining planning knowledge ]] tend to focus on << automated plan generation >> rather than helping users author process information .,1775,3 +1776,"In this paper , we propose a [[ complementary approach ]] which helps users author and check << process models >> .",1776,3 +1777,<< It >> builds [[ interdepen-dency models ]] from this analysis and uses them to find errors and propose fixes .,1777,3 +1778,It builds interdepen-dency models from this analysis and uses [[ them ]] to find << errors >> and propose fixes .,1778,3 +1779,It builds interdepen-dency models from this analysis and uses [[ them ]] to find errors and propose << fixes >> .,1779,3 +1780,"In this paper , we describe the research using [[ machine learning techniques ]] to build a << comma checker >> to be integrated in a grammar checker for Basque .",1780,3 +1781,"In this paper , we describe the research using machine learning techniques to build a [[ comma checker ]] to be integrated in a << grammar checker >> for Basque .",1781,4 +1782,"In this paper , we describe the research using machine learning techniques to build a comma checker to be integrated in a [[ grammar checker ]] for << Basque >> .",1782,3 +1783,"After several experiments , and trained with a little corpus of 100,000 words , the << system >> guesses correctly not placing commas with a [[ precision ]] of 96 % and a recall of 98 % .",1783,6 +1784,"After several experiments , and trained with a little corpus of 100,000 words , 
the << system >> guesses correctly not placing commas with a precision of 96 % and a [[ recall ]] of 98 % .",1784,6 +1785,[[ It ]] also gets a precision of 70 % and a recall of 49 % in the task of << placing commas >> .,1785,3 +1786,<< It >> also gets a [[ precision ]] of 70 % and a recall of 49 % in the task of placing commas .,1786,6 +1787,<< It >> also gets a precision of 70 % and a [[ recall ]] of 49 % in the task of placing commas .,1787,6 +1788,The present paper reports on a preparatory research for building a [[ language corpus annotation scenario ]] capturing the << discourse relations >> in Czech .,1788,3 +1789,The present paper reports on a preparatory research for building a language corpus annotation scenario capturing the << discourse relations >> in [[ Czech ]] .,1789,1 +1790,"We primarily focus on the description of the << syntactically motivated relations in discourse >> , basing our findings on the theoretical background of the [[ Prague Dependency Treebank 2.0 ]] and the Penn Discourse Treebank 2 .",1790,3 +1791,"We primarily focus on the description of the syntactically motivated relations in discourse , basing our findings on the theoretical background of the [[ Prague Dependency Treebank 2.0 ]] and the << Penn Discourse Treebank 2 >> .",1791,0 +1792,"We primarily focus on the description of the << syntactically motivated relations in discourse >> , basing our findings on the theoretical background of the Prague Dependency Treebank 2.0 and the [[ Penn Discourse Treebank 2 ]] .",1792,3 +1793,"Our aim is to revisit the present-day [[ syntactico-semantic -LRB- tectogrammatical -RRB- annotation ]] in the << Prague Dependency Treebank >> , extend it for the purposes of a sentence-boundary-crossing representation and eventually to design a new , discourse level of annotation .",1793,4 +1794,"Our aim is to revisit the present-day syntactico-semantic -LRB- tectogrammatical -RRB- annotation in the Prague Dependency Treebank , extend [[ it ]] for the purposes of a << sentence-boundary-crossing representation >> and eventually to design a new , discourse level of annotation .",1794,3 +1795,"Our aim is to revisit the present-day syntactico-semantic -LRB- tectogrammatical -RRB- annotation in the Prague Dependency Treebank , extend [[ it ]] for the purposes of a sentence-boundary-crossing representation and eventually to design a new , << discourse level of annotation >> .",1795,3 +1796,"In this paper , we propose a feasible process of such a transfer , comparing the possibilities the << Praguian dependency-based approach >> offers with the [[ Penn discourse annotation ]] based primarily on the analysis and classification of discourse connectives .",1796,5 +1797,"In this paper , we propose a feasible process of such a transfer , comparing the possibilities the << Praguian dependency-based approach >> offers with the Penn discourse annotation based primarily on the [[ analysis and classification of discourse connectives ]] .",1797,6 +1798,"In this paper , we propose a feasible process of such a transfer , comparing the possibilities the Praguian dependency-based approach offers with the << Penn discourse annotation >> based primarily on the [[ analysis and classification of discourse connectives ]] .",1798,6 +1799,[[ Regression-based techniques ]] have shown promising results for << people counting in crowded scenes >> .,1799,3 +1800,"However , most existing << techniques >> require expensive and laborious [[ data annotation ]] for model training .",1800,3 +1801,"However , most 
existing techniques require expensive and laborious [[ data annotation ]] for << model training >> .",1801,3 +1802,"-LRB- 2 -RRB- Rather than learning from only [[ labelled data ]] , the << abundant unlabelled data >> are exploited .",1802,5 +1803,"All three ideas are implemented in a [[ unified active and semi-supervised regression framework ]] with ability to perform << transfer learning >> , by exploiting the underlying geometric structure of crowd patterns via manifold analysis .",1803,3 +1804,"All three ideas are implemented in a << unified active and semi-supervised regression framework >> with ability to perform transfer learning , by exploiting the underlying [[ geometric structure of crowd patterns ]] via manifold analysis .",1804,3 +1805,"All three ideas are implemented in a unified active and semi-supervised regression framework with ability to perform transfer learning , by exploiting the underlying << geometric structure of crowd patterns >> via [[ manifold analysis ]] .",1805,3 +1806,"[[ Representing images with layers ]] has many important << applications >> , such as video compression , motion analysis , and 3D scene analysis .",1806,3 +1807,"Representing images with layers has many important << applications >> , such as [[ video compression ]] , motion analysis , and 3D scene analysis .",1807,2 +1808,"Representing images with layers has many important applications , such as [[ video compression ]] , << motion analysis >> , and 3D scene analysis .",1808,0 +1809,"Representing images with layers has many important << applications >> , such as video compression , [[ motion analysis ]] , and 3D scene analysis .",1809,2 +1810,"Representing images with layers has many important applications , such as video compression , [[ motion analysis ]] , and << 3D scene analysis >> .",1810,0 +1811,"Representing images with layers has many important << applications >> , such as video compression , motion analysis , and [[ 3D scene analysis ]] .",1811,2 +1812,This paper presents an [[ approach ]] to reliably extracting << layers >> from images by taking advantages of the fact that homographies induced by planar patches in the scene form a low dimensional linear subspace .,1812,3 +1813,This paper presents an approach to reliably extracting [[ layers ]] from << images >> by taking advantages of the fact that homographies induced by planar patches in the scene form a low dimensional linear subspace .,1813,4 +1814,This paper presents an approach to reliably extracting layers from images by taking advantages of the fact that homographies induced by [[ planar patches ]] in the << scene >> form a low dimensional linear subspace .,1814,4 +1815,"[[ Layers ]] in the input << images >> will be mapped in the subspace , where it is proven that they form well-defined clusters and can be reliably identified by a simple mean-shift based clustering algorithm .",1815,4 +1816,"Layers in the input [[ images ]] will be mapped in the subspace , where it is proven that they form well-defined << clusters >> and can be reliably identified by a simple mean-shift based clustering algorithm .",1816,3 +1817,"Layers in the input images will be mapped in the subspace , where it is proven that they form well-defined << clusters >> and can be reliably identified by a simple [[ mean-shift based clustering algorithm ]] .",1817,3 +1818,"Global optimality is achieved since all valid regions are simultaneously taken into account , and << noise >> can be effectively reduced by enforcing the [[ subspace constraint ]] .",1818,3 
+1819,The << construction of causal graphs >> from [[ non-experimental data ]] rests on a set of constraints that the graph structure imposes on all probability distributions compatible with the graph .,1819,3 +1820,The construction of causal graphs from non-experimental data rests on a set of constraints that the graph structure imposes on all [[ probability distributions ]] compatible with the << graph >> .,1820,1 +1821,"These << constraints >> are of two types : [[ conditional inde-pendencies ]] and algebraic constraints , first noted by Verma .",1821,2 +1822,"These << constraints >> are of two types : conditional inde-pendencies and [[ algebraic constraints ]] , first noted by Verma .",1822,2 +1823,"While [[ conditional independencies ]] are well studied and frequently used in << causal induction algorithms >> , Verma constraints are still poorly understood , and rarely applied .",1823,3 +1824,"While << conditional independencies >> are well studied and frequently used in causal induction algorithms , [[ Verma constraints ]] are still poorly understood , and rarely applied .",1824,5 +1825,"In this paper we examine a special subset of Verma constraints which are easy to understand , easy to identify and easy to apply ; they arise from '' [[ dormant independencies ]] , '' namely , << conditional independencies >> that hold in interventional distributions .",1825,0 +1826,"In this paper we examine a special subset of Verma constraints which are easy to understand , easy to identify and easy to apply ; they arise from '' dormant independencies , '' namely , [[ conditional independencies ]] that hold in << interventional distributions >> .",1826,1 +1827,"We give a complete [[ algorithm ]] for determining if a << dormant independence >> between two sets of variables is entailed by the causal graph , such that this independence is identifiable , in other words if it resides in an interventional distribution that can be predicted without resorting to interventions .",1827,3 +1828,"We give a complete algorithm for determining if a dormant independence between two sets of variables is entailed by the causal graph , such that this independence is identifiable , in other words if << it >> resides in an [[ interventional distribution ]] that can be predicted without resorting to interventions .",1828,1 +1829,We further show the usefulness of [[ dormant independencies ]] in << model testing >> and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph .,1829,3 +1830,We further show the usefulness of [[ dormant independencies ]] in model testing and << induction >> by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph .,1830,3 +1831,We further show the usefulness of dormant independencies in [[ model testing ]] and << induction >> by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph .,1831,0 +1832,We further show the usefulness of dormant independencies in model testing and induction by giving an [[ algorithm ]] that uses constraints entailed by dormant independencies to prune << extraneous edges >> from a given causal graph .,1832,3 +1833,We further show the usefulness of dormant independencies in model testing and induction by giving an << algorithm >> that uses [[ constraints ]] entailed by dormant independencies to prune extraneous edges from a given causal 
graph .,1833,3 +1834,We further show the usefulness of dormant independencies in model testing and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune [[ extraneous edges ]] from a given << causal graph >> .,1834,4 +1835,"With the recent popularity of << animated GIFs >> on [[ social media ]] , there is need for ways to index them with rich meta-data .",1835,1 +1836,"To advance research on << animated GIF understanding >> , we collected a new [[ dataset ]] , Tumblr GIF -LRB- TGIF -RRB- , with 100K animated GIFs from Tumblr and 120K natural language descriptions obtained via crowdsourcing .",1836,3 +1837,"To advance research on animated GIF understanding , we collected a new dataset , Tumblr GIF -LRB- TGIF -RRB- , with 100K << animated GIFs >> from Tumblr and 120K [[ natural language descriptions ]] obtained via crowdsourcing .",1837,0 +1838,"To advance research on animated GIF understanding , we collected a new dataset , Tumblr GIF -LRB- TGIF -RRB- , with 100K animated GIFs from Tumblr and 120K << natural language descriptions >> obtained via [[ crowdsourcing ]] .",1838,3 +1839,"The motivation for this work is to develop a testbed for image sequence description systems , where the task is to generate [[ natural language descriptions ]] for << animated GIFs >> or video clips .",1839,3 +1840,"The motivation for this work is to develop a testbed for image sequence description systems , where the task is to generate [[ natural language descriptions ]] for animated GIFs or << video clips >> .",1840,3 +1841,"The motivation for this work is to develop a testbed for image sequence description systems , where the task is to generate natural language descriptions for [[ animated GIFs ]] or << video clips >> .",1841,0 +1842,"To ensure a high quality dataset , we developed a series of novel [[ quality controls ]] to validate << free-form text input >> from crowd-workers .",1842,3 +1843,"We show that there is unambiguous association between [[ visual content ]] and << natural language descriptions >> in our dataset , making it an ideal benchmark for the visual content captioning task .",1843,0 +1844,"We show that there is unambiguous association between [[ visual content ]] and natural language descriptions in our << dataset >> , making it an ideal benchmark for the visual content captioning task .",1844,4 +1845,"We show that there is unambiguous association between visual content and [[ natural language descriptions ]] in our << dataset >> , making it an ideal benchmark for the visual content captioning task .",1845,4 +1846,"We show that there is unambiguous association between visual content and natural language descriptions in our dataset , making [[ it ]] an ideal benchmark for the << visual content captioning task >> .",1846,6 +1847,We perform extensive statistical analyses to compare our [[ dataset ]] to existing << image and video description datasets >> .,1847,5 +1848,"Next , we provide baseline results on the << animated GIF description task >> , using three [[ representative techniques ]] : nearest neighbor , statistical machine translation , and recurrent neural networks .",1848,3 +1849,"Next , we provide baseline results on the animated GIF description task , using three << representative techniques >> : [[ nearest neighbor ]] , statistical machine translation , and recurrent neural networks .",1849,2 +1850,"Next , we provide baseline results on the animated GIF description task , using three representative techniques : [[ nearest neighbor ]] , 
<< statistical machine translation >> , and recurrent neural networks .",1850,0 +1851,"Next , we provide baseline results on the animated GIF description task , using three << representative techniques >> : nearest neighbor , [[ statistical machine translation ]] , and recurrent neural networks .",1851,2 +1852,"Next , we provide baseline results on the animated GIF description task , using three representative techniques : nearest neighbor , [[ statistical machine translation ]] , and << recurrent neural networks >> .",1852,0 +1853,"Next , we provide baseline results on the animated GIF description task , using three << representative techniques >> : nearest neighbor , statistical machine translation , and [[ recurrent neural networks ]] .",1853,2 +1854,"Finally , we show that models fine-tuned from our [[ animated GIF description dataset ]] can be helpful for << automatic movie description >> .",1854,3 +1855,"[[ Systemic grammar ]] has been used for << AI text generation >> work in the past , but the implementations have tended be ad hoc or inefficient .",1855,3 +1856,This paper presents an [[ approach ]] to systemic << text generation >> where AI problem solving techniques are applied directly to an unadulterated systemic grammar .,1856,3 +1857,This paper presents an approach to systemic text generation where [[ AI problem solving techniques ]] are applied directly to an unadulterated << systemic grammar >> .,1857,3 +1858,This approach is made possible by a special relationship between [[ systemic grammar ]] and << problem solving >> : both are organized primarily as choosing from alternatives .,1858,0 +1859,"The result is simple , efficient << text generation >> firmly based in a [[ linguistic theory ]] .",1859,3 +1860,In this paper a novel [[ solution ]] to << automatic and unsupervised word sense induction -LRB- WSI -RRB- >> is introduced .,1860,3 +1861,"[[ It ]] represents an instantiation of the << one sense per collocation observation >> -LRB- Gale et al. 
, 1992 -RRB- .",1861,2 +1862,Like most existing approaches << it >> utilizes [[ clustering of word co-occurrences ]] .,1862,3 +1863,This [[ approach ]] differs from other << approaches >> to WSI in that it enhances the effect of the one sense per collocation observation by using triplets of words instead of pairs .,1863,5 +1864,This [[ approach ]] differs from other approaches to << WSI >> in that it enhances the effect of the one sense per collocation observation by using triplets of words instead of pairs .,1864,3 +1865,This approach differs from other [[ approaches ]] to << WSI >> in that it enhances the effect of the one sense per collocation observation by using triplets of words instead of pairs .,1865,3 +1866,This approach differs from other approaches to WSI in that [[ it ]] enhances the effect of the << one sense per collocation observation >> by using triplets of words instead of pairs .,1866,3 +1867,This approach differs from other approaches to WSI in that << it >> enhances the effect of the one sense per collocation observation by using [[ triplets of words ]] instead of pairs .,1867,3 +1868,The combination with a << two-step clustering process >> using [[ sentence co-occurrences ]] as features allows for accurate results .,1868,3 +1869,"Additionally , a novel and likewise [[ automatic and unsupervised evaluation method ]] inspired by Schutze 's -LRB- 1992 -RRB- idea of evaluation of << word sense disambiguation algorithms >> is employed .",1869,6 +1870,Offering advantages like reproducability and independency of a given biased gold standard it also enables [[ automatic parameter optimization ]] of the << WSI algorithm >> .,1870,3 +1871,This abstract describes a [[ natural language system ]] which deals usefully with << ungrammatical input >> and describes some actual and potential applications of it in computer aided second language learning .,1871,3 +1872,This abstract describes a natural language system which deals usefully with ungrammatical input and describes some actual and potential applications of [[ it ]] in << computer aided second language learning >> .,1872,3 +1873,"However , << this >> is not the only area in which the principles of the [[ system ]] might be used , and the aim in building it was simply to demonstrate the workability of the general mechanism , and provide a framework for assessing developments of it .",1873,3 +1874,"In a motorized vehicle a number of easily << measurable signals >> with [[ frequency components ]] related to the rotational speed of the engine can be found , e.g. , vibrations , electrical system voltage level , and ambient sound .",1874,3 +1875,"In a motorized vehicle a number of easily measurable signals with [[ frequency components ]] related to the << rotational speed of the engine >> can be found , e.g. , vibrations , electrical system voltage level , and ambient sound .",1875,1 +1876,"In a motorized vehicle a number of easily << measurable signals >> with frequency components related to the rotational speed of the engine can be found , e.g. , [[ vibrations ]] , electrical system voltage level , and ambient sound .",1876,2 +1877,"In a motorized vehicle a number of easily measurable signals with frequency components related to the rotational speed of the engine can be found , e.g. 
, [[ vibrations ]] , << electrical system voltage level >> , and ambient sound .",1877,0 +1878,"In a motorized vehicle a number of easily << measurable signals >> with frequency components related to the rotational speed of the engine can be found , e.g. , vibrations , [[ electrical system voltage level ]] , and ambient sound .",1878,2 +1879,"In a motorized vehicle a number of easily measurable signals with frequency components related to the rotational speed of the engine can be found , e.g. , vibrations , [[ electrical system voltage level ]] , and << ambient sound >> .",1879,0 +1880,"In a motorized vehicle a number of easily << measurable signals >> with frequency components related to the rotational speed of the engine can be found , e.g. , vibrations , electrical system voltage level , and [[ ambient sound ]] .",1880,2 +1881,These [[ signals ]] could potentially be used to estimate the << speed and related states of the vehicle >> .,1881,3 +1882,"Unfortunately , such estimates would typically require the relations -LRB- scale factors -RRB- between the [[ frequency components ]] and the << speed >> for different gears to be known .",1882,0 +1883,"Unfortunately , such estimates would typically require the relations -LRB- scale factors -RRB- between the frequency components and the [[ speed ]] for different << gears >> to be known .",1883,1 +1884,"Consequently , in this article we look at the problem of estimating these << gear scale factors >> from [[ training data ]] consisting only of speed measurements and measurements of the signal in question .",1884,3 +1885,The << estimation problem >> is formulated as a [[ maximum likelihood estimation problem ]] and heuristics is used to find initial values for a numerical evaluation of the estimator .,1885,3 +1886,The estimation problem is formulated as a maximum likelihood estimation problem and [[ heuristics ]] is used to find initial values for a << numerical evaluation of the estimator >> .,1886,3 +1887,"Finally , a measurement campaign is conducted and the functionality of the << estimation method >> is verified on [[ real data ]] .",1887,6 +1888,<< LPC based speech coders >> operating at [[ bit rates ]] below 3.0 kbits/sec are usually associated with buzzy or metallic artefacts in the synthetic speech .,1888,1 +1889,LPC based speech coders operating at bit rates below 3.0 kbits/sec are usually associated with [[ buzzy or metallic artefacts ]] in the << synthetic speech >> .,1889,1 +1890,In this paper a new LPC vocoder is presented which splits the << LPC excitation >> into two frequency bands using a [[ variable cutoff frequency ]] .,1890,3 +1891,In this paper a new LPC vocoder is presented which splits the LPC excitation into two << frequency bands >> using a [[ variable cutoff frequency ]] .,1891,3 +1892,"In doing so the [[ coder ]] 's performance during both mixed voicing speech and speech containing acoustic noise is greatly improved , producing << soft natural sounding speech >> .",1892,3 +1893,"In doing so the << coder >> 's performance during both [[ mixed voicing speech ]] and speech containing acoustic noise is greatly improved , producing soft natural sounding speech .",1893,3 +1894,"In doing so the << coder >> 's performance during both mixed voicing speech and [[ speech containing acoustic noise ]] is greatly improved , producing soft natural sounding speech .",1894,3 +1895,The paper also describes new [[ parameter determination ]] and << quantisation techniques >> vital to the operation of this coder at such low bit rates 
.,1895,0 +1896,The paper also describes new [[ parameter determination ]] and quantisation techniques vital to the operation of this << coder >> at such low bit rates .,1896,3 +1897,The paper also describes new parameter determination and [[ quantisation techniques ]] vital to the operation of this << coder >> at such low bit rates .,1897,3 +1898,The paper also describes new parameter determination and quantisation techniques vital to the operation of this << coder >> at such [[ low bit rates ]] .,1898,1 +1899,"We consider a problem of << blind source separation >> from a set of [[ instantaneous linear mixtures ]] , where the mixing matrix is unknown .",1899,3 +1900,"It was discovered recently , that exploiting the << sparsity of sources >> in an appropriate representation according to some [[ signal dictionary ]] , dramatically improves the quality of separation .",1900,3 +1901,"It was discovered recently , that exploiting the << sparsity of sources >> in an appropriate representation according to some signal dictionary , dramatically improves the [[ quality of separation ]] .",1901,6 +1902,"In this work we use the property of << multi scale transforms >> , such as [[ wavelet or wavelet packets ]] , to decompose signals into sets of local features with various degrees of sparsity .",1902,2 +1903,The performance of the << algorithm >> is verified on [[ noise-free and noisy data ]] .,1903,6 +1904,"Experiments with [[ simulated signals ]] , << musical sounds >> and images demonstrate significant improvement of separation quality over previously reported results .",1904,0 +1905,"Experiments with [[ simulated signals ]] , musical sounds and images demonstrate significant improvement of << separation quality >> over previously reported results .",1905,6 +1906,"Experiments with simulated signals , [[ musical sounds ]] and << images >> demonstrate significant improvement of separation quality over previously reported results .",1906,0 +1907,"Experiments with simulated signals , [[ musical sounds ]] and images demonstrate significant improvement of << separation quality >> over previously reported results .",1907,6 +1908,"Experiments with simulated signals , musical sounds and [[ images ]] demonstrate significant improvement of << separation quality >> over previously reported results .",1908,6 +1909,"In this paper , we explore << multilingual feature-level data sharing >> via [[ Deep Neural Network -LRB- DNN -RRB- stacked bottleneck features ]] .",1909,3 +1910,"Given a set of available source languages , we apply [[ language identification ]] to pick the language most similar to the target language , for more efficient use of << multilingual resources >> .",1910,3 +1911,Our experiments with IARPA-Babel languages show that << bottleneck features >> trained on the most similar source language perform better than [[ those ]] trained on all available source languages .,1911,5 +1912,Further analysis suggests that only [[ data ]] similar to the target language is useful for << multilingual training >> .,1912,3 +1913,"This article introduces a [[ bidirectional grammar generation system ]] called feature structure-directed generation , developed for a << dialogue translation system >> .",1913,3 +1914,"This article introduces a << bidirectional grammar generation system >> called [[ feature structure-directed generation ]] , developed for a dialogue translation system .",1914,2 +1915,"This article introduces a bidirectional grammar generation system called [[ feature structure-directed generation ]] , 
developed for a << dialogue translation system >> .",1915,3 +1916,The << system >> utilizes [[ typed feature structures ]] to control the top-down derivation in a declarative way .,1916,3 +1917,The system utilizes [[ typed feature structures ]] to control the << top-down derivation >> in a declarative way .,1917,3 +1918,This << generation system >> also uses [[ disjunctive feature structures ]] to reduce the number of copies of the derivation tree .,1918,3 +1919,This generation system also uses [[ disjunctive feature structures ]] to reduce the number of copies of the << derivation tree >> .,1919,3 +1920,The [[ grammar ]] for this << generator >> is designed to properly generate the speaker 's intention in a telephone dialogue .,1920,3 +1921,The [[ grammar ]] for this generator is designed to properly generate the << speaker 's intention >> in a telephone dialogue .,1921,3 +1922,The grammar for this generator is designed to properly generate the << speaker 's intention >> in a [[ telephone dialogue ]] .,1922,1 +1923,[[ Automatic image annotation ]] is a newly developed and promising technique to provide << semantic image retrieval >> via text descriptions .,1923,3 +1924,Automatic image annotation is a newly developed and promising technique to provide << semantic image retrieval >> via [[ text descriptions ]] .,1924,3 +1925,It concerns a process of << automatically labeling the image contents >> with a pre-defined set of [[ keywords ]] which are exploited to represent the image semantics .,1925,3 +1926,It concerns a process of automatically labeling the image contents with a pre-defined set of [[ keywords ]] which are exploited to represent the << image semantics >> .,1926,3 +1927,A [[ Maximum Entropy Model-based approach ]] to the task of << automatic image annotation >> is proposed in this paper .,1927,3 +1928,"In the phase of training , a basic [[ visual vocabulary ]] consisting of blob-tokens to describe the << image content >> is generated at first ; then the statistical relationship is modeled between the blob-tokens and keywords by a Maximum Entropy Model constructed from the training set of labeled images .",1928,3 +1929,"In the phase of training , a basic << visual vocabulary >> consisting of [[ blob-tokens ]] to describe the image content is generated at first ; then the statistical relationship is modeled between the blob-tokens and keywords by a Maximum Entropy Model constructed from the training set of labeled images .",1929,4 +1930,"In the phase of training , a basic visual vocabulary consisting of blob-tokens to describe the image content is generated at first ; then the << statistical relationship >> is modeled between the blob-tokens and keywords by a [[ Maximum Entropy Model ]] constructed from the training set of labeled images .",1930,3 +1931,"In the phase of annotation , for an unlabeled image , the most likely associated << keywords >> are predicted in terms of the [[ blob-token set ]] extracted from the given image .",1931,3 +1932,We carried out experiments on a << medium-sized image collection >> with about 5000 images from [[ Corel Photo CDs ]] .,1932,3 +1933,"The experimental results demonstrated that the << annotation >> performance of this [[ method ]] outperforms some traditional annotation methods by about 8 % in mean precision , showing a potential of the Maximum Entropy Model in the task of automatic image annotation .",1933,3 +1934,"The experimental results demonstrated that the annotation performance of this [[ method ]] outperforms some traditional << 
annotation methods >> by about 8 % in mean precision , showing a potential of the Maximum Entropy Model in the task of automatic image annotation .",1934,5 +1935,"The experimental results demonstrated that the << annotation >> performance of this method outperforms some traditional [[ annotation methods ]] by about 8 % in mean precision , showing a potential of the Maximum Entropy Model in the task of automatic image annotation .",1935,3 +1936,"The experimental results demonstrated that the annotation performance of this method outperforms some traditional << annotation methods >> by about 8 % in [[ mean precision ]] , showing a potential of the Maximum Entropy Model in the task of automatic image annotation .",1936,6 +1937,"The experimental results demonstrated that the annotation performance of this method outperforms some traditional annotation methods by about 8 % in mean precision , showing a potential of the [[ Maximum Entropy Model ]] in the task of << automatic image annotation >> .",1937,3 +1938,However most of the works found in the literature have focused on identifying and understanding << temporal expressions >> in [[ newswire texts ]] .,1938,1 +1939,"In this paper we report our work on anchoring << temporal expressions >> in a novel genre , [[ emails ]] .",1939,1 +1940,"The highly under-specified nature of these expressions fits well with our << constraint-based representation of time >> , [[ Time Calculus for Natural Language -LRB- TCNL -RRB- ]] .",1940,2 +1941,"We have developed and evaluated a Temporal Expression Anchoror -LRB- TEA -RRB- , and the result shows that [[ it ]] performs significantly better than the << baseline >> , and compares favorably with some of the closely related work .",1941,5 +1942,"We address the problem of populating [[ object category detection datasets ]] with dense , << per-object 3D reconstructions >> , bootstrapped from class labels , ground truth figure-ground segmentations and a small set of keypoint annotations .",1942,3 +1943,"We address the problem of populating object category detection datasets with dense , << per-object 3D reconstructions >> , bootstrapped from class labels , [[ ground truth figure-ground segmentations ]] and a small set of keypoint annotations .",1943,3 +1944,"We address the problem of populating object category detection datasets with dense , per-object 3D reconstructions , bootstrapped from class labels , [[ ground truth figure-ground segmentations ]] and a small set of << keypoint annotations >> .",1944,0 +1945,"We address the problem of populating object category detection datasets with dense , << per-object 3D reconstructions >> , bootstrapped from class labels , ground truth figure-ground segmentations and a small set of [[ keypoint annotations ]] .",1945,3 +1946,"Our proposed [[ algorithm ]] first estimates << camera viewpoint >> using rigid structure-from-motion , then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions .",1946,3 +1947,"Our proposed [[ algorithm ]] first estimates camera viewpoint using rigid structure-from-motion , then reconstructs << object shapes >> by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions .",1947,3 +1948,"Our proposed << algorithm >> first estimates camera viewpoint using [[ rigid structure-from-motion ]] , then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions .",1948,3 +1949,"Our 
proposed algorithm first estimates camera viewpoint using rigid structure-from-motion , then reconstructs << object shapes >> by optimizing over [[ visual hull proposals ]] guided by loose within-class shape similarity assumptions .",1949,3 +1950,"Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion , then reconstructs object shapes by optimizing over << visual hull proposals >> guided by [[ loose within-class shape similarity assumptions ]] .",1950,3 +1951,"We show that our [[ method ]] is able to produce convincing << per-object 3D reconstructions >> on one of the most challenging existing object-category detection datasets , PASCAL VOC .",1951,3 +1952,"We show that our << method >> is able to produce convincing per-object 3D reconstructions on one of the most challenging existing [[ object-category detection datasets ]] , PASCAL VOC .",1952,3 +1953,"We show that our method is able to produce convincing per-object 3D reconstructions on one of the most challenging existing << object-category detection datasets >> , [[ PASCAL VOC ]] .",1953,2 +1954,[[ Probabilistic models ]] have been previously shown to be efficient and effective for << modeling and recognition of human motion >> .,1954,3 +1955,In particular we focus on methods which represent the << human motion model >> as a [[ triangulated graph ]] .,1955,3 +1956,Previous approaches learned << models >> based just on [[ positions ]] and velocities of the body parts while ignoring their appearance .,1956,3 +1957,Previous approaches learned models based just on [[ positions ]] and << velocities >> of the body parts while ignoring their appearance .,1957,0 +1958,Previous approaches learned << models >> based just on positions and [[ velocities ]] of the body parts while ignoring their appearance .,1958,3 +1959,"Moreover , a [[ heuristic approach ]] was commonly used to obtain << translation invariance >> .",1959,3 +1960,In this paper we suggest an improved [[ approach ]] for learning such << models >> and using them for human motion recognition .,1960,3 +1961,In this paper we suggest an improved approach for learning such models and using [[ them ]] for << human motion recognition >> .,1961,3 +1962,"The suggested [[ approach ]] combines multiple cues , i.e. , positions , velocities and appearance into both the << learning and detection phases >> .",1962,3 +1963,"The suggested approach combines multiple << cues >> , i.e. , [[ positions ]] , velocities and appearance into both the learning and detection phases .",1963,2 +1964,"The suggested approach combines multiple cues , i.e. , [[ positions ]] , << velocities >> and appearance into both the learning and detection phases .",1964,0 +1965,"The suggested approach combines multiple << cues >> , i.e. , positions , [[ velocities ]] and appearance into both the learning and detection phases .",1965,2 +1966,"The suggested approach combines multiple cues , i.e. , positions , [[ velocities ]] and << appearance >> into both the learning and detection phases .",1966,0 +1967,"The suggested approach combines multiple << cues >> , i.e. 
, positions , velocities and [[ appearance ]] into both the learning and detection phases .",1967,2 +1968,"Furthermore , we introduce [[ global variables ]] in the << model >> , which can represent global properties such as translation , scale or viewpoint .",1968,3 +1969,"Furthermore , we introduce [[ global variables ]] in the model , which can represent << global properties >> such as translation , scale or viewpoint .",1969,3 +1970,"Furthermore , we introduce global variables in the model , which can represent << global properties >> such as [[ translation ]] , scale or viewpoint .",1970,2 +1971,"Furthermore , we introduce global variables in the model , which can represent global properties such as [[ translation ]] , << scale >> or viewpoint .",1971,0 +1972,"Furthermore , we introduce global variables in the model , which can represent << global properties >> such as translation , [[ scale ]] or viewpoint .",1972,2 +1973,"Furthermore , we introduce global variables in the model , which can represent global properties such as translation , [[ scale ]] or << viewpoint >> .",1973,0 +1974,"Furthermore , we introduce global variables in the model , which can represent << global properties >> such as translation , scale or [[ viewpoint ]] .",1974,2 +1975,The << model >> is learned in an [[ unsupervised manner ]] from un-labelled data .,1975,3 +1976,The model is learned in an << unsupervised manner >> from [[ un-labelled data ]] .,1976,3 +1977,"We show that the suggested << hybrid proba-bilistic model >> -LRB- which combines [[ global variables ]] , like translation , with local variables , like relative positions and appearances of body parts -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1977,3 +1978,"We show that the suggested hybrid proba-bilistic model -LRB- which combines << global variables >> , like [[ translation ]] , with local variables , like relative positions and appearances of body parts -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1978,2 +1979,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with << local variables >> , like [[ relative positions ]] and appearances of body parts -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1979,2 +1980,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with local variables , like [[ relative positions ]] and << appearances of body parts >> -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1980,0 +1981,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with << local variables >> , like relative positions and [[ appearances of body parts ]] -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1981,2 +1982,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with local variables , like relative positions and appearances of body 
parts -RRB- , leads to : -LRB- i -RRB- [[ faster convergence ]] of << learning phase >> , -LRB- ii -RRB- robustness to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1982,1 +1983,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with local variables , like relative positions and appearances of body parts -RRB- , leads to : -LRB- i -RRB- [[ faster convergence ]] of learning phase , -LRB- ii -RRB- << robustness >> to occlusions , and , -LRB- iii -RRB- higher recognition rate .",1983,0 +1984,"We show that the suggested hybrid proba-bilistic model -LRB- which combines global variables , like translation , with local variables , like relative positions and appearances of body parts -RRB- , leads to : -LRB- i -RRB- faster convergence of learning phase , -LRB- ii -RRB- [[ robustness ]] to occlusions , and , -LRB- iii -RRB- higher << recognition rate >> .",1984,0 +1985,[[ Factor analysis ]] and << principal components analysis >> can be used to model linear relationships between observed variables and linearly map high-dimensional data to a lower-dimensional hidden space .,1985,0 +1986,[[ Factor analysis ]] and principal components analysis can be used to model << linear relationships between observed variables >> and linearly map high-dimensional data to a lower-dimensional hidden space .,1986,3 +1987,Factor analysis and [[ principal components analysis ]] can be used to model << linear relationships between observed variables >> and linearly map high-dimensional data to a lower-dimensional hidden space .,1987,3 +1988,"We describe a [[ nonlinear generalization of factor analysis ]] , called `` product analy-sis '' , that models the << observed variables >> as a linear combination of products of normally distributed hidden variables .",1988,3 +1989,"We describe a << nonlinear generalization of factor analysis >> , called [[ `` product analy-sis '' ]] , that models the observed variables as a linear combination of products of normally distributed hidden variables .",1989,2 +1990,"We describe a << nonlinear generalization of factor analysis >> , called `` product analy-sis '' , that models the observed variables as a [[ linear combination of products of normally distributed hidden variables ]] .",1990,3 +1991,"Just as << factor analysis >> can be viewed as [[ unsupervised linear regression ]] on unobserved , normally distributed hidden variables , product analysis can be viewed as unsupervised linear regression on products of unobserved , normally distributed hidden variables .",1991,3 +1992,"Just as factor analysis can be viewed as unsupervised linear regression on unobserved , normally distributed hidden variables , << product analysis >> can be viewed as [[ unsupervised linear regression ]] on products of unobserved , normally distributed hidden variables .",1992,3 +1993,"The mapping between the data and the hidden space is nonlinear , so we use an [[ approximate variational technique ]] for << inference >> and learning .",1993,3 +1994,"The mapping between the data and the hidden space is nonlinear , so we use an [[ approximate variational technique ]] for inference and << learning >> .",1994,3 +1995,"The mapping between the data and the hidden space is nonlinear , so we use an approximate variational technique for [[ inference ]] and << learning >> .",1995,0 +1996,"Since [[ product analysis ]] is a << generalization of factor analysis >> , product analysis always finds a higher data likelihood than factor analysis 
.",1996,2 +1997,"Since product analysis is a generalization of factor analysis , [[ product analysis ]] always finds a higher data likelihood than << factor analysis >> .",1997,5 +1998,We give results on [[ pattern recognition ]] and << illumination-invariant image clustering >> .,1998,0 +1999,This paper describes a [[ domain independent strategy ]] for the << multimedia articulation of answers >> elicited by a natural language interface to database query applications .,1999,3 +2000,This paper describes a domain independent strategy for the [[ multimedia articulation of answers ]] elicited by a << natural language interface >> to database query applications .,2000,3 +2001,This paper describes a domain independent strategy for the multimedia articulation of answers elicited by a [[ natural language interface ]] to << database query applications >> .,2001,3 +2002,<< Multimedia answers >> include [[ videodisc images ]] and heuristically-produced complete sentences in text or text-to-speech form .,2002,4 +2003,[[ Deictic reference ]] and << feedback >> about the discourse are enabled .,2003,0 +2004,[[ Deictic reference ]] and feedback about the << discourse >> are enabled .,2004,1 +2005,Deictic reference and [[ feedback ]] about the << discourse >> are enabled .,2005,1 +2006,The [[ LOGON MT demonstrator ]] assembles independently valuable << general-purpose NLP components >> into a machine translation pipeline that capitalizes on output quality .,2006,3 +2007,The LOGON MT demonstrator assembles independently valuable [[ general-purpose NLP components ]] into a << machine translation pipeline >> that capitalizes on output quality .,2007,4 +2008,"The << demonstrator >> embodies an interesting combination of [[ hand-built , symbolic resources ]] and stochastic processes .",2008,4 +2009,"The demonstrator embodies an interesting combination of [[ hand-built , symbolic resources ]] and << stochastic processes >> .",2009,0 +2010,"The << demonstrator >> embodies an interesting combination of hand-built , symbolic resources and [[ stochastic processes ]] .",2010,4 +2011,"We describe both the [[ syntax ]] and << semantics >> of a general propositional language of context , and give a Hilbert style proof system for this language .",2011,0 +2012,"We describe both the [[ syntax ]] and semantics of a general << propositional language of context >> , and give a Hilbert style proof system for this language .",2012,1 +2013,"We describe both the syntax and [[ semantics ]] of a general << propositional language of context >> , and give a Hilbert style proof system for this language .",2013,1 +2014,"We describe both the syntax and semantics of a general propositional language of context , and give a [[ Hilbert style proof system ]] for this << language >> .",2014,3 +2015,A << propositional logic of context >> extends [[ classical propositional logic ]] in two ways .,2015,3 +2016,[[ Image matching ]] is a fundamental problem in << Computer Vision >> .,2016,2 +2017,"In the context of << feature-based matching >> , [[ SIFT ]] and its variants have long excelled in a wide array of applications .",2017,3 +2018,"However , for ultra-wide baselines , as in the case of << aerial images >> captured under [[ large camera rotations ]] , the appearance variation goes beyond the reach of SIFT and RANSAC .",2018,1 +2019,"However , for ultra-wide baselines , as in the case of aerial images captured under large camera rotations , the appearance variation goes beyond the reach of [[ SIFT ]] and << RANSAC >> .",2019,0 +2020,"In 
this paper we propose a data-driven , deep learning-based approach that sidesteps local correspondence by framing the << problem >> as a [[ classification task ]] .",2020,3 +2021,"We train our << models >> on a [[ dataset of urban aerial imagery ]] consisting of ` same ' and ` different ' pairs , collected for this purpose , and characterize the problem via a human study with annotations from Amazon Mechanical Turk .",2021,3 +2022,"We train our models on a dataset of urban aerial imagery consisting of ` same ' and ` different ' pairs , collected for this purpose , and characterize the << problem >> via a [[ human study ]] with annotations from Amazon Mechanical Turk .",2022,3 +2023,"We train our models on a dataset of urban aerial imagery consisting of ` same ' and ` different ' pairs , collected for this purpose , and characterize the problem via a << human study >> with [[ annotations from Amazon Mechanical Turk ]] .",2023,3 +2024,We demonstrate that our [[ models ]] outperform the << state-of-the-art >> on ultra-wide baseline matching and approach human accuracy .,2024,5 +2025,We demonstrate that our [[ models ]] outperform the state-of-the-art on ultra-wide baseline matching and approach << human accuracy >> .,2025,5 +2026,We demonstrate that our << models >> outperform the state-of-the-art on [[ ultra-wide baseline matching ]] and approach human accuracy .,2026,6 +2027,We demonstrate that our models outperform the << state-of-the-art >> on [[ ultra-wide baseline matching ]] and approach human accuracy .,2027,6 +2028,"We argue that a more sophisticated and [[ fine-grained annotation ]] in the tree-bank would have very positve effects on << stochastic parsers >> trained on the tree-bank and on grammars induced from the treebank , and it would make the treebank more valuable as a source of data for theoretical linguistic investigations .",2028,3 +2029,"We argue that a more sophisticated and fine-grained annotation in the tree-bank would have very positve effects on << stochastic parsers >> trained on the [[ tree-bank ]] and on grammars induced from the treebank , and it would make the treebank more valuable as a source of data for theoretical linguistic investigations .",2029,3 +2030,"We argue that a more sophisticated and fine-grained annotation in the tree-bank would have very positve effects on stochastic parsers trained on the tree-bank and on << grammars >> induced from the [[ treebank ]] , and it would make the treebank more valuable as a source of data for theoretical linguistic investigations .",2030,3 +2031,"We argue that a more sophisticated and fine-grained annotation in the tree-bank would have very positve effects on stochastic parsers trained on the tree-bank and on grammars induced from the treebank , and it would make the [[ treebank ]] more valuable as a source of data for << theoretical linguistic investigations >> .",2031,3 +2032,"The information gained from corpus research and the analyses that are proposed are realized in the framework of [[ SILVA ]] , a << parsing and extraction tool >> for German text corpora .",2032,2 +2033,"The information gained from corpus research and the analyses that are proposed are realized in the framework of << SILVA >> , a parsing and extraction tool for [[ German text corpora ]] .",2033,3 +2034,"While [[ paraphrasing ]] is critical both for << interpretation and generation of natural language >> , current systems use manual or semi-automatic methods to collect paraphrases .",2034,3 +2035,"While paraphrasing is critical both for 
interpretation and generation of natural language , current [[ systems ]] use manual or semi-automatic methods to collect << paraphrases >> .",2035,3 +2036,"While paraphrasing is critical both for interpretation and generation of natural language , current << systems >> use [[ manual or semi-automatic methods ]] to collect paraphrases .",2036,3 +2037,We present an [[ unsupervised learning algorithm ]] for << identification of paraphrases >> from a corpus of multiple English translations of the same source text .,2037,3 +2038,We present an unsupervised learning algorithm for << identification of paraphrases >> from a [[ corpus of multiple English translations ]] of the same source text .,2038,3 +2039,Our [[ approach ]] yields << phrasal and single word lexical paraphrases >> as well as syntactic paraphrases .,2039,3 +2040,Our [[ approach ]] yields phrasal and single word lexical paraphrases as well as << syntactic paraphrases >> .,2040,3 +2041,Our approach yields [[ phrasal and single word lexical paraphrases ]] as well as << syntactic paraphrases >> .,2041,0 +2042,An efficient [[ bit-vector-based CKY-style parser ]] for << context-free parsing >> is presented .,2042,3 +2043,The [[ parser ]] computes a compact << parse forest representation >> of the complete set of possible analyses for large treebank grammars and long input sentences .,2043,3 +2044,The parser computes a compact [[ parse forest representation ]] of the complete set of possible analyses for << large treebank grammars >> and long input sentences .,2044,3 +2045,The << parser >> uses [[ bit-vector operations ]] to parallelise the basic parsing operations .,2045,3 +2046,"In this paper , we propose a [[ partially-blurred-image classification and analysis framework ]] for << automatically detecting images >> containing blurred regions and recognizing the blur types for those regions without needing to perform blur kernel estimation and image deblurring .",2046,3 +2047,"In this paper , we propose a partially-blurred-image classification and analysis framework for automatically detecting << images >> containing [[ blurred regions ]] and recognizing the blur types for those regions without needing to perform blur kernel estimation and image deblurring .",2047,4 +2048,"In this paper , we propose a partially-blurred-image classification and analysis framework for automatically detecting images containing blurred regions and recognizing the blur types for those regions without needing to perform [[ blur kernel estimation ]] and << image deblurring >> .",2048,0 +2049,"We develop several << blur features >> modeled by [[ image color ]] , gradient , and spectrum information , and use feature parameter training to robustly classify blurred images .",2049,3 +2050,"We develop several blur features modeled by [[ image color ]] , << gradient >> , and spectrum information , and use feature parameter training to robustly classify blurred images .",2050,0 +2051,"We develop several << blur features >> modeled by image color , [[ gradient ]] , and spectrum information , and use feature parameter training to robustly classify blurred images .",2051,3 +2052,"We develop several blur features modeled by image color , [[ gradient ]] , and << spectrum information >> , and use feature parameter training to robustly classify blurred images .",2052,0 +2053,"We develop several << blur features >> modeled by image color , gradient , and [[ spectrum information ]] , and use feature parameter training to robustly classify blurred images .",2053,3 +2054,"We 
develop several blur features modeled by image color , gradient , and spectrum information , and use [[ feature parameter training ]] to robustly classify << blurred images >> .",2054,3 +2055,"Our << blur detection >> is based on [[ image patches ]] , making region-wise training and classification in one image efficient .",2055,3 +2056,"Our << blur detection >> is based on image patches , making [[ region-wise training and classification ]] in one image efficient .",2056,3 +2057,"Extensive experiments show that our [[ method ]] works satisfactorily on challenging image data , which establishes a technical foundation for solving several << computer vision problems >> , such as motion analysis and image restoration , using the blur information .",2057,3 +2058,"Extensive experiments show that our << method >> works satisfactorily on challenging [[ image data ]] , which establishes a technical foundation for solving several computer vision problems , such as motion analysis and image restoration , using the blur information .",2058,6 +2059,"Extensive experiments show that our method works satisfactorily on challenging image data , which establishes a technical foundation for solving several << computer vision problems >> , such as [[ motion analysis ]] and image restoration , using the blur information .",2059,2 +2060,"Extensive experiments show that our method works satisfactorily on challenging image data , which establishes a technical foundation for solving several computer vision problems , such as [[ motion analysis ]] and << image restoration >> , using the blur information .",2060,0 +2061,"Extensive experiments show that our method works satisfactorily on challenging image data , which establishes a technical foundation for solving several << computer vision problems >> , such as motion analysis and [[ image restoration ]] , using the blur information .",2061,2 +2062,"Extensive experiments show that our << method >> works satisfactorily on challenging image data , which establishes a technical foundation for solving several computer vision problems , such as motion analysis and image restoration , using the [[ blur information ]] .",2062,3 +2063,"We have recently reported on two new << word-sense disambiguation systems >> , [[ one ]] trained on bilingual material -LRB- the Canadian Hansards -RRB- and the other trained on monolingual material -LRB- Roget 's Thesaurus and Grolier 's Encyclopedia -RRB- .",2063,2 +2064,"We have recently reported on two new word-sense disambiguation systems , [[ one ]] trained on bilingual material -LRB- the Canadian Hansards -RRB- and the << other >> trained on monolingual material -LRB- Roget 's Thesaurus and Grolier 's Encyclopedia -RRB- .",2064,0 +2065,"We have recently reported on two new word-sense disambiguation systems , << one >> trained on [[ bilingual material ]] -LRB- the Canadian Hansards -RRB- and the other trained on monolingual material -LRB- Roget 's Thesaurus and Grolier 's Encyclopedia -RRB- .",2065,6 +2066,"We have recently reported on two new << word-sense disambiguation systems >> , one trained on bilingual material -LRB- the Canadian Hansards -RRB- and the [[ other ]] trained on monolingual material -LRB- Roget 's Thesaurus and Grolier 's Encyclopedia -RRB- .",2066,2 +2067,"We have recently reported on two new word-sense disambiguation systems , one trained on bilingual material -LRB- the Canadian Hansards -RRB- and the << other >> trained on [[ monolingual material ]] -LRB- Roget 's Thesaurus and Grolier 's Encyclopedia -RRB- 
.",2067,3 +2068,"We have recently reported on two new word-sense disambiguation systems , one trained on bilingual material -LRB- the Canadian Hansards -RRB- and the other trained on << monolingual material >> -LRB- [[ Roget 's Thesaurus ]] and Grolier 's Encyclopedia -RRB- .",2068,2 +2069,"We have recently reported on two new word-sense disambiguation systems , one trained on bilingual material -LRB- the Canadian Hansards -RRB- and the other trained on monolingual material -LRB- [[ Roget 's Thesaurus ]] and << Grolier 's Encyclopedia >> -RRB- .",2069,0 +2070,"We have recently reported on two new word-sense disambiguation systems , one trained on bilingual material -LRB- the Canadian Hansards -RRB- and the other trained on << monolingual material >> -LRB- Roget 's Thesaurus and [[ Grolier 's Encyclopedia ]] -RRB- .",2070,2 +2071,"In addition , [[ it ]] could also be used to help evaluate << disambiguation algorithms >> that did not make use of the discourse constraint .",2071,6 +2072,We study and compare two novel [[ embedding methods ]] for << segmenting feature points of piece-wise planar structures >> from two -LRB- uncalibrated -RRB- perspective images .,2072,3 +2073,"We show that a set of different << homographies >> can be embedded in different ways to a [[ higher-dimensional real or complex space ]] , so that each homography corresponds to either a complex bilinear form or a real quadratic form .",2073,1 +2074,"We show that a set of different homographies can be embedded in different ways to a higher-dimensional real or complex space , so that each << homography >> corresponds to either a [[ complex bilinear form ]] or a real quadratic form .",2074,1 +2075,"We show that a set of different homographies can be embedded in different ways to a higher-dimensional real or complex space , so that each homography corresponds to either a [[ complex bilinear form ]] or a << real quadratic form >> .",2075,0 +2076,"We show that a set of different homographies can be embedded in different ways to a higher-dimensional real or complex space , so that each << homography >> corresponds to either a complex bilinear form or a [[ real quadratic form ]] .",2076,1 +2077,We give a << closed-form segmentation solution >> for each case by utilizing these properties based on [[ subspace-segmentation methods ]] .,2077,3 +2078,These theoretical results show that one can intrinsically segment a << piece-wise planar scene >> from [[ 2-D images ]] without explicitly performing any 3-D reconstruction .,2078,1 +2079,[[ Background maintenance ]] is a frequent element of << video surveillance systems >> .,2079,4 +2080,"We develop Wallflower , a [[ three-component system ]] for << background maintenance >> : the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background ; the region-level component fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2080,3 +2081,"We develop Wallflower , a << three-component system >> for background maintenance : the [[ pixel-level component ]] performs Wiener filtering to make probabilistic predictions of the expected background ; the region-level component fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2081,4 +2082,"We develop Wallflower , a three-component system for 
background maintenance : the [[ pixel-level component ]] performs Wiener filtering to make probabilistic predictions of the expected background ; the << region-level component >> fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2082,0 +2083,"We develop Wallflower , a three-component system for background maintenance : the << pixel-level component >> performs [[ Wiener filtering ]] to make probabilistic predictions of the expected background ; the region-level component fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2083,3 +2084,"We develop Wallflower , a three-component system for background maintenance : the pixel-level component performs [[ Wiener filtering ]] to make << probabilistic predictions of the expected background >> ; the region-level component fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2084,3 +2085,"We develop Wallflower , a << three-component system >> for background maintenance : the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background ; the [[ region-level component ]] fills in homogeneous regions of foreground objects ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2085,4 +2086,"We develop Wallflower , a three-component system for background maintenance : the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background ; the [[ region-level component ]] fills in << homogeneous regions of foreground objects >> ; and the frame-level component detects sudden , global changes in the image and swaps in better approximations of the background .",2086,3 +2087,"We develop Wallflower , a three-component system for background maintenance : the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background ; the [[ region-level component ]] fills in homogeneous regions of foreground objects ; and the << frame-level component >> detects sudden , global changes in the image and swaps in better approximations of the background .",2087,0 +2088,"We develop Wallflower , a << three-component system >> for background maintenance : the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background ; the region-level component fills in homogeneous regions of foreground objects ; and the [[ frame-level component ]] detects sudden , global changes in the image and swaps in better approximations of the background .",2088,4 +2089,We compare our [[ system ]] with 8 other << background subtraction algorithms >> .,2089,5 +2090,[[ Wallflower ]] is shown to outperform previous << algorithms >> by handling a greater set of the difficult situations that can occur .,2090,5 +2091,"Finally , we analyze the experimental results and propose [[ normative principles ]] for << background maintenance >> .",2091,3 +2092,"Is it possible to use [[ out-of-domain acoustic training data ]] to improve a << speech recognizer >> 's performance on a speciic , independent application ?",2092,3 +2093,"In our experiments , we use [[ Wallstreet 
Journal -LRB- WSJ -RRB- data ]] to train a << recognizer >> , which is adapted and evaluated in the Phonebook domain .",2093,3 +2094,"In our experiments , we use Wallstreet Journal -LRB- WSJ -RRB- data to train a << recognizer >> , which is adapted and evaluated in the [[ Phonebook domain ]] .",2094,6 +2095,"First , starting from the [[ WSJ-trained recognizer ]] , how much adaptation data -LRB- taken from the Phonebook training corpus -RRB- is necessary to achieve a reasonable << recognition >> performance in spite of the high degree of mismatch ?",2095,3 +2096,"First , starting from the << WSJ-trained recognizer >> , how much [[ adaptation data ]] -LRB- taken from the Phonebook training corpus -RRB- is necessary to achieve a reasonable recognition performance in spite of the high degree of mismatch ?",2096,3 +2097,"First , starting from the WSJ-trained recognizer , how much [[ adaptation data ]] -LRB- taken from the << Phonebook training corpus >> -RRB- is necessary to achieve a reasonable recognition performance in spite of the high degree of mismatch ?",2097,4 +2098,"Second , is it possible to improve the << recognition >> performance of a [[ Phonebook-trained baseline acoustic model ]] by using additional out-of-domain training data ?",2098,3 +2099,"Second , is it possible to improve the recognition performance of a << Phonebook-trained baseline acoustic model >> by using additional [[ out-of-domain training data ]] ?",2099,3 +2100,This paper proposes an [[ approach ]] to << full parsing >> suitable for Information Extraction from texts .,2100,3 +2101,This paper proposes an approach to [[ full parsing ]] suitable for << Information Extraction >> from texts .,2101,3 +2102,"[[ It ]] was implemented in the << IE module >> of FACILE , a EU project for multilingual text classification and IE .",2102,3 +2103,"It was implemented in the [[ IE module ]] of << FACILE , a EU project for multilingual text classification and IE >> .",2103,4 +2104,"It then presents an implemented << graphic interpretation system >> that takes into account a variety of [[ communicative signals ]] , and an evaluation study showing that evidence obtained from shallow processing of the graphic 's caption has a significant impact on the system 's success .",2104,3 +2105,"It then presents an implemented graphic interpretation system that takes into account a variety of communicative signals , and an evaluation study showing that evidence obtained from [[ shallow processing ]] of the graphic 's caption has a significant impact on the << system >> 's success .",2105,3 +2106,"It then presents an implemented graphic interpretation system that takes into account a variety of communicative signals , and an evaluation study showing that evidence obtained from << shallow processing >> of the [[ graphic 's caption ]] has a significant impact on the system 's success .",2106,3 +2107,[[ Graphical models ]] such as Bayesian Networks -LRB- BNs -RRB- are being increasingly applied to various << computer vision problems >> .,2107,3 +2108,<< Graphical models >> such as [[ Bayesian Networks -LRB- BNs -RRB- ]] are being increasingly applied to various computer vision problems .,2108,2 +2109,"One bottleneck in using BN is that learning the << BN model parameters >> often requires a large amount of reliable and [[ representative training data ]] , which proves to be difficult to acquire for many computer vision tasks .",2109,3 +2110,"One bottleneck in using BN is that learning the BN model parameters often requires a large amount of reliable 
and [[ representative training data ]] , which proves to be difficult to acquire for many << computer vision tasks >> .",2110,3 +2111,"On the other hand , there is often available [[ qualitative prior knowledge ]] about the << model >> .",2111,1 +2112,Such << knowledge >> comes either from [[ domain experts ]] based on their experience or from various physical or geometric constraints that govern the objects we try to model .,2112,3 +2113,Such knowledge comes either from [[ domain experts ]] based on their experience or from various << physical or geometric constraints >> that govern the objects we try to model .,2113,0 +2114,Such << knowledge >> comes either from domain experts based on their experience or from various [[ physical or geometric constraints ]] that govern the objects we try to model .,2114,3 +2115,"Unlike the [[ quantitative prior ]] , the << qualitative prior >> is often ignored due to the difficulty of incorporating them into the model learning process .",2115,5 +2116,"Unlike the quantitative prior , the qualitative prior is often ignored due to the difficulty of incorporating [[ them ]] into the << model learning process >> .",2116,4 +2117,"In this paper , we introduce a closed-form solution to systematically combine the [[ limited training data ]] with some generic << qualitative knowledge >> for BN parameter learning .",2117,0 +2118,"In this paper , we introduce a closed-form solution to systematically combine the [[ limited training data ]] with some generic qualitative knowledge for << BN parameter learning >> .",2118,3 +2119,"In this paper , we introduce a closed-form solution to systematically combine the limited training data with some generic [[ qualitative knowledge ]] for << BN parameter learning >> .",2119,3 +2120,"In this paper , we introduce a << closed-form solution >> to systematically combine the limited training data with some generic qualitative knowledge for [[ BN parameter learning ]] .",2120,3 +2121,"To validate our method , we compare [[ it ]] with the << Maximum Likelihood -LRB- ML -RRB- estimation method >> under sparse data and with the Expectation Maximization -LRB- EM -RRB- algorithm under incomplete data respectively .",2121,5 +2122,"To validate our method , we compare [[ it ]] with the Maximum Likelihood -LRB- ML -RRB- estimation method under sparse data and with the << Expectation Maximization -LRB- EM -RRB- algorithm >> under incomplete data respectively .",2122,5 +2123,"To validate our method , we compare << it >> with the Maximum Likelihood -LRB- ML -RRB- estimation method under [[ sparse data ]] and with the Expectation Maximization -LRB- EM -RRB- algorithm under incomplete data respectively .",2123,3 +2124,"To validate our method , we compare it with the << Maximum Likelihood -LRB- ML -RRB- estimation method >> under [[ sparse data ]] and with the Expectation Maximization -LRB- EM -RRB- algorithm under incomplete data respectively .",2124,3 +2125,"To validate our method , we compare << it >> with the Maximum Likelihood -LRB- ML -RRB- estimation method under sparse data and with the Expectation Maximization -LRB- EM -RRB- algorithm under [[ incomplete data ]] respectively .",2125,3 +2126,"To validate our method , we compare it with the Maximum Likelihood -LRB- ML -RRB- estimation method under sparse data and with the << Expectation Maximization -LRB- EM -RRB- algorithm >> under [[ incomplete data ]] respectively .",2126,3 +2127,"To further demonstrate its applications for << computer vision >> , we apply [[ it ]] to learn a BN model for 
facial Action Unit -LRB- AU -RRB- recognition from real image data .",2127,3 +2128,"To further demonstrate its applications for computer vision , we apply [[ it ]] to learn a << BN model >> for facial Action Unit -LRB- AU -RRB- recognition from real image data .",2128,3 +2129,"To further demonstrate its applications for computer vision , we apply it to learn a [[ BN model ]] for << facial Action Unit -LRB- AU -RRB- recognition >> from real image data .",2129,3 +2130,"To further demonstrate its applications for computer vision , we apply it to learn a BN model for << facial Action Unit -LRB- AU -RRB- recognition >> from [[ real image data ]] .",2130,3 +2131,"The experimental results show that with simple and [[ generic qualitative constraints ]] and using only a small amount of << training data >> , our method can robustly and accurately estimate the BN model parameters .",2131,0 +2132,"The experimental results show that with simple and [[ generic qualitative constraints ]] and using only a small amount of training data , our << method >> can robustly and accurately estimate the BN model parameters .",2132,3 +2133,"The experimental results show that with simple and generic qualitative constraints and using only a small amount of [[ training data ]] , our << method >> can robustly and accurately estimate the BN model parameters .",2133,3 +2134,"The experimental results show that with simple and generic qualitative constraints and using only a small amount of training data , our [[ method ]] can robustly and accurately estimate the << BN model parameters >> .",2134,3 +2135,"In this paper we introduce a [[ modal language LT ]] for imposing << constraints on trees >> , and an extension LT -LRB- LF -RRB- for imposing constraints on trees decorated with feature structures .",2135,3 +2136,"In this paper we introduce a modal language LT for imposing constraints on trees , and an [[ extension LT -LRB- LF -RRB- ]] for imposing << constraints on trees decorated with feature structures >> .",2136,3 +2137,"The motivation for introducing these [[ languages ]] is to provide tools for formalising << grammatical frameworks >> perspicuously , and the paper illustrates this by showing how the leading ideas of GPSG can be captured in LT -LRB- LF -RRB- .",2137,3 +2138,"The motivation for introducing these languages is to provide tools for formalising grammatical frameworks perspicuously , and the paper illustrates this by showing how the leading ideas of [[ GPSG ]] can be captured in << LT -LRB- LF -RRB- >> .",2138,3 +2139,Previous research has demonstrated the utility of [[ clustering ]] in << inducing semantic verb classes >> from undisambiguated corpus data .,2139,3 +2140,Previous research has demonstrated the utility of << clustering >> in inducing semantic verb classes from [[ undisambiguated corpus data ]] .,2140,3 +2141,We describe a new << approach >> which involves [[ clustering subcategorization frame -LRB- SCF -RRB- distributions ]] using the Information Bottleneck and nearest neighbour methods .,2141,4 +2142,We describe a new approach which involves << clustering subcategorization frame -LRB- SCF -RRB- distributions >> using the [[ Information Bottleneck and nearest neighbour methods ]] .,2142,3 +2143,"A novel [[ evaluation scheme ]] is proposed which accounts for the effect of << polysemy >> on the clusters , offering us a good insight into the potential and limitations of semantically classifying undisambiguated SCF data .",2143,3 +2144,"A novel [[ evaluation scheme ]] is proposed which accounts 
for the effect of polysemy on the clusters , offering us a good insight into the potential and limitations of << semantically classifying undisambiguated SCF data >> .",2144,6 +2145,"A novel evaluation scheme is proposed which accounts for the effect of [[ polysemy ]] on the << clusters >> , offering us a good insight into the potential and limitations of semantically classifying undisambiguated SCF data .",2145,1 +2146,"Due to the capacity of [[ pan-tilt-zoom -LRB- PTZ -RRB- cameras ]] to simultaneously cover a << panoramic area >> and maintain high resolution imagery , researches in automated surveillance systems with multiple PTZ cameras have become increasingly important .",2146,3 +2147,"Due to the capacity of [[ pan-tilt-zoom -LRB- PTZ -RRB- cameras ]] to simultaneously cover a panoramic area and maintain << high resolution imagery >> , researches in automated surveillance systems with multiple PTZ cameras have become increasingly important .",2147,3 +2148,"Due to the capacity of pan-tilt-zoom -LRB- PTZ -RRB- cameras to simultaneously cover a panoramic area and maintain high resolution imagery , researches in << automated surveillance systems >> with multiple [[ PTZ cameras ]] have become increasingly important .",2148,1 +2149,Most existing [[ algorithms ]] require the prior knowledge of intrinsic parameters of the PTZ camera to infer the << relative positioning >> and orientation among multiple PTZ cameras .,2149,3 +2150,Most existing [[ algorithms ]] require the prior knowledge of intrinsic parameters of the PTZ camera to infer the relative positioning and << orientation >> among multiple PTZ cameras .,2150,3 +2151,Most existing << algorithms >> require the [[ prior knowledge of intrinsic parameters of the PTZ camera ]] to infer the relative positioning and orientation among multiple PTZ cameras .,2151,3 +2152,Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the [[ relative positioning ]] and << orientation >> among multiple PTZ cameras .,2152,0 +2153,Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the [[ relative positioning ]] and orientation among multiple << PTZ cameras >> .,2153,1 +2154,Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the relative positioning and [[ orientation ]] among multiple << PTZ cameras >> .,2154,1 +2155,"To overcome this limitation , we propose a novel [[ mapping algorithm ]] that derives the << relative positioning >> and orientation between two PTZ cameras based on a unified polynomial model .",2155,3 +2156,"To overcome this limitation , we propose a novel [[ mapping algorithm ]] that derives the relative positioning and << orientation >> between two PTZ cameras based on a unified polynomial model .",2156,3 +2157,"To overcome this limitation , we propose a novel mapping algorithm that derives the [[ relative positioning ]] and << orientation >> between two PTZ cameras based on a unified polynomial model .",2157,0 +2158,"To overcome this limitation , we propose a novel mapping algorithm that derives the [[ relative positioning ]] and orientation between two << PTZ cameras >> based on a unified polynomial model .",2158,1 +2159,"To overcome this limitation , we propose a novel mapping algorithm that derives the relative positioning and [[ orientation ]] between two << PTZ cameras >> based on a unified polynomial model .",2159,1 +2160,"To overcome this limitation , we propose a novel << 
mapping algorithm >> that derives the relative positioning and orientation between two PTZ cameras based on a [[ unified polynomial model ]] .",2160,3 +2161,"Experimental results demonstrate that our proposed << algorithm >> presents substantially reduced [[ computational complexity ]] and improved flexibility at the cost of slightly decreased pixel accuracy , as compared with the work of Chen and Wang .",2161,6 +2162,"Experimental results demonstrate that our proposed << algorithm >> presents substantially reduced computational complexity and improved [[ flexibility ]] at the cost of slightly decreased pixel accuracy , as compared with the work of Chen and Wang .",2162,6 +2163,"Experimental results demonstrate that our proposed << algorithm >> presents substantially reduced computational complexity and improved flexibility at the cost of slightly decreased [[ pixel accuracy ]] , as compared with the work of Chen and Wang .",2163,6 +2164,This slightly decreased << pixel accuracy >> can be compensated by [[ consistent labeling approaches ]] without added cost for the application of automated surveillance systems along with changing configurations and a larger number of PTZ cameras .,2164,3 +2165,This paper presents a new [[ two-pass algorithm ]] for << Extra Large -LRB- more than 1M words -RRB- Vocabulary COntinuous Speech recognition >> based on the Information Retrieval -LRB- ELVIRCOS -RRB- .,2165,3 +2166,This paper presents a new << two-pass algorithm >> for Extra Large -LRB- more than 1M words -RRB- Vocabulary COntinuous Speech recognition based on the [[ Information Retrieval -LRB- ELVIRCOS -RRB- ]] .,2166,3 +2167,The principle of this approach is to decompose a recognition process into two << passes >> where the [[ first pass ]] builds the words subset for the second pass recognition by using information retrieval procedure .,2167,2 +2168,The principle of this approach is to decompose a recognition process into two << passes >> where the first pass builds the words subset for the [[ second pass recognition ]] by using information retrieval procedure .,2168,2 +2169,The principle of this approach is to decompose a recognition process into two passes where the first pass builds the words subset for the << second pass recognition >> by using [[ information retrieval procedure ]] .,2169,3 +2170,[[ Word graph composition ]] for << continuous speech >> is presented .,2170,3 +2171,With this [[ approach ]] a high performances for << large vocabulary speech recognition >> can be obtained .,2171,3 +2172,"First , images are partitioned into regions using << one-class classification >> and [[ patch-based clustering algorithms ]] where one-class classifiers model the regions with relatively uniform color and texture properties , and clustering of patches aims to detect structures in the remaining regions .",2172,0 +2173,"First , images are partitioned into regions using one-class classification and patch-based clustering algorithms where << one-class classifiers >> model the regions with relatively [[ uniform color and texture properties ]] , and clustering of patches aims to detect structures in the remaining regions .",2173,3 +2174,"Next , the resulting regions are clustered to obtain a codebook of region types , and two [[ models ]] are constructed for << scene representation >> : a '' bag of individual regions '' representation where each region is regarded separately , and a '' bag of region pairs '' representation where regions with particular spatial relationships are considered together 
.",2174,3 +2175,"Given these representations , << scene classification >> is done using [[ Bayesian classifiers ]] .",2175,3 +2176,Experiments on the [[ LabelMe data set ]] showed that the proposed << models >> significantly out-perform a baseline global feature-based approach .,2176,6 +2177,Experiments on the [[ LabelMe data set ]] showed that the proposed models significantly out-perform a << baseline global feature-based approach >> .,2177,6 +2178,Experiments on the LabelMe data set showed that the proposed [[ models ]] significantly out-perform a << baseline global feature-based approach >> .,2178,5 +2179,"The [[ model ]] is designed for use in << error correction >> , with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks .",2179,3 +2180,"The [[ model ]] is designed for use in error correction , with a focus on << post-processing >> the output of black-box OCR systems in order to make it more useful for NLP tasks .",2180,3 +2181,"The model is designed for use in << error correction >> , with a focus on [[ post-processing ]] the output of black-box OCR systems in order to make it more useful for NLP tasks .",2181,4 +2182,"The model is designed for use in error correction , with a focus on post-processing the output of black-box OCR systems in order to make [[ it ]] more useful for << NLP tasks >> .",2182,3 +2183,"We present an implementation of the << model >> based on [[ finite-state models ]] , demonstrate the model 's ability to significantly reduce character and word error rate , and provide evaluation results involving automatic extraction of translation lexicons from printed text .",2183,3 +2184,"We present an implementation of the model based on finite-state models , demonstrate the << model >> 's ability to significantly reduce [[ character and word error rate ]] , and provide evaluation results involving automatic extraction of translation lexicons from printed text .",2184,6 +2185,"We present an implementation of the model based on finite-state models , demonstrate the << model >> 's ability to significantly reduce character and word error rate , and provide evaluation results involving [[ automatic extraction of translation lexicons ]] from printed text .",2185,6 +2186,"We present an implementation of the model based on finite-state models , demonstrate the model 's ability to significantly reduce character and word error rate , and provide evaluation results involving << automatic extraction of translation lexicons >> from [[ printed text ]] .",2186,3 +2187,We present a [[ framework ]] for << word alignment >> based on log-linear models .,2187,3 +2188,We present a << framework >> for word alignment based on [[ log-linear models ]] .,2188,3 +2189,"All [[ knowledge sources ]] are treated as << feature functions >> , which depend on the source langauge sentence , the target language sentence and possible additional variables .",2189,3 +2190,[[ Log-linear models ]] allow << statistical alignment models >> to be easily extended by incorporating syntactic information .,2190,3 +2191,<< Log-linear models >> allow statistical alignment models to be easily extended by incorporating [[ syntactic information ]] .,2191,3 +2192,"In this paper , we use [[ IBM Model 3 alignment probabilities ]] , << POS correspondence >> , and bilingual dictionary coverage as features .",2192,0 +2193,"In this paper , we use [[ IBM Model 3 alignment probabilities ]] , POS correspondence , and bilingual dictionary coverage as << features >> .",2193,3 
+2194,"In this paper , we use IBM Model 3 alignment probabilities , [[ POS correspondence ]] , and << bilingual dictionary coverage >> as features .",2194,0 +2195,"In this paper , we use IBM Model 3 alignment probabilities , [[ POS correspondence ]] , and bilingual dictionary coverage as << features >> .",2195,3 +2196,"In this paper , we use IBM Model 3 alignment probabilities , POS correspondence , and [[ bilingual dictionary coverage ]] as << features >> .",2196,3 +2197,Our experiments show that [[ log-linear models ]] significantly outperform << IBM translation models >> .,2197,5 +2198,"[[ Hough voting ]] in a geometric transformation space allows us to realize << spatial verification >> , but remains sensitive to feature detection errors because of the inflexible quan-tization of single feature correspondences .",2198,3 +2199,"<< Hough voting >> in a [[ geometric transformation space ]] allows us to realize spatial verification , but remains sensitive to feature detection errors because of the inflexible quan-tization of single feature correspondences .",2199,1 +2200,"To handle this problem , we propose a new [[ method ]] , called adaptive dither voting , for << robust spatial verification >> .",2200,3 +2201,"For each correspondence , instead of hard-mapping it to a single transformation , the << method >> augments its description by using [[ multiple dithered transformations ]] that are deterministically generated by the other correspondences .",2201,3 +2202,We also propose exploiting the [[ non-uniformity ]] of a << Hough histogram >> as the spatial similarity to handle multiple matching surfaces .,2202,1 +2203,We also propose exploiting the [[ non-uniformity ]] of a Hough histogram as the spatial similarity to handle << multiple matching surfaces >> .,2203,3 +2204,"The [[ method ]] outperforms its state-of-the-art counterparts in both accuracy and scalability , especially when it comes to the << retrieval of small , rotated objects >> .",2204,3 +2205,"The << method >> outperforms its state-of-the-art [[ counterparts ]] in both accuracy and scalability , especially when it comes to the retrieval of small , rotated objects .",2205,5 +2206,"The << method >> outperforms its state-of-the-art counterparts in both [[ accuracy ]] and scalability , especially when it comes to the retrieval of small , rotated objects .",2206,6 +2207,"The method outperforms its state-of-the-art << counterparts >> in both [[ accuracy ]] and scalability , especially when it comes to the retrieval of small , rotated objects .",2207,6 +2208,"The << method >> outperforms its state-of-the-art counterparts in both accuracy and [[ scalability ]] , especially when it comes to the retrieval of small , rotated objects .",2208,6 +2209,"The method outperforms its state-of-the-art << counterparts >> in both accuracy and [[ scalability ]] , especially when it comes to the retrieval of small , rotated objects .",2209,6 +2210,We propose a novel technique called [[ bispectral photo-metric stereo ]] that makes effective use of fluorescence for << shape reconstruction >> .,2210,3 +2211,We propose a novel technique called << bispectral photo-metric stereo >> that makes effective use of [[ fluorescence ]] for shape reconstruction .,2211,3 +2212,"Due to the [[ complexity ]] of its << emission process >> , fluo-rescence tends to be excluded from most algorithms in computer vision and image processing .",2212,6 +2213,"Due to the complexity of its emission process , fluo-rescence tends to be excluded from most [[ algorithms ]] in << 
computer vision >> and image processing .",2213,3 +2214,"Due to the complexity of its emission process , fluo-rescence tends to be excluded from most [[ algorithms ]] in computer vision and << image processing >> .",2214,3 +2215,"Due to the complexity of its emission process , fluo-rescence tends to be excluded from most algorithms in [[ computer vision ]] and << image processing >> .",2215,0 +2216,"Moreover , [[ fluorescence 's wavelength-shifting property ]] enables us to estimate the << shape >> of an object by applying photomet-ric stereo to emission-only images without suffering from specular reflection .",2216,3 +2217,"Moreover , fluorescence 's wavelength-shifting property enables us to estimate the << shape >> of an object by applying [[ photomet-ric stereo ]] to emission-only images without suffering from specular reflection .",2217,3 +2218,"Moreover , fluorescence 's wavelength-shifting property enables us to estimate the shape of an object by applying << photomet-ric stereo >> to [[ emission-only images ]] without suffering from specular reflection .",2218,3 +2219,This is the significant advantage of the << fluorescence-based method >> over previous [[ methods ]] based on reflection .,2219,5 +2220,"In this paper , we present an [[ approach ]] for learning a << visual representation >> from the raw spatiotemporal signals in videos .",2220,3 +2221,"In this paper , we present an approach for learning a << visual representation >> from the [[ raw spatiotemporal signals in videos ]] .",2221,3 +2222,"We formulate our << method >> as an [[ unsupervised sequential verification task ]] , i.e. , we determine whether a sequence of frames from a video is in the correct temporal order .",2222,3 +2223,"With this simple [[ task ]] and no semantic labels , we learn a powerful << visual representation >> using a Convolutional Neural Network -LRB- CNN -RRB- .",2223,3 +2224,"With this simple task and no semantic labels , we learn a powerful << visual representation >> using a [[ Convolutional Neural Network -LRB- CNN -RRB- ]] .",2224,3 +2225,The << representation >> contains [[ complementary information ]] to that learned from supervised image datasets like ImageNet .,2225,4 +2226,The representation contains << complementary information >> to that learned from [[ supervised image datasets ]] like ImageNet .,2226,3 +2227,The representation contains complementary information to that learned from << supervised image datasets >> like [[ ImageNet ]] .,2227,2 +2228,"Qualitative results show that our [[ method ]] captures information that is temporally varying , such as << human pose >> .",2228,3 +2229,"When used as [[ pre-training ]] for << action recognition >> , our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51 .",2229,3 +2230,"When used as << pre-training >> for action recognition , [[ our method ]] gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51 .",2230,3 +2231,"When used as pre-training for action recognition , [[ our method ]] gives significant gains over << learning without external data >> on benchmark datasets like UCF101 and HMDB51 .",2231,5 +2232,"When used as pre-training for action recognition , << our method >> gives significant gains over learning without external data on [[ benchmark datasets ]] like UCF101 and HMDB51 .",2232,6 +2233,"When used as pre-training for action recognition , our method gives significant gains over << learning without external data >> on [[ 
benchmark datasets ]] like UCF101 and HMDB51 .",2233,6 +2234,"When used as pre-training for action recognition , our method gives significant gains over learning without external data on << benchmark datasets >> like [[ UCF101 ]] and HMDB51 .",2234,2 +2235,"When used as pre-training for action recognition , our method gives significant gains over learning without external data on benchmark datasets like [[ UCF101 ]] and << HMDB51 >> .",2235,0 +2236,"When used as pre-training for action recognition , our method gives significant gains over learning without external data on << benchmark datasets >> like UCF101 and [[ HMDB51 ]] .",2236,2 +2237,"To demonstrate its sensitivity to human pose , we show results for << pose estimation >> on the [[ FLIC and MPII datasets ]] that are competitive , or better than approaches using significantly more supervision .",2237,6 +2238,"To demonstrate its sensitivity to human pose , we show results for pose estimation on the FLIC and MPII datasets that are competitive , or better than << approaches >> using significantly more [[ supervision ]] .",2238,3 +2239,<< Our method >> can be combined with [[ supervised representations ]] to provide an additional boost in accuracy .,2239,0 +2240,<< Our method >> can be combined with supervised representations to provide an additional boost in [[ accuracy ]] .,2240,6 +2241,"`` To explain complex phenomena , an [[ explanation system ]] must be able to select information from a formal representation of domain knowledge , organize the selected information into multisentential discourse plans , and realize the << discourse plans >> in text .",2241,3 +2242,"This paper reports on a seven-year effort to empirically study << explanation generation >> from [[ semantically rich , large-scale knowledge bases ]] .",2242,3 +2243,"In particular , it describes a [[ robust explanation system ]] that constructs << multisentential and multi-paragraph explanations >> from the a large-scale knowledge base in the domain of botanical anatomy , physiology , and development .",2243,3 +2244,"In particular , it describes a << robust explanation system >> that constructs multisentential and multi-paragraph explanations from the a [[ large-scale knowledge base ]] in the domain of botanical anatomy , physiology , and development .",2244,3 +2245,"In particular , it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a << large-scale knowledge base >> in the domain of [[ botanical anatomy ]] , physiology , and development .",2245,1 +2246,"In particular , it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a large-scale knowledge base in the domain of [[ botanical anatomy ]] , << physiology >> , and development .",2246,0 +2247,"In particular , it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a << large-scale knowledge base >> in the domain of botanical anatomy , [[ physiology ]] , and development .",2247,1 +2248,"In particular , it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a large-scale knowledge base in the domain of botanical anatomy , [[ physiology ]] , and << development >> .",2248,0 +2249,"In particular , it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a << large-scale knowledge base >> in the domain of botanical anatomy , 
physiology , and [[ development ]] .",2249,1 +2250,We introduce the evaluation methodology and describe how performance was assessed with this [[ methodology ]] in the most extensive empirical evaluation conducted on an << explanation system >> .,2250,6 +2251,We present an << operable definition >> of focus which is argued to be of a [[ cognito-pragmatic nature ]] and explore how it is determined in discourse in a formalized manner .,2251,1 +2252,"For this purpose , a file card model of discourse model and knowledge store is introduced enabling the decomposition and formal representation of its determination process as a << programmable algorithm >> -LRB- [[ FDA ]] -RRB- .",2252,2 +2253,"Interdisciplinary evidence from social and cognitive psychology is cited and the prospect of the integration of focus via [[ FDA ]] as a << discourse-level construct >> into speech synthesis systems , in particular , concept-to-speech systems , is also briefly discussed .",2253,3 +2254,"Interdisciplinary evidence from social and cognitive psychology is cited and the prospect of the integration of focus via FDA as a [[ discourse-level construct ]] into << speech synthesis systems >> , in particular , concept-to-speech systems , is also briefly discussed .",2254,4 +2255,"Interdisciplinary evidence from social and cognitive psychology is cited and the prospect of the integration of focus via FDA as a discourse-level construct into << speech synthesis systems >> , in particular , [[ concept-to-speech systems ]] , is also briefly discussed .",2255,2 +2256,"[[ Inference ]] in these << models >> involves solving a combinatorial optimization problem , with methods such as graph cuts , belief propagation .",2256,3 +2257,"<< Inference >> in these models involves solving a [[ combinatorial optimization problem ]] , with methods such as graph cuts , belief propagation .",2257,4 +2258,"Inference in these models involves solving a << combinatorial optimization problem >> , with [[ methods ]] such as graph cuts , belief propagation .",2258,3 +2259,"Inference in these models involves solving a combinatorial optimization problem , with << methods >> such as [[ graph cuts ]] , belief propagation .",2259,3 +2260,"Inference in these models involves solving a combinatorial optimization problem , with methods such as [[ graph cuts ]] , << belief propagation >> .",2260,0 +2261,"Inference in these models involves solving a combinatorial optimization problem , with << methods >> such as graph cuts , [[ belief propagation ]] .",2261,3 +2262,"To overcome this , state-of-the-art [[ structured learning methods ]] frame the << problem >> as one of large margin estimation .",2262,3 +2263,"To overcome this , state-of-the-art structured learning methods frame the << problem >> as one of [[ large margin estimation ]] .",2263,3 +2264,[[ Iterative solutions ]] have been proposed to solve the resulting << convex optimization problem >> .,2264,3 +2265,"We show how the resulting << optimization problem >> can be reduced to an equivalent [[ convex problem ]] with a small number of constraints , and solve it using an efficient scheme .",2265,3 +2266,[[ Interpreting metaphors ]] is an integral and inescapable process in << human understanding of natural language >> .,2266,2 +2267,This paper discusses a [[ method ]] of << analyzing metaphors >> based on the existence of a small number of generalized metaphor mappings .,2267,3 +2268,This paper discusses a method of << analyzing metaphors >> based on the existence of a small number of [[ generalized 
metaphor mappings ]] .,2268,3 +2269,"Each << generalized metaphor >> contains a [[ recognition network ]] , a basic mapping , additional transfer mappings , and an implicit intention component .",2269,4 +2270,"Each generalized metaphor contains a [[ recognition network ]] , a << basic mapping >> , additional transfer mappings , and an implicit intention component .",2270,0 +2271,"Each << generalized metaphor >> contains a recognition network , a [[ basic mapping ]] , additional transfer mappings , and an implicit intention component .",2271,4 +2272,"Each << generalized metaphor >> contains a recognition network , a basic mapping , additional [[ transfer mappings ]] , and an implicit intention component .",2272,4 +2273,"Each generalized metaphor contains a recognition network , a << basic mapping >> , additional [[ transfer mappings ]] , and an implicit intention component .",2273,0 +2274,"Each generalized metaphor contains a recognition network , a basic mapping , additional [[ transfer mappings ]] , and an << implicit intention component >> .",2274,0 +2275,"Each << generalized metaphor >> contains a recognition network , a basic mapping , additional transfer mappings , and an [[ implicit intention component ]] .",2275,4 +2276,It is argued that the [[ method ]] reduces << metaphor interpretation >> from a reconstruction to a recognition task .,2276,3 +2277,It is argued that the method reduces << metaphor interpretation >> from a reconstruction to a [[ recognition task ]] .,2277,3 +2278,"This study presents a << method to automatically acquire paraphrases >> using [[ bilingual corpora ]] , which utilizes the bilingual dependency relations obtained by projecting a monolingual dependency parse onto the other language sentence based on statistical alignment techniques .",2278,3 +2279,"This study presents a << method to automatically acquire paraphrases >> using bilingual corpora , which utilizes the [[ bilingual dependency relations ]] obtained by projecting a monolingual dependency parse onto the other language sentence based on statistical alignment techniques .",2279,3 +2280,"This study presents a method to automatically acquire paraphrases using bilingual corpora , which utilizes the << bilingual dependency relations >> obtained by projecting a [[ monolingual dependency parse ]] onto the other language sentence based on statistical alignment techniques .",2280,3 +2281,"This study presents a method to automatically acquire paraphrases using bilingual corpora , which utilizes the << bilingual dependency relations >> obtained by projecting a monolingual dependency parse onto the other language sentence based on [[ statistical alignment techniques ]] .",2281,3 +2282,"Since the << paraphrasing method >> is capable of clearly disambiguating the sense of an original phrase using the [[ bilingual context of dependency relation ]] , it would be possible to obtain interchangeable paraphrases under a given context .",2282,3 +2283,"Also , we provide an advanced [[ method ]] to acquire << generalized translation knowledge >> using the extracted paraphrases .",2283,3 +2284,"Also , we provide an advanced << method >> to acquire generalized translation knowledge using the extracted [[ paraphrases ]] .",2284,3 +2285,We applied the [[ method ]] to acquire the << generalized translation knowledge >> for Korean-English translation .,2285,3 +2286,We applied the method to acquire the [[ generalized translation knowledge ]] for << Korean-English translation >> .,2286,3 +2287,"Through experiments with parallel 
corpora of a Korean and English language pairs , we show that our [[ paraphrasing method ]] effectively extracts << paraphrases >> with high precision , 94.3 % and 84.6 % respectively for Korean and English , and the translation knowledge extracted from the bilingual corpora could be generalized successfully using the paraphrases with the 12.5 % compression ratio .",2287,3 +2288,"Through experiments with parallel corpora of a Korean and English language pairs , we show that our << paraphrasing method >> effectively extracts paraphrases with high [[ precision ]] , 94.3 % and 84.6 % respectively for Korean and English , and the translation knowledge extracted from the bilingual corpora could be generalized successfully using the paraphrases with the 12.5 % compression ratio .",2288,6 +2289,"Through experiments with parallel corpora of a Korean and English language pairs , we show that our paraphrasing method effectively extracts paraphrases with high precision , 94.3 % and 84.6 % respectively for [[ Korean ]] and << English >> , and the translation knowledge extracted from the bilingual corpora could be generalized successfully using the paraphrases with the 12.5 % compression ratio .",2289,0 +2290,"Through experiments with parallel corpora of a Korean and English language pairs , we show that our paraphrasing method effectively extracts paraphrases with high precision , 94.3 % and 84.6 % respectively for Korean and English , and the << translation knowledge >> extracted from the [[ bilingual corpora ]] could be generalized successfully using the paraphrases with the 12.5 % compression ratio .",2290,3 +2291,"Through experiments with parallel corpora of a Korean and English language pairs , we show that our paraphrasing method effectively extracts paraphrases with high precision , 94.3 % and 84.6 % respectively for Korean and English , and the << translation knowledge >> extracted from the bilingual corpora could be generalized successfully using the [[ paraphrases ]] with the 12.5 % compression ratio .",2291,3 +2292,"Through experiments with parallel corpora of a Korean and English language pairs , we show that our paraphrasing method effectively extracts paraphrases with high precision , 94.3 % and 84.6 % respectively for Korean and English , and the << translation knowledge >> extracted from the bilingual corpora could be generalized successfully using the paraphrases with the 12.5 % [[ compression ratio ]] .",2292,6 +2293,"We provide a << logical definition of Minimalist grammars >> , that are [[ Stabler 's formalization of Chomsky 's minimalist program ]] .",2293,2 +2294,"Our [[ logical definition ]] leads to a neat relation to categorial grammar , -LRB- yielding a treatment of << Montague semantics >> -RRB- , a parsing-as-deduction in a resource sensitive logic , and a learning algorithm from structured data -LRB- based on a typing-algorithm and type-unification -RRB- .",2294,3 +2295,"Our logical definition leads to a neat relation to categorial grammar , -LRB- yielding a treatment of Montague semantics -RRB- , a [[ parsing-as-deduction ]] in a << resource sensitive logic >> , and a learning algorithm from structured data -LRB- based on a typing-algorithm and type-unification -RRB- .",2295,3 +2296,"Our logical definition leads to a neat relation to categorial grammar , -LRB- yielding a treatment of Montague semantics -RRB- , a parsing-as-deduction in a resource sensitive logic , and a << learning algorithm >> from [[ structured data ]] -LRB- based on a typing-algorithm and 
type-unification -RRB- .",2296,3 +2297,"Our logical definition leads to a neat relation to categorial grammar , -LRB- yielding a treatment of Montague semantics -RRB- , a parsing-as-deduction in a resource sensitive logic , and a << learning algorithm >> from structured data -LRB- based on a [[ typing-algorithm ]] and type-unification -RRB- .",2297,3 +2298,"Our logical definition leads to a neat relation to categorial grammar , -LRB- yielding a treatment of Montague semantics -RRB- , a parsing-as-deduction in a resource sensitive logic , and a learning algorithm from structured data -LRB- based on a [[ typing-algorithm ]] and << type-unification >> -RRB- .",2298,0 +2299,"Our logical definition leads to a neat relation to categorial grammar , -LRB- yielding a treatment of Montague semantics -RRB- , a parsing-as-deduction in a resource sensitive logic , and a << learning algorithm >> from structured data -LRB- based on a typing-algorithm and [[ type-unification ]] -RRB- .",2299,3 +2300,"There are several [[ approaches ]] that model << information extraction >> as a token classification task , using various tagging strategies to combine multiple tokens .",2300,3 +2301,"There are several approaches that model [[ information extraction ]] as a << token classification task >> , using various tagging strategies to combine multiple tokens .",2301,2 +2302,"There are several approaches that model information extraction as a << token classification task >> , using various [[ tagging strategies ]] to combine multiple tokens .",2302,3 +2303,"We also introduce a new strategy , called Begin/After tagging or BIA , and show that [[ it ]] is competitive to the best other << strategies >> .",2303,5 +2304,"The objective is a generic [[ system ]] of tools , including a core English lexicon , grammar , and concept representations , for building << natural language processing -LRB- NLP -RRB- systems >> for text understanding .",2304,3 +2305,"The objective is a generic << system >> of tools , including a [[ core English lexicon ]] , grammar , and concept representations , for building natural language processing -LRB- NLP -RRB- systems for text understanding .",2305,4 +2306,"The objective is a generic << system >> of tools , including a core English lexicon , [[ grammar ]] , and concept representations , for building natural language processing -LRB- NLP -RRB- systems for text understanding .",2306,4 +2307,"The objective is a generic << system >> of tools , including a core English lexicon , grammar , and [[ concept representations ]] , for building natural language processing -LRB- NLP -RRB- systems for text understanding .",2307,4 +2308,"The objective is a generic system of tools , including a core English lexicon , grammar , and concept representations , for building [[ natural language processing -LRB- NLP -RRB- systems ]] for << text understanding >> .",2308,3 +2309,Systems built with [[ PAKTUS ]] are intended to generate input to << knowledge based systems >> ordata base systems .,2309,3 +2310,"Input to the << NLP system >> is typically derived from an existing [[ electronic message stream ]] , such as a news wire .",2310,3 +2311,"Input to the NLP system is typically derived from an existing << electronic message stream >> , such as a [[ news wire ]] .",2311,2 +2312,"[[ PAKTUS ]] supports the adaptation of the generic core to a variety of domains : << JINTACCS messages >> , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. 
, by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2312,3 +2313,"[[ PAKTUS ]] supports the adaptation of the generic core to a variety of domains : JINTACCS messages , << RAINFORM messages >> , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2313,3 +2314,"[[ PAKTUS ]] supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , << news reports >> about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2314,3 +2315,"PAKTUS supports the adaptation of the generic core to a variety of domains : [[ JINTACCS messages ]] , << RAINFORM messages >> , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2315,0 +2316,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , [[ RAINFORM messages ]] , << news reports >> about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2316,0 +2317,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , << news reports >> about a specific type of [[ event ]] , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2317,1 +2318,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of << event >> , such as [[ financial transfers ]] or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2318,2 +2319,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as [[ financial transfers ]] or << terrorist acts >> , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2319,0 +2320,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of << event >> , such as financial transfers or [[ terrorist acts ]] , etc. , by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and discourse patterns .",2320,2 +2321,"<< PAKTUS >> supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring [[ sublanguage and domain-specific grammar ]] , words , conceptual mappings , and discourse patterns .",2321,3 +2322,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. 
, by acquiring [[ sublanguage and domain-specific grammar ]] , << words >> , conceptual mappings , and discourse patterns .",2322,0 +2323,"<< PAKTUS >> supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , [[ words ]] , conceptual mappings , and discourse patterns .",2323,3 +2324,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , [[ words ]] , << conceptual mappings >> , and discourse patterns .",2324,0 +2325,"<< PAKTUS >> supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , [[ conceptual mappings ]] , and discourse patterns .",2325,3 +2326,"PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. , by acquiring sublanguage and domain-specific grammar , words , [[ conceptual mappings ]] , and << discourse patterns >> .",2326,0 +2327,"<< PAKTUS >> supports the adaptation of the generic core to a variety of domains : JINTACCS messages , RAINFORM messages , news reports about a specific type of event , such as financial transfers or terrorist acts , etc. 
, by acquiring sublanguage and domain-specific grammar , words , conceptual mappings , and [[ discourse patterns ]] .",2327,3 +2328,"In this paper the << LIMSI recognizer >> which was evaluated in the [[ ARPA NOV93 CSR test ]] is described , and experimental results on the WSJ and BREF corpora under closely matched conditions are reported .",2328,6 +2329,"In this paper the << LIMSI recognizer >> which was evaluated in the ARPA NOV93 CSR test is described , and experimental results on the [[ WSJ and BREF corpora ]] under closely matched conditions are reported .",2329,6 +2330,For both [[ corpora ]] << word recognition >> experiments were carried out with vocabularies containing up to 20k words .,2330,6 +2331,The << recognizer >> makes use of [[ continuous density HMM ]] with Gaussian mixture for acoustic modeling and n-gram statistics estimated on the newspaper texts for language modeling .,2331,3 +2332,The recognizer makes use of [[ continuous density HMM ]] with << Gaussian mixture >> for acoustic modeling and n-gram statistics estimated on the newspaper texts for language modeling .,2332,0 +2333,The recognizer makes use of [[ continuous density HMM ]] with Gaussian mixture for << acoustic modeling >> and n-gram statistics estimated on the newspaper texts for language modeling .,2333,3 +2334,The recognizer makes use of [[ continuous density HMM ]] with Gaussian mixture for acoustic modeling and << n-gram statistics >> estimated on the newspaper texts for language modeling .,2334,0 +2335,The recognizer makes use of continuous density HMM with [[ Gaussian mixture ]] for << acoustic modeling >> and n-gram statistics estimated on the newspaper texts for language modeling .,2335,3 +2336,The << recognizer >> makes use of continuous density HMM with Gaussian mixture for acoustic modeling and [[ n-gram statistics ]] estimated on the newspaper texts for language modeling .,2336,3 +2337,The recognizer makes use of continuous density HMM with Gaussian mixture for acoustic modeling and [[ n-gram statistics ]] estimated on the newspaper texts for << language modeling >> .,2337,3 +2338,The recognizer makes use of continuous density HMM with Gaussian mixture for acoustic modeling and << n-gram statistics >> estimated on the [[ newspaper texts ]] for language modeling .,2338,6 +2339,The << recognizer >> uses a [[ time-synchronous graph-search strategy ]] which is shown to still be viable with a 20k-word vocabulary when used with bigram back-off language models .,2339,3 +2340,The << recognizer >> uses a time-synchronous graph-search strategy which is shown to still be viable with a 20k-word vocabulary when used with [[ bigram back-off language models ]] .,2340,3 +2341,The recognizer uses a << time-synchronous graph-search strategy >> which is shown to still be viable with a 20k-word vocabulary when used with [[ bigram back-off language models ]] .,2341,0 +2342,"A second forward pass , which makes use of a << word graph >> generated with the [[ bigram ]] , incorporates a trigram language model .",2342,3 +2343,"A second forward pass , which makes use of a << word graph >> generated with the bigram , incorporates a [[ trigram language model ]] .",2343,0 +2344,"<< Acoustic modeling >> uses [[ cepstrum-based features ]] , context-dependent phone models -LRB- intra and interword -RRB- , phone duration models , and sex-dependent models .",2344,3 +2345,"Acoustic modeling uses [[ cepstrum-based features ]] , << context-dependent phone models -LRB- intra and interword -RRB- >> , phone duration models , and 
sex-dependent models .",2345,0 +2346,"<< Acoustic modeling >> uses cepstrum-based features , [[ context-dependent phone models -LRB- intra and interword -RRB- ]] , phone duration models , and sex-dependent models .",2346,3 +2347,"Acoustic modeling uses cepstrum-based features , [[ context-dependent phone models -LRB- intra and interword -RRB- ]] , << phone duration models >> , and sex-dependent models .",2347,0 +2348,"<< Acoustic modeling >> uses cepstrum-based features , context-dependent phone models -LRB- intra and interword -RRB- , [[ phone duration models ]] , and sex-dependent models .",2348,3 +2349,"Acoustic modeling uses cepstrum-based features , context-dependent phone models -LRB- intra and interword -RRB- , [[ phone duration models ]] , and << sex-dependent models >> .",2349,0 +2350,"<< Acoustic modeling >> uses cepstrum-based features , context-dependent phone models -LRB- intra and interword -RRB- , phone duration models , and [[ sex-dependent models ]] .",2350,3 +2351,"The [[ co-occurrence pattern ]] , a combination of binary or local features , is more discriminative than individual features and has shown its advantages in << object , scene , and action recognition >> .",2351,3 +2352,"The << co-occurrence pattern >> , a combination of [[ binary or local features ]] , is more discriminative than individual features and has shown its advantages in object , scene , and action recognition .",2352,4 +2353,"Then we propose a novel [[ data mining method ]] to efficiently discover the << optimal co-occurrence pattern >> with minimum empirical error , despite the noisy training dataset .",2353,3 +2354,"Then we propose a novel data mining method to efficiently discover the << optimal co-occurrence pattern >> with [[ minimum empirical error ]] , despite the noisy training dataset .",2354,1 +2355,"Then we propose a novel << data mining method >> to efficiently discover the optimal co-occurrence pattern with minimum empirical error , despite the [[ noisy training dataset ]] .",2355,3 +2356,"This [[ mining procedure ]] of << AND and OR patterns >> is readily integrated to boosting , which improves the generalization ability over the conventional boosting decision trees and boosting decision stumps .",2356,3 +2357,"This mining procedure of [[ AND and OR patterns ]] is readily integrated to << boosting >> , which improves the generalization ability over the conventional boosting decision trees and boosting decision stumps .",2357,4 +2358,"This mining procedure of AND and OR patterns is readily integrated to [[ boosting ]] , which improves the generalization ability over the conventional << boosting decision trees >> and boosting decision stumps .",2358,5 +2359,"This mining procedure of AND and OR patterns is readily integrated to [[ boosting ]] , which improves the generalization ability over the conventional boosting decision trees and << boosting decision stumps >> .",2359,5 +2360,"This mining procedure of AND and OR patterns is readily integrated to << boosting >> , which improves the [[ generalization ability ]] over the conventional boosting decision trees and boosting decision stumps .",2360,6 +2361,"This mining procedure of AND and OR patterns is readily integrated to boosting , which improves the [[ generalization ability ]] over the conventional << boosting decision trees >> and boosting decision stumps .",2361,6 +2362,"This mining procedure of AND and OR patterns is readily integrated to boosting , which improves the [[ generalization ability ]] over the conventional boosting 
decision trees and << boosting decision stumps >> .",2362,6 +2363,"This mining procedure of AND and OR patterns is readily integrated to boosting , which improves the generalization ability over the conventional [[ boosting decision trees ]] and << boosting decision stumps >> .",2363,0 +2364,"Our versatile experiments on [[ object , scene , and action cat-egorization ]] validate the advantages of the discovered << dis-criminative co-occurrence patterns >> .",2364,6 +2365,"Empirical experience and observations have shown us when powerful and highly tunable [[ classifiers ]] such as maximum entropy classifiers , boosting and SVMs are applied to << language processing tasks >> , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2365,3 +2366,"Empirical experience and observations have shown us when powerful and highly tunable << classifiers >> such as [[ maximum entropy classifiers ]] , boosting and SVMs are applied to language processing tasks , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2366,2 +2367,"Empirical experience and observations have shown us when powerful and highly tunable classifiers such as [[ maximum entropy classifiers ]] , << boosting >> and SVMs are applied to language processing tasks , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2367,0 +2368,"Empirical experience and observations have shown us when powerful and highly tunable << classifiers >> such as maximum entropy classifiers , [[ boosting ]] and SVMs are applied to language processing tasks , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2368,2 +2369,"Empirical experience and observations have shown us when powerful and highly tunable classifiers such as maximum entropy classifiers , [[ boosting ]] and << SVMs >> are applied to language processing tasks , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2369,0 +2370,"Empirical experience and observations have shown us when powerful and highly tunable << classifiers >> such as maximum entropy classifiers , boosting and [[ SVMs ]] are applied to language processing tasks , it is possible to achieve high accuracies , but eventually their performances all tend to plateau out at around the same point .",2370,2 +2371,"In recent work , we introduced [[ N-fold Templated Piped Correction , or NTPC -LRB- `` nitpick '' -RRB- ]] , an intriguing << error corrector >> that is designed to work in these extreme operating conditions .",2371,2 +2372,"Despite its simplicity , [[ it ]] consistently and robustly improves the accuracy of existing highly accurate << base models >> .",2372,5 +2373,"Focused interaction of this kind is facilitated by a [[ construction-specific approach ]] to << flexible parsing >> , with specialized parsing techniques for each type of construction , and specialized ambiguity representations for each type of ambiguity that a particular construction can give rise to .",2373,3 +2374,"Focused interaction of this kind is facilitated by a [[ construction-specific approach ]] to flexible parsing , with << specialized parsing techniques >> for each type of construction , and specialized ambiguity representations for each type of ambiguity that a particular 
construction can give rise to .",2374,0 +2375,"Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing , with [[ specialized parsing techniques ]] for each type of << construction >> , and specialized ambiguity representations for each type of ambiguity that a particular construction can give rise to .",2375,3 +2376,"Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing , with [[ specialized parsing techniques ]] for each type of construction , and specialized << ambiguity representations >> for each type of ambiguity that a particular construction can give rise to .",2376,0 +2377,"Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing , with specialized parsing techniques for each type of construction , and specialized [[ ambiguity representations ]] for each type of << ambiguity >> that a particular construction can give rise to .",2377,3 +2378,"A [[ construction-specific approach ]] also aids in << task-specific language development >> by allowing a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism , thus greatly speeding the testing of changes to the language definition .",2378,3 +2379,"A proposal to deal with << French tenses >> in the framework of [[ Discourse Representation Theory ]] is presented , as it has been implemented for a fragment at the IMS .",2379,3 +2380,"A proposal to deal with French tenses in the framework of Discourse Representation Theory is presented , as [[ it ]] has been implemented for a fragment at the << IMS >> .",2380,3 +2381,<< It >> is based on the [[ theory of tenses ]] of H. Kamp and Ch .,2381,3 +2382,Instead of using [[ operators ]] to express the << meaning of the tenses >> the Reichenbachian point of view is adopted and refined such that the impact of the tenses with respect to the meaning of the text is understood as contribution to the integration of the events of a sentence in the event structure of the preceeding text .,2382,3 +2383,Thereby a << system of relevant times >> provided by the [[ preceeding text ]] and by the temporal adverbials of the sentence being processed is used .,2383,3 +2384,Thereby a system of relevant times provided by the [[ preceeding text ]] and by the << temporal adverbials >> of the sentence being processed is used .,2384,0 +2385,Thereby a << system of relevant times >> provided by the preceeding text and by the [[ temporal adverbials ]] of the sentence being processed is used .,2385,3 +2386,"This << system >> consists of one or more [[ reference times ]] and temporal perspective times , the speech time and the location time .",2386,4 +2387,"This system consists of one or more [[ reference times ]] and << temporal perspective times >> , the speech time and the location time .",2387,0 +2388,"This << system >> consists of one or more reference times and [[ temporal perspective times ]] , the speech time and the location time .",2388,4 +2389,"This system consists of one or more reference times and [[ temporal perspective times ]] , the << speech time >> and the location time .",2389,0 +2390,"This << system >> consists of one or more reference times and temporal perspective times , the [[ speech time ]] and the location time .",2390,4 +2391,"This system consists of one or more reference times and temporal perspective times , the [[ speech time ]] and the << location time >> .",2391,0 
+2392,"This << system >> consists of one or more reference times and temporal perspective times , the speech time and the [[ location time ]] .",2392,4 +2393,In opposition to the approach of Kamp and Rohrer the exact << meaning of the tenses >> is fixed by the [[ resolution component ]] and not in the process of syntactic analysis .,2393,3 +2394,In opposition to the approach of Kamp and Rohrer the exact meaning of the tenses is fixed by the [[ resolution component ]] and not in the process of << syntactic analysis >> .,2394,5 +2395,The work presented in this paper is the first step in a project which aims to cluster and summarise [[ electronic discussions ]] in the context of << help-desk applications >> .,2395,4 +2396,"In this paper , we identify [[ features ]] of << electronic discussions >> that influence the clustering process , and offer a filtering mechanism that removes undesirable influences .",2396,1 +2397,"We tested the << clustering and filtering processes >> on [[ electronic newsgroup discussions ]] , and evaluated their performance by means of two experiments : coarse-level clustering simple information retrieval .",2397,6 +2398,"We tested the << clustering and filtering processes >> on electronic newsgroup discussions , and evaluated their performance by means of two [[ experiments ]] : coarse-level clustering simple information retrieval .",2398,6 +2399,"We tested the << clustering and filtering processes >> on electronic newsgroup discussions , and evaluated their performance by means of two [[ experiments ]] : coarse-level clustering simple information retrieval .",2399,6 +2400,"We tested the clustering and filtering processes on electronic newsgroup discussions , and evaluated their performance by means of two << experiments >> : [[ coarse-level clustering ]] simple information retrieval .",2400,2 +2401,"We tested the clustering and filtering processes on electronic newsgroup discussions , and evaluated their performance by means of two << experiments >> : coarse-level clustering simple [[ information retrieval ]] .",2401,2 +2402,The paper presents a [[ method ]] for << word sense disambiguation >> based on parallel corpora .,2402,3 +2403,The paper presents a << method >> for word sense disambiguation based on [[ parallel corpora ]] .,2403,3 +2404,The [[ method ]] exploits recent advances in << word alignment >> and word clustering based on automatic extraction of translation equivalents and being supported by available aligned wordnets for the languages in the corpus .,2404,3 +2405,The [[ method ]] exploits recent advances in word alignment and << word clustering >> based on automatic extraction of translation equivalents and being supported by available aligned wordnets for the languages in the corpus .,2405,3 +2406,The method exploits recent advances in [[ word alignment ]] and << word clustering >> based on automatic extraction of translation equivalents and being supported by available aligned wordnets for the languages in the corpus .,2406,0 +2407,The << method >> exploits recent advances in word alignment and word clustering based on [[ automatic extraction of translation equivalents ]] and being supported by available aligned wordnets for the languages in the corpus .,2407,3 +2408,The << method >> exploits recent advances in word alignment and word clustering based on automatic extraction of translation equivalents and being supported by available [[ aligned wordnets ]] for the languages in the corpus .,2408,3 +2409,"The same [[ system ]] used in a validation mode 
, can be used to check and spot << alignment errors in multilingually aligned wordnets >> as BalkaNet and EuroWordNet .",2409,3 +2410,"The same system used in a validation mode , can be used to check and spot alignment errors in << multilingually aligned wordnets >> as [[ BalkaNet ]] and EuroWordNet .",2410,2 +2411,"The same system used in a validation mode , can be used to check and spot alignment errors in multilingually aligned wordnets as [[ BalkaNet ]] and << EuroWordNet >> .",2411,0 +2412,"The same system used in a validation mode , can be used to check and spot alignment errors in << multilingually aligned wordnets >> as BalkaNet and [[ EuroWordNet ]] .",2412,2 +2413,This paper investigates critical configurations for << projective reconstruction >> from multiple [[ images ]] taken by a camera moving in a straight line .,2413,3 +2414,"Projective reconstruction refers to a determination of the [[ 3D geometrical configuration ]] of a set of << 3D points and cameras >> , given only correspondences between points in the images .",2414,1 +2415,"Porting a [[ Natural Language Processing -LRB- NLP -RRB- system ]] to a << new domain >> remains one of the bottlenecks in syntactic parsing , because of the amount of effort required to fix gaps in the lexicon , and to attune the existing grammar to the idiosyncracies of the new sublanguage .",2415,3 +2416,"Porting a Natural Language Processing -LRB- NLP -RRB- system to a new domain remains one of the bottlenecks in syntactic parsing , because of the amount of effort required to fix gaps in the lexicon , and to attune the existing [[ grammar ]] to the << idiosyncracies of the new sublanguage >> .",2416,3 +2417,This paper shows how the process of fitting a lexicalized grammar to a domain can be automated to a great extent by using a << hybrid system >> that combines traditional [[ knowledge-based techniques ]] with a corpus-based approach .,2417,4 +2418,This paper shows how the process of fitting a lexicalized grammar to a domain can be automated to a great extent by using a hybrid system that combines traditional [[ knowledge-based techniques ]] with a << corpus-based approach >> .,2418,0 +2419,This paper shows how the process of fitting a lexicalized grammar to a domain can be automated to a great extent by using a << hybrid system >> that combines traditional knowledge-based techniques with a [[ corpus-based approach ]] .,2419,4 +2420,"Unification is often the appropriate [[ method ]] for expressing << relations between representations >> in the form of feature structures ; however , there are circumstances in which a different approach is desirable .",2420,3 +2421,"Unification is often the appropriate method for expressing << relations between representations >> in the form of [[ feature structures ]] ; however , there are circumstances in which a different approach is desirable .",2421,3 +2422,"Unification is often the appropriate << method >> for expressing relations between representations in the form of feature structures ; however , there are circumstances in which a different [[ approach ]] is desirable .",2422,5 +2423,"A << declarative formalism >> is presented which permits [[ direct mappings of one feature structure into another ]] , and illustrative examples are given of its application to areas of current interest .",2423,1 +2424,"To support engaging human users in robust , mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems , the DARPA Communicator program -LSB- 1 -RSB- is 
funding the development of a [[ distributed message-passing infrastructure ]] for << dialogue systems >> which all Communicator participants are using .",2424,3 +2425,We propose a novel [[ limited-memory stochastic block BFGS update ]] for << incorporating enriched curvature information in stochastic approximation methods >> .,2425,3 +2426,"In our method , the estimate of the << inverse Hessian matrix >> that is maintained by [[ it ]] , is updated at each iteration using a sketch of the Hessian , i.e. , a randomly generated compressed form of the Hessian .",2426,3 +2427,"In our method , the estimate of the inverse Hessian matrix that is maintained by << it >> , is updated at each iteration using a sketch of the [[ Hessian ]] , i.e. , a randomly generated compressed form of the Hessian .",2427,3 +2428,"In our method , the estimate of the inverse Hessian matrix that is maintained by it , is updated at each iteration using a sketch of the << Hessian >> , i.e. , a [[ randomly generated compressed form of the Hessian ]] .",2428,2 +2429,"We propose several sketching strategies , present a new [[ quasi-Newton method ]] that uses stochastic block BFGS updates combined with the variance reduction approach SVRG to compute << batch stochastic gradients >> , and prove linear convergence of the resulting method .",2429,3 +2430,"We propose several sketching strategies , present a new << quasi-Newton method >> that uses [[ stochastic block BFGS updates ]] combined with the variance reduction approach SVRG to compute batch stochastic gradients , and prove linear convergence of the resulting method .",2430,3 +2431,"We propose several sketching strategies , present a new quasi-Newton method that uses [[ stochastic block BFGS updates ]] combined with the << variance reduction approach SVRG >> to compute batch stochastic gradients , and prove linear convergence of the resulting method .",2431,0 +2432,"We propose several sketching strategies , present a new << quasi-Newton method >> that uses stochastic block BFGS updates combined with the [[ variance reduction approach SVRG ]] to compute batch stochastic gradients , and prove linear convergence of the resulting method .",2432,3 +2433,"We propose several sketching strategies , present a new quasi-Newton method that uses stochastic block BFGS updates combined with the variance reduction approach SVRG to compute batch stochastic gradients , and prove [[ linear convergence ]] of the resulting << method >> .",2433,1 +2434,Numerical tests on [[ large-scale logistic regression problems ]] reveal that our << method >> is more robust and substantially outperforms current state-of-the-art methods .,2434,6 +2435,Numerical tests on [[ large-scale logistic regression problems ]] reveal that our method is more robust and substantially outperforms current << state-of-the-art methods >> .,2435,6 +2436,Numerical tests on large-scale logistic regression problems reveal that our [[ method ]] is more robust and substantially outperforms current << state-of-the-art methods >> .,2436,5 +2437,The goal of this research is to develop a [[ spoken language system ]] that will demonstrate the usefulness of voice input for << interactive problem solving >> .,2437,3 +2438,The goal of this research is to develop a spoken language system that will demonstrate the usefulness of [[ voice input ]] for << interactive problem solving >> .,2438,3 +2439,"Combining [[ speech recognition ]] and << natural language processing >> to achieve speech understanding , the system will be demonstrated in an 
application domain relevant to the DoD .",2439,0 +2440,"Combining [[ speech recognition ]] and natural language processing to achieve << speech understanding >> , the system will be demonstrated in an application domain relevant to the DoD .",2440,3 +2441,"Combining speech recognition and [[ natural language processing ]] to achieve << speech understanding >> , the system will be demonstrated in an application domain relevant to the DoD .",2441,3 +2442,The objective of this project is to develop a << robust and high-performance speech recognition system >> using a [[ segment-based approach ]] to phonetic recognition .,2442,3 +2443,The objective of this project is to develop a robust and high-performance speech recognition system using a [[ segment-based approach ]] to << phonetic recognition >> .,2443,3 +2444,The objective of this project is to develop a << robust and high-performance speech recognition system >> using a segment-based approach to [[ phonetic recognition ]] .,2444,3 +2445,The [[ recognition system ]] will eventually be integrated with natural language processing to achieve << spoken language understanding >> .,2445,3 +2446,The << recognition system >> will eventually be integrated with [[ natural language processing ]] to achieve spoken language understanding .,2446,0 +2447,The recognition system will eventually be integrated with [[ natural language processing ]] to achieve << spoken language understanding >> .,2447,3 +2448,[[ Spelling-checkers ]] have become an integral part of most << text processing software >> .,2448,4 +2449,From different reasons among which the speed of processing prevails << they >> are usually based on [[ dictionaries of word forms ]] instead of words .,2449,3 +2450,"This << approach >> is sufficient for [[ languages ]] with little inflection such as English , but fails for highly inflective languages such as Czech , Russian , Slovak or other Slavonic languages .",2450,3 +2451,"This approach is sufficient for << languages >> with little [[ inflection ]] such as English , but fails for highly inflective languages such as Czech , Russian , Slovak or other Slavonic languages .",2451,1 +2452,"This approach is sufficient for << languages >> with little inflection such as [[ English ]] , but fails for highly inflective languages such as Czech , Russian , Slovak or other Slavonic languages .",2452,2 +2453,"This approach is sufficient for languages with little inflection such as English , but fails for << highly inflective languages >> such as [[ Czech ]] , Russian , Slovak or other Slavonic languages .",2453,2 +2454,"This approach is sufficient for languages with little inflection such as English , but fails for highly inflective languages such as [[ Czech ]] , << Russian >> , Slovak or other Slavonic languages .",2454,0 +2455,"This approach is sufficient for languages with little inflection such as English , but fails for << highly inflective languages >> such as Czech , [[ Russian ]] , Slovak or other Slavonic languages .",2455,2 +2456,"This approach is sufficient for languages with little inflection such as English , but fails for highly inflective languages such as Czech , [[ Russian ]] , << Slovak >> or other Slavonic languages .",2456,0 +2457,"This approach is sufficient for languages with little inflection such as English , but fails for << highly inflective languages >> such as Czech , Russian , [[ Slovak ]] or other Slavonic languages .",2457,2 +2458,"This approach is sufficient for languages with little inflection such as English , but fails for 
highly inflective languages such as Czech , Russian , [[ Slovak ]] or other << Slavonic languages >> .",2458,0 +2459,"This approach is sufficient for languages with little inflection such as English , but fails for << highly inflective languages >> such as Czech , Russian , Slovak or other [[ Slavonic languages ]] .",2459,2 +2460,We have developed a special [[ method ]] for describing << inflection >> for the purpose of building spelling-checkers for such languages .,2460,3 +2461,We have developed a special [[ method ]] for describing inflection for the purpose of building << spelling-checkers >> for such languages .,2461,3 +2462,We have developed a special method for describing inflection for the purpose of building [[ spelling-checkers ]] for such << languages >> .,2462,3 +2463,"The speed of the resulting program lies somewhere in the middle of the scale of existing << spelling-checkers >> for [[ English ]] and the main dictionary fits into the standard 360K floppy , whereas the number of recognized word forms exceeds 6 million -LRB- for Czech -RRB- .",2463,3 +2464,"Further , a special [[ method ]] has been developed for easy << word classification >> .",2464,3 +2465,"We present a new HMM tagger that exploits context on both sides of a word to be tagged , and evaluate << it >> in both the [[ unsupervised and supervised case ]] .",2465,6 +2466,"Along the way , we present the first comprehensive comparison of [[ unsupervised methods ]] for << part-of-speech tagging >> , noting that published results to date have not been comparable across corpora or lexicons .",2466,3 +2467,"Observing that the quality of the lexicon greatly impacts the [[ accuracy ]] that can be achieved by the << algorithms >> , we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable .",2467,6 +2468,"Finally , we show how this new << tagger >> achieves state-of-the-art results in a [[ supervised , non-training intensive framework ]] .",2468,6 +2469,We propose a family of [[ non-uniform sampling strategies ]] to provably speed up a class of << stochastic optimization algorithms >> with linear convergence including Stochastic Variance Reduced Gradient -LRB- SVRG -RRB- and Stochastic Dual Coordinate Ascent -LRB- SDCA -RRB- .,2469,3 +2470,We propose a family of non-uniform sampling strategies to provably speed up a class of << stochastic optimization algorithms >> with [[ linear convergence ]] including Stochastic Variance Reduced Gradient -LRB- SVRG -RRB- and Stochastic Dual Coordinate Ascent -LRB- SDCA -RRB- .,2470,1 +2471,We propose a family of non-uniform sampling strategies to provably speed up a class of << stochastic optimization algorithms >> with linear convergence including [[ Stochastic Variance Reduced Gradient -LRB- SVRG -RRB- ]] and Stochastic Dual Coordinate Ascent -LRB- SDCA -RRB- .,2471,2 +2472,We propose a family of non-uniform sampling strategies to provably speed up a class of stochastic optimization algorithms with linear convergence including [[ Stochastic Variance Reduced Gradient -LRB- SVRG -RRB- ]] and << Stochastic Dual Coordinate Ascent -LRB- SDCA -RRB- >> .,2472,0 +2473,We propose a family of non-uniform sampling strategies to provably speed up a class of << stochastic optimization algorithms >> with linear convergence including Stochastic Variance Reduced Gradient -LRB- SVRG -RRB- and [[ Stochastic Dual Coordinate Ascent -LRB- SDCA -RRB- ]] .,2473,2 +2474,"For a large family of << penalized empirical risk minimization problems >> , our [[ 
methods ]] exploit data dependent local smoothness of the loss functions near the optimum , while maintaining convergence guarantees .",2474,3 +2475,"For a large family of penalized empirical risk minimization problems , our << methods >> exploit [[ data dependent local smoothness ]] of the loss functions near the optimum , while maintaining convergence guarantees .",2475,3 +2476,"For a large family of penalized empirical risk minimization problems , our methods exploit [[ data dependent local smoothness ]] of the << loss functions >> near the optimum , while maintaining convergence guarantees .",2476,1 +2477,"Additionally we present << algorithms >> exploiting [[ local smoothness ]] in more aggressive ways , which perform even better in practice .",2477,3 +2478,"Statistical language modeling remains a challenging [[ task ]] , in particular for << morphologically rich languages >> .",2478,3 +2479,"Recently , new << approaches >> based on [[ factored language models ]] have been developed to address this problem .",2479,3 +2480,"These models provide principled ways of including additional << conditioning variables >> other than the preceding words , such as [[ morphological or syntactic features ]] .",2480,2 +2481,"This paper presents an [[ entirely data-driven model selection procedure ]] based on genetic search , which is shown to outperform both << knowledge-based and random selection procedures >> on two different language modeling tasks -LRB- Arabic and Turkish -RRB- .",2481,5 +2482,"This paper presents an << entirely data-driven model selection procedure >> based on [[ genetic search ]] , which is shown to outperform both knowledge-based and random selection procedures on two different language modeling tasks -LRB- Arabic and Turkish -RRB- .",2482,3 +2483,"This paper presents an entirely data-driven model selection procedure based on genetic search , which is shown to outperform both [[ knowledge-based and random selection procedures ]] on two different << language modeling tasks >> -LRB- Arabic and Turkish -RRB- .",2483,3 +2484,"This paper presents an entirely data-driven model selection procedure based on genetic search , which is shown to outperform both knowledge-based and random selection procedures on two different << language modeling tasks >> -LRB- [[ Arabic ]] and Turkish -RRB- .",2484,2 +2485,"This paper presents an entirely data-driven model selection procedure based on genetic search , which is shown to outperform both knowledge-based and random selection procedures on two different language modeling tasks -LRB- [[ Arabic ]] and << Turkish >> -RRB- .",2485,0 +2486,"This paper presents an entirely data-driven model selection procedure based on genetic search , which is shown to outperform both knowledge-based and random selection procedures on two different << language modeling tasks >> -LRB- Arabic and [[ Turkish ]] -RRB- .",2486,2 +2487,We address appropriate [[ user modeling ]] in order to generate << cooperative responses >> to each user in spoken dialogue systems .,2487,3 +2488,We address appropriate [[ user modeling ]] in order to generate cooperative responses to each user in << spoken dialogue systems >> .,2488,4 +2489,"Unlike previous [[ studies ]] that focus on user 's knowledge or typical kinds of users , the << user model >> we propose is more comprehensive .",2489,5 +2490,"Moreover , the << models >> are automatically derived by [[ decision tree learning ]] using real dialogue data collected by the system .",2490,3 +2491,"Moreover , the models are automatically 
derived by << decision tree learning >> using [[ real dialogue data ]] collected by the system .",2491,3 +2492,"Moreover , the models are automatically derived by decision tree learning using << real dialogue data >> collected by the [[ system ]] .",2492,3 +2493,[[ Dialogue strategies ]] based on the user modeling are implemented in << Kyoto city bus information system >> that has been developed at our laboratory .,2493,3 +2494,<< Dialogue strategies >> based on the [[ user modeling ]] are implemented in Kyoto city bus information system that has been developed at our laboratory .,2494,3 +2495,This paper proposes a novel [[ method ]] of << building polarity-tagged corpus >> from HTML documents .,2495,3 +2496,This paper proposes a novel << method >> of building polarity-tagged corpus from [[ HTML documents ]] .,2496,3 +2497,The characteristics of this method is that [[ it ]] is fully automatic and can be applied to arbitrary << HTML documents >> .,2497,3 +2498,The idea behind our << method >> is to utilize certain [[ layout structures ]] and linguistic pattern .,2498,3 +2499,The idea behind our method is to utilize certain [[ layout structures ]] and << linguistic pattern >> .,2499,0 +2500,The idea behind our << method >> is to utilize certain layout structures and [[ linguistic pattern ]] .,2500,3 +2501,Previous work has used [[ monolingual parallel corpora ]] to extract and generate << paraphrases >> .,2501,3 +2502,"We show that this << task >> can be done using [[ bilingual parallel corpora ]] , a much more commonly available resource .",2502,3 +2503,"Using [[ alignment techniques ]] from << phrase-based statistical machine translation >> , we show how paraphrases in one language can be identified using a phrase in another language as a pivot .",2503,3 +2504,"We define a paraphrase probability that allows [[ paraphrases ]] extracted from a << bilingual parallel corpus >> to be ranked using translation probabilities , and show how it can be refined to take contextual information into account .",2504,4 +2505,"We define a paraphrase probability that allows << paraphrases >> extracted from a bilingual parallel corpus to be ranked using [[ translation probabilities ]] , and show how it can be refined to take contextual information into account .",2505,3 +2506,"We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities , and show how << it >> can be refined to take [[ contextual information ]] into account .",2506,3 +2507,"We evaluate our << paraphrase extraction and ranking methods >> using a set of [[ manual word alignments ]] , and contrast the quality with paraphrases extracted from automatic alignments .",2507,6 +2508,"We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments , and contrast the [[ quality ]] with << paraphrases >> extracted from automatic alignments .",2508,6 +2509,"We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments , and contrast the quality with [[ paraphrases ]] extracted from << automatic alignments >> .",2509,4 +2510,"This paper proposes an automatic , essentially << domain-independent means of evaluating Spoken Language Systems -LRB- SLS -RRB- >> which combines [[ software ]] we have developed for that purpose -LRB- the '' Comparator '' -RRB- and a set of specifications for answer expressions -LRB- the '' Common Answer Specification '' , or CAS -RRB- .",2510,4 +2511,"This paper proposes an 
automatic , essentially domain-independent means of evaluating Spoken Language Systems -LRB- SLS -RRB- which combines [[ software ]] we have developed for that purpose -LRB- the '' Comparator '' -RRB- and a set of << specifications >> for answer expressions -LRB- the '' Common Answer Specification '' , or CAS -RRB- .",2511,0 +2512,"This paper proposes an automatic , essentially << domain-independent means of evaluating Spoken Language Systems -LRB- SLS -RRB- >> which combines software we have developed for that purpose -LRB- the '' Comparator '' -RRB- and a set of [[ specifications ]] for answer expressions -LRB- the '' Common Answer Specification '' , or CAS -RRB- .",2512,4 +2513,"This paper proposes an automatic , essentially domain-independent means of evaluating Spoken Language Systems -LRB- SLS -RRB- which combines software we have developed for that purpose -LRB- the '' Comparator '' -RRB- and a set of [[ specifications ]] for << answer expressions >> -LRB- the '' Common Answer Specification '' , or CAS -RRB- .",2513,3 +2514,"The [[ Common Answer Specification ]] determines the << syntax of answer expressions >> , the minimal content that must be included in them , the data to be included in and excluded from test corpora , and the procedures used by the Comparator .",2514,3 +2515,"This paper describes an [[ unsupervised learning method ]] for << associative relationships between verb phrases >> , which is important in developing reliable Q&A systems .",2515,3 +2516,"This paper describes an unsupervised learning method for [[ associative relationships between verb phrases ]] , which is important in developing reliable << Q&A systems >> .",2516,3 +2517,"Our aim is to develop an [[ unsupervised learning method ]] that can obtain such an << associative relationship >> , which we call scenario consistency .",2517,3 +2518,"The << method >> we are currently working on uses an [[ expectation-maximization -LRB- EM -RRB- based word-clustering algorithm ]] , and we have evaluated the effectiveness of this method using Japanese verb phrases .",2518,3 +2519,"The method we are currently working on uses an expectation-maximization -LRB- EM -RRB- based word-clustering algorithm , and we have evaluated the effectiveness of this << method >> using [[ Japanese verb phrases ]] .",2519,3 +2520,We describe the use of [[ text data ]] scraped from the web to augment << language models >> for Automatic Speech Recognition and Keyword Search for Low Resource Languages .,2520,3 +2521,We describe the use of << text data >> scraped from the [[ web ]] to augment language models for Automatic Speech Recognition and Keyword Search for Low Resource Languages .,2521,1 +2522,We describe the use of text data scraped from the web to augment [[ language models ]] for << Automatic Speech Recognition >> and Keyword Search for Low Resource Languages .,2522,3 +2523,We describe the use of text data scraped from the web to augment [[ language models ]] for Automatic Speech Recognition and << Keyword Search >> for Low Resource Languages .,2523,3 +2524,We describe the use of text data scraped from the web to augment language models for [[ Automatic Speech Recognition ]] and << Keyword Search >> for Low Resource Languages .,2524,0 +2525,We describe the use of text data scraped from the web to augment language models for << Automatic Speech Recognition >> and Keyword Search for [[ Low Resource Languages ]] .,2525,3 +2526,We describe the use of text data scraped from the web to augment language models for Automatic Speech 
Recognition and << Keyword Search >> for [[ Low Resource Languages ]] .,2526,3 +2527,"We scrape text from multiple << genres >> including [[ blogs ]] , online news , translated TED talks , and subtitles .",2527,2 +2528,"We scrape text from multiple genres including [[ blogs ]] , << online news >> , translated TED talks , and subtitles .",2528,0 +2529,"We scrape text from multiple << genres >> including blogs , [[ online news ]] , translated TED talks , and subtitles .",2529,2 +2530,"We scrape text from multiple genres including blogs , [[ online news ]] , << translated TED talks >> , and subtitles .",2530,0 +2531,"We scrape text from multiple << genres >> including blogs , online news , [[ translated TED talks ]] , and subtitles .",2531,2 +2532,"We scrape text from multiple genres including blogs , online news , [[ translated TED talks ]] , and << subtitles >> .",2532,0 +2533,"We scrape text from multiple << genres >> including blogs , online news , translated TED talks , and [[ subtitles ]] .",2533,2 +2534,"Using [[ linearly interpolated language models ]] , we find that blogs and movie subtitles are more relevant for << language modeling of conversational telephone speech >> and obtain large reductions in out-of-vocabulary keywords .",2534,3 +2535,"Using linearly interpolated language models , we find that [[ blogs ]] and << movie subtitles >> are more relevant for language modeling of conversational telephone speech and obtain large reductions in out-of-vocabulary keywords .",2535,0 +2536,"Using linearly interpolated language models , we find that [[ blogs ]] and movie subtitles are more relevant for << language modeling of conversational telephone speech >> and obtain large reductions in out-of-vocabulary keywords .",2536,3 +2537,"Using linearly interpolated language models , we find that blogs and [[ movie subtitles ]] are more relevant for << language modeling of conversational telephone speech >> and obtain large reductions in out-of-vocabulary keywords .",2537,3 +2538,"Furthermore , we show that the [[ web data ]] can improve Term Error Rate Performance by 3.8 % absolute and Maximum Term-Weighted Value in << Keyword Search >> by 0.0076-0 .1059 absolute points .",2538,3 +2539,"Furthermore , we show that the web data can improve [[ Term Error Rate Performance ]] by 3.8 % absolute and Maximum Term-Weighted Value in << Keyword Search >> by 0.0076-0 .1059 absolute points .",2539,6 +2540,"Furthermore , we show that the web data can improve Term Error Rate Performance by 3.8 % absolute and [[ Maximum Term-Weighted Value ]] in << Keyword Search >> by 0.0076-0 .1059 absolute points .",2540,6 +2541,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as [[ architectural modules ]] were added to support << language functionalities >> such as referring expressions , lexical choice , and revision .",2541,3 +2542,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as architectural modules were added to support << language functionalities >> such as [[ referring expressions ]] , lexical choice , and revision .",2542,2 +2543,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as architectural modules were added to support language functionalities such as [[ referring expressions ]] , << lexical choice >> , and revision .",2543,0 +2544,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as architectural modules were added to 
support << language functionalities >> such as referring expressions , [[ lexical choice ]] , and revision .",2544,2 +2545,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as architectural modules were added to support language functionalities such as referring expressions , [[ lexical choice ]] , and << revision >> .",2545,0 +2546,"Pipelined Natural Language Generation -LRB- NLG -RRB- systems have grown increasingly complex as architectural modules were added to support << language functionalities >> such as referring expressions , lexical choice , and [[ revision ]] .",2546,2 +2547,This has given rise to discussions about the relative placement of these new [[ modules ]] in the << overall architecture >> .,2547,4 +2548,"We present examples which suggest that in a pipelined NLG architecture , the best approach is to strongly tie [[ it ]] to a << revision component >> .",2548,0 +2549,"We present examples which suggest that in a << pipelined NLG architecture >> , the best approach is to strongly tie it to a [[ revision component ]] .",2549,4 +2550,"Finally , we evaluate the << approach >> in a working [[ multi-page system ]] .",2550,6 +2551,In this paper a [[ system ]] which understands and conceptualizes << scenes descriptions in natural language >> is presented .,2551,3 +2552,"Specifically , the following [[ components ]] of the << system >> are described : the syntactic analyzer , based on a Procedural Systemic Grammar , the semantic analyzer relying on the Conceptual Dependency Theory , and the dictionary .",2552,4 +2553,"Specifically , the following << components >> of the system are described : the [[ syntactic analyzer ]] , based on a Procedural Systemic Grammar , the semantic analyzer relying on the Conceptual Dependency Theory , and the dictionary .",2553,4 +2554,"Specifically , the following components of the system are described : the [[ syntactic analyzer ]] , based on a Procedural Systemic Grammar , the << semantic analyzer >> relying on the Conceptual Dependency Theory , and the dictionary .",2554,0 +2555,"Specifically , the following components of the system are described : the << syntactic analyzer >> , based on a [[ Procedural Systemic Grammar ]] , the semantic analyzer relying on the Conceptual Dependency Theory , and the dictionary .",2555,3 +2556,"Specifically , the following << components >> of the system are described : the syntactic analyzer , based on a Procedural Systemic Grammar , the [[ semantic analyzer ]] relying on the Conceptual Dependency Theory , and the dictionary .",2556,4 +2557,"Specifically , the following components of the system are described : the syntactic analyzer , based on a Procedural Systemic Grammar , the [[ semantic analyzer ]] relying on the Conceptual Dependency Theory , and the << dictionary >> .",2557,0 +2558,"Specifically , the following components of the system are described : the syntactic analyzer , based on a Procedural Systemic Grammar , the << semantic analyzer >> relying on the [[ Conceptual Dependency Theory ]] , and the dictionary .",2558,3 +2559,"Specifically , the following << components >> of the system are described : the syntactic analyzer , based on a Procedural Systemic Grammar , the semantic analyzer relying on the Conceptual Dependency Theory , and the [[ dictionary ]] .",2559,4 +2560,"The base parser produces a set of candidate parses for each input sentence , with associated probabilities that define an initial [[ ranking ]] of these << parses >> .",2560,1 +2561,"A second 
[[ model ]] then attempts to improve upon this initial << ranking >> , using additional features of the tree as evidence .",2561,3 +2562,"A second << model >> then attempts to improve upon this initial ranking , using additional [[ features ]] of the tree as evidence .",2562,3 +2563,"The strength of our approach is that it allows a tree to be represented as an arbitrary set of features , without concerns about how these features interact or overlap and without the need to define a derivation or a << generative model >> which takes these [[ features ]] into account .",2563,3 +2564,"We introduce a new [[ method ]] for the << reranking task >> , based on the boosting approach to ranking problems described in Freund et al. -LRB- 1998 -RRB- .",2564,3 +2565,"We introduce a new << method >> for the reranking task , based on the [[ boosting approach ]] to ranking problems described in Freund et al. -LRB- 1998 -RRB- .",2565,3 +2566,"We introduce a new method for the reranking task , based on the [[ boosting approach ]] to << ranking problems >> described in Freund et al. -LRB- 1998 -RRB- .",2566,3 +2567,We apply the [[ boosting method ]] to << parsing >> the Wall Street Journal treebank .,2567,3 +2568,We apply the << boosting method >> to parsing the [[ Wall Street Journal treebank ]] .,2568,3 +2569,"The << method >> combined the [[ log-likelihood ]] under a baseline model -LRB- that of Collins -LSB- 1999 -RSB- -RRB- with evidence from an additional 500,000 features over parse trees that were not included in the original model .",2569,4 +2570,"The method combined the [[ log-likelihood ]] under a << baseline model >> -LRB- that of Collins -LSB- 1999 -RSB- -RRB- with evidence from an additional 500,000 features over parse trees that were not included in the original model .",2570,0 +2571,"The new << model >> achieved 89.75 % [[ F-measure ]] , a 13 % relative decrease in F-measure error over the baseline model 's score of 88.2 % .",2571,6 +2572,"The new model achieved 89.75 % F-measure , a 13 % relative decrease in [[ F-measure ]] error over the << baseline model >> 's score of 88.2 % .",2572,6 +2573,"The new << model >> achieved 89.75 % F-measure , a 13 % relative decrease in F-measure error over the [[ baseline model ]] 's score of 88.2 % .",2573,5 +2574,The article also introduces a new [[ algorithm ]] for the << boosting approach >> which takes advantage of the sparsity of the feature space in the parsing data .,2574,3 +2575,The article also introduces a new << algorithm >> for the boosting approach which takes advantage of the [[ sparsity of the feature space ]] in the parsing data .,2575,3 +2576,The article also introduces a new algorithm for the boosting approach which takes advantage of the [[ sparsity of the feature space ]] in the << parsing data >> .,2576,1 +2577,Experiments show significant efficiency gains for the new [[ algorithm ]] over the obvious implementation of the << boosting approach >> .,2577,5 +2578,We argue that the method is an appealing alternative - in terms of both simplicity and efficiency - to work on [[ feature selection methods ]] within << log-linear -LRB- maximum-entropy -RRB- models >> .,2578,4 +2579,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other << NLP problems >> which are naturally framed as [[ ranking tasks ]] , for example , speech recognition , machine translation , or natural language generation .",2579,3 +2580,"Although the experiments in this article are on natural 
language parsing -LRB- NLP -RRB- , the approach should be applicable to many other << NLP problems >> which are naturally framed as ranking tasks , for example , [[ speech recognition ]] , machine translation , or natural language generation .",2580,2 +2581,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other NLP problems which are naturally framed as << ranking tasks >> , for example , [[ speech recognition ]] , machine translation , or natural language generation .",2581,2 +2582,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks , for example , [[ speech recognition ]] , << machine translation >> , or natural language generation .",2582,0 +2583,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other << NLP problems >> which are naturally framed as ranking tasks , for example , speech recognition , [[ machine translation ]] , or natural language generation .",2583,2 +2584,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other NLP problems which are naturally framed as << ranking tasks >> , for example , speech recognition , [[ machine translation ]] , or natural language generation .",2584,2 +2585,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks , for example , speech recognition , [[ machine translation ]] , or << natural language generation >> .",2585,0 +2586,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other << NLP problems >> which are naturally framed as ranking tasks , for example , speech recognition , machine translation , or [[ natural language generation ]] .",2586,2 +2587,"Although the experiments in this article are on natural language parsing -LRB- NLP -RRB- , the approach should be applicable to many other NLP problems which are naturally framed as << ranking tasks >> , for example , speech recognition , machine translation , or [[ natural language generation ]] .",2587,2 +2588,A [[ model ]] is presented to characterize the << class of languages >> obtained by adding reduplication to context-free languages .,2588,3 +2589,A model is presented to characterize the class of languages obtained by adding [[ reduplication ]] to << context-free languages >> .,2589,3 +2590,The [[ model ]] is a << pushdown automaton >> augmented with the ability to check reduplication by using the stack in a new way .,2590,2 +2591,The model is a << pushdown automaton >> augmented with the ability to check reduplication by using the [[ stack ]] in a new way .,2591,3 +2592,The model is a pushdown automaton augmented with the ability to check << reduplication >> by using the [[ stack ]] in a new way .,2592,3 +2593,The class of languages generated is shown to lie strictly between the [[ context-free languages ]] and the << indexed languages >> .,2593,0 +2594,"The [[ model ]] appears capable of accommodating the sort of << reduplications >> that have been observed to occur in natural languages , but it excludes many of the unnatural constructions that other formal models have 
permitted .",2594,3 +2595,We present an << image set classification algorithm >> based on [[ unsupervised clustering ]] of labeled training and unla-beled test data where labels are only used in the stopping criterion .,2595,3 +2596,We present an image set classification algorithm based on << unsupervised clustering >> of [[ labeled training and unla-beled test data ]] where labels are only used in the stopping criterion .,2596,3 +2597,The [[ probability distribution ]] of each class over the set of clusters is used to define a true << set based similarity measure >> .,2597,3 +2598,"In each iteration , a [[ proximity matrix ]] is efficiently recomputed to better represent the << local subspace structure >> .",2598,3 +2599,[[ Initial clusters ]] capture the << global data structure >> and finer clusters at the later stages capture the subtle class differences not visible at the global scale .,2599,3 +2600,Initial clusters capture the global data structure and [[ finer clusters ]] at the later stages capture the << subtle class differences >> not visible at the global scale .,2600,3 +2601,<< Image sets >> are compactly represented with multiple [[ Grass-mannian manifolds ]] which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm .,2601,3 +2602,Image sets are compactly represented with multiple Grass-mannian manifolds which are subsequently embedded in << Euclidean space >> with the proposed [[ spectral clustering algorithm ]] .,2602,3 +2603,We also propose an efficient [[ eigenvector solver ]] which not only reduces the computational cost of << spectral clustering >> by many folds but also improves the clustering quality and final classification results .,2603,3 +2604,We also propose an efficient eigenvector solver which not only reduces the [[ computational cost ]] of << spectral clustering >> by many folds but also improves the clustering quality and final classification results .,2604,6 +2605,We also propose an efficient eigenvector solver which not only reduces the computational cost of << spectral clustering >> by many folds but also improves the [[ clustering quality ]] and final classification results .,2605,6 +2606,We also propose an efficient eigenvector solver which not only reduces the computational cost of << spectral clustering >> by many folds but also improves the clustering quality and final [[ classification results ]] .,2606,6 +2607,This paper investigates some [[ computational problems ]] associated with << probabilistic translation models >> that have recently been adopted in the literature on machine translation .,2607,1 +2608,This paper investigates some computational problems associated with [[ probabilistic translation models ]] that have recently been adopted in the literature on << machine translation >> .,2608,3 +2609,These << models >> can be viewed as pairs of [[ probabilistic context-free grammars ]] working in a ` synchronous ' way .,2609,1 +2610,[[ Active shape models ]] are a powerful and widely used tool to interpret << complex image data >> .,2610,3 +2611,By building << models of shape variation >> they enable [[ search algorithms ]] to use a pri-ori knowledge in an efficient and gainful way .,2611,3 +2612,By building models of shape variation they enable << search algorithms >> to use a [[ pri-ori knowledge ]] in an efficient and gainful way .,2612,3 +2613,"However , due to the [[ linearity ]] of << PCA >> , non-linearities like rotations or independently moving sub-parts in the data can deteriorate the resulting 
model considerably .",2613,1 +2614,"However , due to the linearity of PCA , << non-linearities >> like [[ rotations ]] or independently moving sub-parts in the data can deteriorate the resulting model considerably .",2614,2 +2615,"Although << non-linear extensions of active shape models >> have been proposed and application specific solutions have been used , they still need a certain amount of [[ user interaction ]] during model building .",2615,3 +2616,"In particular , we propose an << algorithm >> based on the [[ minimum description length principle ]] to find an optimal subdivision of the data into sub-parts , each adequate for linear modeling .",2616,3 +2617,Which in turn leads to a better << model >> in terms of [[ modes of variations ]] .,2617,1 +2618,"The proposed << method >> is evaluated on [[ synthetic data ]] , medical images and hand contours .",2618,6 +2619,"The proposed method is evaluated on [[ synthetic data ]] , << medical images >> and hand contours .",2619,0 +2620,"The proposed << method >> is evaluated on synthetic data , [[ medical images ]] and hand contours .",2620,6 +2621,"The proposed method is evaluated on synthetic data , [[ medical images ]] and << hand contours >> .",2621,0 +2622,"The proposed << method >> is evaluated on synthetic data , medical images and [[ hand contours ]] .",2622,6 +2623,We describe a set of experiments to explore [[ statistical techniques ]] for << ranking >> and selecting the best translations in a graph of translation hypotheses .,2623,3 +2624,"In a previous paper -LRB- Carl , 2007 -RRB- we have described how the << hypotheses graph >> is generated through [[ shallow mapping ]] and permutation rules .",2624,3 +2625,"In a previous paper -LRB- Carl , 2007 -RRB- we have described how the hypotheses graph is generated through [[ shallow mapping ]] and << permutation rules >> .",2625,0 +2626,"In a previous paper -LRB- Carl , 2007 -RRB- we have described how the << hypotheses graph >> is generated through shallow mapping and [[ permutation rules ]] .",2626,3 +2627,This paper describes a number of [[ methods ]] for elaborating << statistical feature functions >> from some of the vector components .,2627,3 +2628,This paper describes a number of << methods >> for elaborating statistical feature functions from some of the [[ vector components ]] .,2628,3 +2629,The feature functions are trained off-line on different types of text and their [[ log-linear combination ]] is then used to retrieve the best M << translation paths >> in the graph .,2629,3 +2630,The feature functions are trained off-line on different types of text and their log-linear combination is then used to retrieve the best M [[ translation paths ]] in the << graph >> .,2630,4 +2631,"We compare two << language modelling toolkits >> , the [[ CMU and the SRI toolkit ]] and arrive at three results : 1 -RRB- word-lemma based feature function models produce better results than token-based models , 2 -RRB- adding a PoS-tag feature function to the word-lemma model improves the output and 3 -RRB- weights for lexical translations are suitable if the training material is similar to the texts to be translated .",2631,2 +2632,"We compare two language modelling toolkits , the CMU and the SRI toolkit and arrive at three results : 1 -RRB- [[ word-lemma based feature function models ]] produce better results than << token-based models >> , 2 -RRB- adding a PoS-tag feature function to the word-lemma model improves the output and 3 -RRB- weights for lexical translations are suitable if the training 
material is similar to the texts to be translated .",2632,5 +2633,"We compare two language modelling toolkits , the CMU and the SRI toolkit and arrive at three results : 1 -RRB- word-lemma based feature function models produce better results than token-based models , 2 -RRB- adding a [[ PoS-tag feature function ]] to the << word-lemma model >> improves the output and 3 -RRB- weights for lexical translations are suitable if the training material is similar to the texts to be translated .",2633,4 +2634,This paper presents a specialized << editor >> for a highly [[ structured dictionary ]] .,2634,3 +2635,The basic goal in building that [[ editor ]] was to provide an adequate tool to help lexicologists produce a valid and coherent << dictionary >> on the basis of a linguistic theory .,2635,3 +2636,The basic goal in building that editor was to provide an adequate tool to help lexicologists produce a valid and coherent << dictionary >> on the basis of a [[ linguistic theory ]] .,2636,3 +2637,Existing techniques extract term candidates by looking for << internal and contextual information >> associated with [[ domain specific terms ]] .,2637,1 +2638,This paper presents a novel [[ approach ]] for << term extraction >> based on delimiters which are much more stable and domain independent .,2638,3 +2639,This paper presents a novel << approach >> for term extraction based on [[ delimiters ]] which are much more stable and domain independent .,2639,3 +2640,The proposed [[ approach ]] is not as sensitive to << term frequency >> as that of previous works .,2640,5 +2641,"Consequently , the proposed approach can be applied to different domains easily and [[ it ]] is especially useful for << resource-limited domains >> .",2641,3 +2642,[[ Evaluations ]] conducted on two different domains for << Chinese term extraction >> show significant improvements over existing techniques which verifies its efficiency and domain independent nature .,2642,6 +2643,Experiments on new term extraction indicate that the proposed [[ approach ]] can also serve as an effective tool for << domain lexicon expansion >> .,2643,3 +2644,We describe a [[ method ]] for identifying << systematic patterns in translation data >> using part-of-speech tag sequences .,2644,3 +2645,We describe a << method >> for identifying systematic patterns in translation data using [[ part-of-speech tag sequences ]] .,2645,3 +2646,"We incorporate this [[ analysis ]] into a << diagnostic tool >> intended for developers of machine translation systems , and demonstrate how our application can be used by developers to explore patterns in machine translation output .",2646,4 +2647,"We incorporate this analysis into a [[ diagnostic tool ]] intended for developers of << machine translation systems >> , and demonstrate how our application can be used by developers to explore patterns in machine translation output .",2647,3 +2648,"We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems , and demonstrate how our [[ application ]] can be used by developers to explore << patterns in machine translation output >> .",2648,3 +2649,"We study the [[ number of hidden layers ]] required by a << multilayer neu-ral network >> with threshold units to compute a function f from n d to -LCB- O , I -RCB- .",2649,3 +2650,"We study the << number of hidden layers >> required by a multilayer neu-ral network with [[ threshold units ]] to compute a function f from n d to -LCB- O , I -RCB- .",2650,3 +2651,"We show that adding these 
conditions to Gib-son 's assumptions is not sufficient to ensure global computability with one hidden layer , by exhibiting a new << non-local configuration >> , the [[ `` critical cycle '' ]] , which implies that f is not computable with one hidden layer .",2651,2 +2652,"This paper presents an [[ approach ]] to estimate the << intrinsic texture properties -LRB- albedo , shading , normal -RRB- of scenes >> from multiple view acquisition under unknown illumination conditions .",2652,3 +2653,"This paper presents an approach to estimate the << intrinsic texture properties -LRB- albedo , shading , normal -RRB- of scenes >> from [[ multiple view acquisition ]] under unknown illumination conditions .",2653,3 +2654,"This paper presents an approach to estimate the intrinsic texture properties -LRB- albedo , shading , normal -RRB- of scenes from << multiple view acquisition >> under [[ unknown illumination conditions ]] .",2654,1 +2655,"Unlike previous << video relighting methods >> , the [[ approach ]] does not assume regions of uniform albedo , which makes it applicable to richly textured scenes .",2655,5 +2656,"Unlike previous video relighting methods , the approach does not assume regions of uniform albedo , which makes [[ it ]] applicable to << richly textured scenes >> .",2656,3 +2657,"We show that [[ intrinsic image methods ]] can be used to refine an << initial , low-frequency shading estimate >> based on a global lighting reconstruction from an original texture and coarse scene geometry in order to resolve the inherent global ambiguity in shading .",2657,3 +2658,"We show that intrinsic image methods can be used to refine an [[ initial , low-frequency shading estimate ]] based on a global lighting reconstruction from an original texture and coarse scene geometry in order to resolve the << inherent global ambiguity in shading >> .",2658,3 +2659,"We show that << intrinsic image methods >> can be used to refine an initial , low-frequency shading estimate based on a [[ global lighting reconstruction ]] from an original texture and coarse scene geometry in order to resolve the inherent global ambiguity in shading .",2659,3 +2660,"We show that intrinsic image methods can be used to refine an initial , low-frequency shading estimate based on a << global lighting reconstruction >> from an original [[ texture and coarse scene geometry ]] in order to resolve the inherent global ambiguity in shading .",2660,1 +2661,The [[ method ]] is applied to << relight-ing of free-viewpoint rendering >> from multiple view video capture .,2661,3 +2662,The method is applied to << relight-ing of free-viewpoint rendering >> from [[ multiple view video capture ]] .,2662,3 +2663,This demonstrates << relighting >> with [[ reproduction of fine surface detail ]] .,2663,1 +2664,"Following recent developments in the [[ automatic evaluation ]] of << machine translation >> and document summarization , we present a similar approach , implemented in a measure called POURPRE , for automatically evaluating answers to definition questions .",2664,6 +2665,"Following recent developments in the [[ automatic evaluation ]] of machine translation and << document summarization >> , we present a similar approach , implemented in a measure called POURPRE , for automatically evaluating answers to definition questions .",2665,6 +2666,"Following recent developments in the automatic evaluation of [[ machine translation ]] and << document summarization >> , we present a similar approach , implemented in a measure called POURPRE , for 
automatically evaluating answers to definition questions .",2666,0 +2667,"Following recent developments in the automatic evaluation of machine translation and document summarization , we present a similar << approach >> , implemented in a [[ measure ]] called POURPRE , for automatically evaluating answers to definition questions .",2667,3 +2668,"Following recent developments in the automatic evaluation of machine translation and document summarization , we present a similar approach , implemented in a [[ measure ]] called POURPRE , for << automatically evaluating answers to definition questions >> .",2668,3 +2669,"Experiments with the [[ TREC 2003 and TREC 2004 QA tracks ]] indicate that rankings produced by our << metric >> correlate highly with official rankings , and that POURPRE outperforms direct application of existing metrics .",2669,6 +2670,"Experiments with the [[ TREC 2003 and TREC 2004 QA tracks ]] indicate that rankings produced by our metric correlate highly with official rankings , and that << POURPRE >> outperforms direct application of existing metrics .",2670,6 +2671,"Experiments with the [[ TREC 2003 and TREC 2004 QA tracks ]] indicate that rankings produced by our metric correlate highly with official rankings , and that POURPRE outperforms direct application of existing << metrics >> .",2671,6 +2672,"Experiments with the TREC 2003 and TREC 2004 QA tracks indicate that << rankings >> produced by our [[ metric ]] correlate highly with official rankings , and that POURPRE outperforms direct application of existing metrics .",2672,3 +2673,"Experiments with the TREC 2003 and TREC 2004 QA tracks indicate that rankings produced by our metric correlate highly with official rankings , and that [[ POURPRE ]] outperforms direct application of existing << metrics >> .",2673,5 +2674,Recent advances in [[ Automatic Speech Recognition technology ]] have put the goal of naturally sounding << dialog systems >> within reach .,2674,3 +2675,"The issue of [[ system response ]] to users has been extensively studied by the << natural language generation community >> , though rarely in the context of dialog systems .",2675,4 +2676,"The issue of system response to users has been extensively studied by the [[ natural language generation community ]] , though rarely in the context of << dialog systems >> .",2676,5 +2677,"We show how research in [[ generation ]] can be adapted to << dialog systems >> , and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques .",2677,3 +2678,"We show how research in generation can be adapted to dialog systems , and how the high cost of << hand-crafting knowledge-based generation systems >> can be overcome by employing [[ machine learning techniques ]] .",2678,3 +2679,"We present a << tool >> , called ILIMP , which takes as input a [[ raw text in French ]] and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag -LSB- ANA -RSB- for anaphoric or -LSB- IMP -RSB- for impersonal or expletive .",2679,3 +2680,"This [[ tool ]] is therefore designed to distinguish between the << anaphoric occurrences of il >> , for which an anaphora resolution system has to look for an antecedent , and the expletive occurrences of this pronoun , for which it does not make sense to look for an antecedent .",2680,3 +2681,"This tool is therefore designed to distinguish between the << anaphoric occurrences of il >> , for which an [[ anaphora resolution system ]] has to 
look for an antecedent , and the expletive occurrences of this pronoun , for which it does not make sense to look for an antecedent .",2681,3 +2682,"The [[ precision rate ]] for << ILIMP >> is 97,5 % .",2682,6 +2683,"Other << tasks >> using the [[ method ]] developed for ILIMP are described briefly , as well as the use of ILIMP in a modular syntactic analysis system .",2683,3 +2684,"Other tasks using the [[ method ]] developed for << ILIMP >> are described briefly , as well as the use of ILIMP in a modular syntactic analysis system .",2684,3 +2685,"Other tasks using the method developed for ILIMP are described briefly , as well as the use of [[ ILIMP ]] in a << modular syntactic analysis system >> .",2685,3 +2686,Little is thus known about the << robustness >> of [[ speech cues ]] in the wild .,2686,1 +2687,"This study compares the effect of [[ noise ]] and << reverberation >> on depression prediction using 1 -RRB- standard mel-frequency cepstral coefficients -LRB- MFCCs -RRB- , and 2 -RRB- features designed for noise robustness , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2687,0 +2688,"This study compares the effect of [[ noise ]] and reverberation on << depression prediction >> using 1 -RRB- standard mel-frequency cepstral coefficients -LRB- MFCCs -RRB- , and 2 -RRB- features designed for noise robustness , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2688,1 +2689,"This study compares the effect of noise and [[ reverberation ]] on << depression prediction >> using 1 -RRB- standard mel-frequency cepstral coefficients -LRB- MFCCs -RRB- , and 2 -RRB- features designed for noise robustness , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2689,1 +2690,"This study compares the effect of noise and reverberation on << depression prediction >> using 1 -RRB- standard [[ mel-frequency cepstral coefficients -LRB- MFCCs -RRB- ]] , and 2 -RRB- features designed for noise robustness , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2690,3 +2691,"This study compares the effect of noise and reverberation on depression prediction using 1 -RRB- standard [[ mel-frequency cepstral coefficients -LRB- MFCCs -RRB- ]] , and 2 -RRB- << features >> designed for noise robustness , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2691,0 +2692,"This study compares the effect of noise and reverberation on depression prediction using 1 -RRB- standard mel-frequency cepstral coefficients -LRB- MFCCs -RRB- , and 2 -RRB- [[ features ]] designed for << noise robustness >> , damped oscillator cepstral coefficients -LRB- DOCCs -RRB- .",2692,3 +2693,"This study compares the effect of noise and reverberation on depression prediction using 1 -RRB- standard mel-frequency cepstral coefficients -LRB- MFCCs -RRB- , and 2 -RRB- [[ features ]] designed for noise robustness , << damped oscillator cepstral coefficients -LRB- DOCCs -RRB- >> .",2693,0 +2694,Results using [[ additive noise ]] and << reverberation >> reveal a consistent pattern of findings for multiple evaluation metrics under both matched and mismatched conditions .,2694,0 +2695,First and most notably : standard MFCC features suffer dramatically under test/train mismatch for both [[ noise ]] and << reverberation >> ; DOCC features are far more robust .,2695,0 +2696,First and most notably : standard << MFCC features >> suffer dramatically under test/train mismatch for both noise and reverberation ; [[ DOCC features ]] are far more robust .,2696,5 +2697,"Third , [[ artificial neural networks ]] tend to 
outperform << support vector regression >> .",2697,5 +2698,"Fourth , [[ spontaneous speech ]] appears to offer better robustness than << read speech >> .",2698,5 +2699,"Finally , a [[ cross-corpus -LRB- and cross-language -RRB- experiment ]] reveals better noise and reverberation robustness for << DOCCs >> than for MFCCs .",2699,6 +2700,"Finally , a [[ cross-corpus -LRB- and cross-language -RRB- experiment ]] reveals better noise and reverberation robustness for DOCCs than for << MFCCs >> .",2700,6 +2701,"Finally , a cross-corpus -LRB- and cross-language -RRB- experiment reveals better [[ noise and reverberation robustness ]] for << DOCCs >> than for MFCCs .",2701,6 +2702,"Finally , a cross-corpus -LRB- and cross-language -RRB- experiment reveals better [[ noise and reverberation robustness ]] for DOCCs than for << MFCCs >> .",2702,6 +2703,"Finally , a cross-corpus -LRB- and cross-language -RRB- experiment reveals better noise and reverberation robustness for [[ DOCCs ]] than for << MFCCs >> .",2703,5 +2704,This paper proposes [[ document oriented preference sets -LRB- DoPS -RRB- ]] for the << disambiguation of the dependency structure >> of sentences .,2704,3 +2705,<< Sentence ambiguities >> can be resolved by using [[ domain targeted preference knowledge ]] without using complicated large knowledgebases .,2705,3 +2706,Sentence ambiguities can be resolved by using [[ domain targeted preference knowledge ]] without using complicated large << knowledgebases >> .,2706,5 +2707,Implementation and empirical results are described for the the analysis of [[ dependency structures ]] of << Japanese patent claim sentences >> .,2707,1 +2708,<< Multimodal interfaces >> require effective [[ parsing ]] and understanding of utterances whose content is distributed across multiple input modes .,2708,3 +2709,Johnston 1998 presents an [[ approach ]] in which strategies for << multimodal integration >> are stated declaratively using a unification-based grammar that is used by a multidimensional chart parser to compose inputs .,2709,3 +2710,Johnston 1998 presents an approach in which strategies for << multimodal integration >> are stated declaratively using a [[ unification-based grammar ]] that is used by a multidimensional chart parser to compose inputs .,2710,3 +2711,Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a [[ unification-based grammar ]] that is used by a << multidimensional chart parser >> to compose inputs .,2711,3 +2712,"In this paper , we present an alternative [[ approach ]] in which << multimodal parsing and understanding >> are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation .",2712,3 +2713,"In this paper , we present an alternative approach in which << multimodal parsing and understanding >> are achieved using a [[ weighted finite-state device ]] which takes speech and gesture streams as inputs and outputs their joint interpretation .",2713,3 +2714,"In this paper , we present an alternative approach in which multimodal parsing and understanding are achieved using a << weighted finite-state device >> which takes [[ speech and gesture streams ]] as inputs and outputs their joint interpretation .",2714,3 +2715,"This [[ approach ]] is significantly more efficient , enables tight-coupling of multimodal understanding with speech recognition , and provides a general probabilistic framework for << multimodal ambiguity resolution >> .",2715,3 
+2716,"This approach is significantly more efficient , enables tight-coupling of << multimodal understanding >> with [[ speech recognition ]] , and provides a general probabilistic framework for multimodal ambiguity resolution .",2716,0 +2717,"Recently , we initiated a project to develop a << phonetically-based spoken language understanding system >> called [[ SUMMIT ]] .",2717,2 +2718,"In contrast to many of the past efforts that make use of << heuristic rules >> whose development requires intense [[ knowledge engineering ]] , our approach attempts to express the speech knowledge within a formal framework using well-defined mathematical tools .",2718,3 +2719,"In contrast to many of the past efforts that make use of heuristic rules whose development requires intense knowledge engineering , our [[ approach ]] attempts to express the << speech knowledge >> within a formal framework using well-defined mathematical tools .",2719,3 +2720,"In contrast to many of the past efforts that make use of heuristic rules whose development requires intense knowledge engineering , our approach attempts to express the << speech knowledge >> within a formal framework using well-defined [[ mathematical tools ]] .",2720,3 +2721,"In our system , [[ features ]] and << decision strategies >> are discovered and trained automatically , using a large body of speech data .",2721,0 +2722,"In our system , features and << decision strategies >> are discovered and trained automatically , using a large body of [[ speech data ]] .",2722,3 +2723,This paper describes an implemented << program >> that takes a [[ tagged text corpus ]] and generates a partial list of the subcategorization frames in which each verb occurs .,2723,6 +2724,We present a [[ method ]] for estimating the << relative pose of two calibrated or uncalibrated non-overlapping surveillance cameras >> from observing a moving object .,2724,3 +2725,We show how to tackle the problem of << missing point correspondences >> heavily required by [[ SfM pipelines ]] and how to go beyond this basic paradigm .,2725,3 +2726,"We relax the [[ non-linear nature ]] of the << problem >> by accepting two assumptions which surveillance scenarios offer , ie .",2726,1 +2727,By those assumptions we cast the << problem >> as a [[ Quadratic Eigenvalue Problem ]] offering an elegant way of treating nonlinear monomials and delivering a quasi closed-form solution as a reliable starting point for a further bundle adjustment .,2727,3 +2728,By those assumptions we cast the problem as a [[ Quadratic Eigenvalue Problem ]] offering an elegant way of treating << nonlinear monomials >> and delivering a quasi closed-form solution as a reliable starting point for a further bundle adjustment .,2728,3 +2729,By those assumptions we cast the problem as a [[ Quadratic Eigenvalue Problem ]] offering an elegant way of treating nonlinear monomials and delivering a << quasi closed-form solution >> as a reliable starting point for a further bundle adjustment .,2729,3 +2730,By those assumptions we cast the problem as a Quadratic Eigenvalue Problem offering an elegant way of treating nonlinear monomials and delivering a [[ quasi closed-form solution ]] as a reliable starting point for a further << bundle adjustment >> .,2730,3 +2731,We are the first to bring the [[ closed form solution ]] to such a very practical << problem >> arising in video surveillance .,2731,3 +2732,We are the first to bring the closed form solution to such a very practical << problem >> arising in [[ video surveillance ]] .,2732,1 
+2733,"In this paper , we propose a [[ human action recognition system ]] suitable for << embedded computer vision applications >> in security systems , human-computer interaction and intelligent environments .",2733,3 +2734,"In this paper , we propose a human action recognition system suitable for [[ embedded computer vision applications ]] in << security systems >> , human-computer interaction and intelligent environments .",2734,3 +2735,"In this paper , we propose a human action recognition system suitable for [[ embedded computer vision applications ]] in security systems , << human-computer interaction >> and intelligent environments .",2735,3 +2736,"In this paper , we propose a human action recognition system suitable for [[ embedded computer vision applications ]] in security systems , human-computer interaction and << intelligent environments >> .",2736,3 +2737,"In this paper , we propose a human action recognition system suitable for embedded computer vision applications in [[ security systems ]] , << human-computer interaction >> and intelligent environments .",2737,0 +2738,"In this paper , we propose a human action recognition system suitable for embedded computer vision applications in security systems , [[ human-computer interaction ]] and << intelligent environments >> .",2738,0 +2739,Our [[ system ]] is suitable for << embedded computer vision application >> based on three reasons .,2739,3 +2740,"Firstly , the << system >> was based on a [[ linear Support Vector Machine -LRB- SVM -RRB- classifier ]] where classification progress can be implemented easily and quickly in embedded hardware .",2740,3 +2741,"Firstly , the system was based on a linear Support Vector Machine -LRB- SVM -RRB- classifier where << classification progress >> can be implemented easily and quickly in [[ embedded hardware ]] .",2741,3 +2742,"Secondly , we use << compacted motion features >> easily obtained from [[ videos ]] .",2742,3 +2743,We address the limitations of the well known Motion History Image -LRB- MHI -RRB- and propose a new [[ Hierarchical Motion History Histogram -LRB- HMHH -RRB- feature ]] to represent the << motion information >> .,2743,3 +2744,"[[ HMHH ]] not only provides << rich motion information >> , but also remains computationally inexpensive .",2744,3 +2745,"Finally , we combine [[ MHI ]] and << HMHH >> together and extract a low dimension feature vector to be used in the SVM classifiers .",2745,0 +2746,"Finally , we combine [[ MHI ]] and HMHH together and extract a << low dimension feature vector >> to be used in the SVM classifiers .",2746,3 +2747,"Finally , we combine MHI and [[ HMHH ]] together and extract a << low dimension feature vector >> to be used in the SVM classifiers .",2747,3 +2748,"Finally , we combine MHI and HMHH together and extract a [[ low dimension feature vector ]] to be used in the << SVM classifiers >> .",2748,3 +2749,Experimental results show that our << system >> achieves significant improvement on the [[ recognition ]] performance .,2749,6 +2750,In this paper I will argue for a << model of grammatical processing >> that is based on [[ uniform processing ]] and knowledge sources .,2750,3 +2751,In this paper I will argue for a << model of grammatical processing >> that is based on uniform processing and [[ knowledge sources ]] .,2751,3 +2752,In this paper I will argue for a model of grammatical processing that is based on << uniform processing >> and [[ knowledge sources ]] .,2752,0 +2753,The main feature of this model is to view [[ parsing ]] and << 
generation >> as two strongly interleaved tasks performed by a single parametrized deduction process .,2753,0 +2754,The main feature of this model is to view [[ parsing ]] and generation as two strongly interleaved << tasks >> performed by a single parametrized deduction process .,2754,2 +2755,The main feature of this model is to view parsing and [[ generation ]] as two strongly interleaved << tasks >> performed by a single parametrized deduction process .,2755,2 +2756,The main feature of this model is to view parsing and generation as two strongly interleaved << tasks >> performed by a single [[ parametrized deduction process ]] .,2756,3 +2757,[[ Link detection ]] has been regarded as a core technology for the << Topic Detection and Tracking tasks of new event detection >> .,2757,3 +2758,In this paper we formulate [[ story link detection ]] and << new event detection >> as information retrieval task and hypothesize on the impact of precision and recall on both systems .,2758,0 +2759,In this paper we formulate [[ story link detection ]] and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both << systems >> .,2759,2 +2760,In this paper we formulate story link detection and [[ new event detection ]] as information retrieval task and hypothesize on the impact of precision and recall on both << systems >> .,2760,2 +2761,In this paper we formulate << story link detection >> and new event detection as [[ information retrieval task ]] and hypothesize on the impact of precision and recall on both systems .,2761,3 +2762,In this paper we formulate story link detection and << new event detection >> as [[ information retrieval task ]] and hypothesize on the impact of precision and recall on both systems .,2762,3 +2763,In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of [[ precision ]] and << recall >> on both systems .,2763,0 +2764,In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of [[ precision ]] and recall on both << systems >> .,2764,6 +2765,In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and [[ recall ]] on both << systems >> .,2765,6 +2766,"Motivated by these arguments , we introduce a number of new << performance enhancing techniques >> including [[ part of speech tagging ]] , new similarity measures and expanded stop lists .",2766,4 +2767,"Motivated by these arguments , we introduce a number of new performance enhancing techniques including [[ part of speech tagging ]] , new << similarity measures >> and expanded stop lists .",2767,0 +2768,"Motivated by these arguments , we introduce a number of new << performance enhancing techniques >> including part of speech tagging , new [[ similarity measures ]] and expanded stop lists .",2768,4 +2769,"Motivated by these arguments , we introduce a number of new performance enhancing techniques including part of speech tagging , new [[ similarity measures ]] and << expanded stop lists >> .",2769,0 +2770,"Motivated by these arguments , we introduce a number of new << performance enhancing techniques >> including part of speech tagging , new similarity measures and [[ expanded stop lists ]] .",2770,4 +2771,We attempt to understand << visual classification >> in humans using both [[ psy-chophysical and machine learning techniques ]] .,2771,3 
+2772,[[ Frontal views of human faces ]] were used for a << gender classification task >> .,2772,3 +2773,Several [[ hyperplane learning algorithms ]] were used on the same << classification task >> using the Principal Components of the texture and flowfield representation of the faces .,2773,3 +2774,Several << hyperplane learning algorithms >> were used on the same classification task using the [[ Principal Components of the texture ]] and flowfield representation of the faces .,2774,3 +2775,Several << hyperplane learning algorithms >> were used on the same classification task using the Principal Components of the texture and [[ flowfield representation of the faces ]] .,2775,3 +2776,Several hyperplane learning algorithms were used on the same classification task using the << Principal Components of the texture >> and [[ flowfield representation of the faces ]] .,2776,0 +2777,"The << classification >> performance of the [[ learning algorithms ]] was estimated using the face database with the true gender of the faces as labels , and also with the gender estimated by the subjects .",2777,3 +2778,"The classification performance of the << learning algorithms >> was estimated using the [[ face database ]] with the true gender of the faces as labels , and also with the gender estimated by the subjects .",2778,6 +2779,Our results suggest that << human classification >> can be modeled by some [[ hyperplane algorithms ]] in the feature space we used .,2779,3 +2780,Our results suggest that human classification can be modeled by some << hyperplane algorithms >> in the [[ feature space ]] we used .,2780,1 +2781,"For classification , the brain needs more processing for stimuli close to that [[ hyperplane ]] than for << those >> further away .",2781,5 +2782,"In this paper , we present a [[ corpus-based supervised word sense disambiguation -LRB- WSD -RRB- system ]] for << Dutch >> which combines statistical classification -LRB- maximum entropy -RRB- with linguistic information .",2782,3 +2783,"In this paper , we present a << corpus-based supervised word sense disambiguation -LRB- WSD -RRB- system >> for Dutch which combines [[ statistical classification ]] -LRB- maximum entropy -RRB- with linguistic information .",2783,4 +2784,"In this paper , we present a << corpus-based supervised word sense disambiguation -LRB- WSD -RRB- system >> for Dutch which combines statistical classification -LRB- [[ maximum entropy ]] -RRB- with linguistic information .",2784,4 +2785,"In this paper , we present a << corpus-based supervised word sense disambiguation -LRB- WSD -RRB- system >> for Dutch which combines statistical classification -LRB- maximum entropy -RRB- with [[ linguistic information ]] .",2785,4 +2786,"In this paper , we present a corpus-based supervised word sense disambiguation -LRB- WSD -RRB- system for Dutch which combines statistical classification -LRB- << maximum entropy >> -RRB- with [[ linguistic information ]] .",2786,0 +2787,"Instead of building individual [[ classifiers ]] per ambiguous wordform , we introduce a << lemma-based approach >> .",2787,5 +2788,"Instead of building individual << classifiers >> per [[ ambiguous wordform ]] , we introduce a lemma-based approach .",2788,3 +2789,"The advantage of this novel method is that it clusters all [[ inflected forms ]] of an << ambiguous word >> in one classifier , therefore augmenting the training material available to the algorithm .",2789,1 +2790,"Testing the [[ lemma-based model ]] on the Dutch Senseval-2 test data , we achieve a significant 
increase in accuracy over the << wordform model >> .",2790,5 +2791,"Testing the << lemma-based model >> on the [[ Dutch Senseval-2 test data ]] , we achieve a significant increase in accuracy over the wordform model .",2791,6 +2792,"We propose an exact , general and efficient [[ coarse-to-fine energy minimization strategy ]] for << semantic video segmenta-tion >> .",2792,3 +2793,Our << strategy >> is based on a [[ hierarchical abstraction of the supervoxel graph ]] that allows us to minimize an energy defined at the finest level of the hierarchy by minimizing a series of simpler energies defined over coarser graphs .,2793,3 +2794,"It is general , i.e. , [[ it ]] can be used to minimize any << energy function >> -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing energy minimization algorithm -LRB- e.g. , graph cuts and belief propagation -RRB- .",2794,3 +2795,"It is general , i.e. , [[ it ]] can be used to minimize any energy function -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing << energy minimization algorithm >> -LRB- e.g. , graph cuts and belief propagation -RRB- .",2795,0 +2796,"It is general , i.e. , it can be used to minimize any << energy function >> -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing [[ energy minimization algorithm ]] -LRB- e.g. , graph cuts and belief propagation -RRB- .",2796,3 +2797,"It is general , i.e. , it can be used to minimize any energy function -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing << energy minimization algorithm >> -LRB- e.g. , [[ graph cuts ]] and belief propagation -RRB- .",2797,2 +2798,"It is general , i.e. , it can be used to minimize any energy function -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing energy minimization algorithm -LRB- e.g. , [[ graph cuts ]] and << belief propagation >> -RRB- .",2798,0 +2799,"It is general , i.e. , it can be used to minimize any energy function -LRB- e.g. , unary , pairwise , and higher-order terms -RRB- with any existing << energy minimization algorithm >> -LRB- e.g. 
, graph cuts and [[ belief propagation ]] -RRB- .",2799,2 +2800,[[ It ]] also gives significant speedups in << inference >> for several datasets with varying degrees of spatio-temporal continuity .,2800,3 +2801,<< It >> also gives significant speedups in inference for several [[ datasets ]] with varying degrees of spatio-temporal continuity .,2801,6 +2802,It also gives significant speedups in inference for several << datasets >> with varying degrees of [[ spatio-temporal continuity ]] .,2802,1 +2803,"We also discuss the strengths and weaknesses of our [[ strategy ]] relative to existing << hierarchical approaches >> , and the kinds of image and video data that provide the best speedups .",2803,5 +2804,"Motivated by the success of [[ ensemble methods ]] in << machine learning >> and other areas of natural language processing , we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora .",2804,3 +2805,"Motivated by the success of [[ ensemble methods ]] in machine learning and other areas of << natural language processing >> , we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora .",2805,3 +2806,"Motivated by the success of ensemble methods in machine learning and other areas of natural language processing , we developed a [[ multi-strategy and multi-source approach ]] to << question answering >> which is based on combining the results from different answering agents searching for answers in multiple corpora .",2806,3 +2807,"The << answering agents >> adopt fundamentally different [[ strategies ]] , one utilizing primarily knowledge-based mechanisms and the other adopting statistical techniques .",2807,3 +2808,"The answering agents adopt fundamentally different << strategies >> , [[ one ]] utilizing primarily knowledge-based mechanisms and the other adopting statistical techniques .",2808,2 +2809,"The answering agents adopt fundamentally different strategies , << one >> utilizing primarily [[ knowledge-based mechanisms ]] and the other adopting statistical techniques .",2809,3 +2810,"The answering agents adopt fundamentally different << strategies >> , one utilizing primarily knowledge-based mechanisms and the [[ other ]] adopting statistical techniques .",2810,2 +2811,"The answering agents adopt fundamentally different strategies , one utilizing primarily knowledge-based mechanisms and the << other >> adopting [[ statistical techniques ]] .",2811,3 +2812,"We present our << multi-level answer resolution algorithm >> that combines results from the [[ answering agents ]] at the question , passage , and/or answer levels .",2812,3 +2813,"Experiments evaluating the effectiveness of our [[ answer resolution algorithm ]] show a 35.0 % relative improvement over our << baseline system >> in the number of questions correctly answered , and a 32.8 % improvement according to the average precision metric .",2813,5 +2814,"Experiments evaluating the effectiveness of our << answer resolution algorithm >> show a 35.0 % relative improvement over our baseline system in the number of questions correctly answered , and a 32.8 % improvement according to the [[ average precision metric ]] .",2814,6 +2815,"Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0 % relative improvement over our << baseline system >> in 
the number of questions correctly answered , and a 32.8 % improvement according to the [[ average precision metric ]] .",2815,6 +2816,[[ Word Identification ]] has been an important and active issue in << Chinese Natural Language Processing >> .,2816,2 +2817,"In this paper , a new [[ mechanism ]] , based on the concept of sublanguage , is proposed for identifying << unknown words >> , especially personal names , in Chinese newspapers .",2817,3 +2818,"In this paper , a new << mechanism >> , based on the concept of [[ sublanguage ]] , is proposed for identifying unknown words , especially personal names , in Chinese newspapers .",2818,3 +2819,"In this paper , a new mechanism , based on the concept of sublanguage , is proposed for identifying << unknown words >> , especially [[ personal names ]] , in Chinese newspapers .",2819,2 +2820,"In this paper , a new << mechanism >> , based on the concept of sublanguage , is proposed for identifying unknown words , especially personal names , in [[ Chinese newspapers ]] .",2820,3 +2821,"The proposed << mechanism >> includes [[ title-driven name recognition ]] , adaptive dynamic word formation , identification of 2-character and 3-character Chinese names without title .",2821,4 +2822,"The proposed mechanism includes [[ title-driven name recognition ]] , << adaptive dynamic word formation >> , identification of 2-character and 3-character Chinese names without title .",2822,0 +2823,"The proposed << mechanism >> includes title-driven name recognition , [[ adaptive dynamic word formation ]] , identification of 2-character and 3-character Chinese names without title .",2823,4 +2824,"The proposed mechanism includes title-driven name recognition , [[ adaptive dynamic word formation ]] , << identification of 2-character and 3-character Chinese names without title >> .",2824,0 +2825,"The proposed << mechanism >> includes title-driven name recognition , adaptive dynamic word formation , [[ identification of 2-character and 3-character Chinese names without title ]] .",2825,4 +2826,"This report describes [[ Paul ]] , a << computer text generation system >> designed to create cohesive text through the use of lexical substitutions .",2826,2 +2827,"This report describes Paul , a [[ computer text generation system ]] designed to create << cohesive text >> through the use of lexical substitutions .",2827,3 +2828,"This report describes << Paul >> , a computer text generation system designed to create cohesive text through the use of [[ lexical substitutions ]] .",2828,3 +2829,"Specifically , this system is designed to deterministically choose between [[ pronominalization ]] , << superordinate substitution >> , and definite noun phrase reiteration .",2829,5 +2830,"Specifically , this system is designed to deterministically choose between pronominalization , [[ superordinate substitution ]] , and << definite noun phrase reiteration >> .",2830,5 +2831,The [[ system ]] identifies a strength of << antecedence recovery >> for each of the lexical substitutions .,2831,3 +2832,The system identifies a strength of [[ antecedence recovery ]] for each of the << lexical substitutions >> .,2832,3 +2833,"It describes the automated training and evaluation of an Optimal Position Policy , a [[ method ]] of locating the likely << positions of topic-bearing sentences >> based on genre-specific regularities of discourse structure .",2833,3 +2834,"It describes the automated training and evaluation of an Optimal Position Policy , a << method >> of locating the likely positions of 
topic-bearing sentences based on [[ genre-specific regularities of discourse structure ]] .",2834,3 +2835,"This [[ method ]] can be used in << applications >> such as information retrieval , routing , and text summarization .",2835,3 +2836,"This method can be used in << applications >> such as [[ information retrieval ]] , routing , and text summarization .",2836,2 +2837,"This method can be used in applications such as [[ information retrieval ]] , << routing >> , and text summarization .",2837,0 +2838,"This method can be used in << applications >> such as information retrieval , [[ routing ]] , and text summarization .",2838,2 +2839,"This method can be used in applications such as information retrieval , [[ routing ]] , and << text summarization >> .",2839,0 +2840,"This method can be used in << applications >> such as information retrieval , routing , and [[ text summarization ]] .",2840,2 +2841,We describe a general [[ framework ]] for << online multiclass learning >> based on the notion of hypothesis sharing .,2841,3 +2842,We describe a general << framework >> for online multiclass learning based on the [[ notion of hypothesis sharing ]] .,2842,3 +2843,We generalize the [[ multiclass Perceptron ]] to our << framework >> and derive a unifying mistake bound analysis .,2843,3 +2844,We demonstrate the merits of our approach by comparing [[ it ]] to previous << methods >> on both synthetic and natural datasets .,2844,5 +2845,We demonstrate the merits of our approach by comparing << it >> to previous methods on both [[ synthetic and natural datasets ]] .,2845,6 +2846,We demonstrate the merits of our approach by comparing it to previous << methods >> on both [[ synthetic and natural datasets ]] .,2846,6 +2847,We describe a set of [[ supervised machine learning ]] experiments centering on the construction of << statistical models of WH-questions >> .,2847,3 +2848,"These << models >> , which are built from [[ shallow linguistic features of questions ]] , are employed to predict target variables which represent a user 's informational goals .",2848,3 +2849,"We argue in favor of the the use of [[ labeled directed graph ]] to represent various types of << linguistic structures >> , and illustrate how this allows one to view NLP tasks as graph transformations .",2849,3 +2850,"We argue in favor of the the use of [[ labeled directed graph ]] to represent various types of linguistic structures , and illustrate how this allows one to view << NLP tasks >> as graph transformations .",2850,3 +2851,"We argue in favor of the the use of labeled directed graph to represent various types of linguistic structures , and illustrate how [[ this ]] allows one to view << NLP tasks >> as graph transformations .",2851,3 +2852,We present a general [[ method ]] for learning such << transformations >> from an annotated corpus and describe experiments with two applications of the method : identification of non-local depenencies -LRB- using Penn Treebank data -RRB- and semantic role labeling -LRB- using Proposition Bank data -RRB- .,2852,3 +2853,We present a general << method >> for learning such transformations from an [[ annotated corpus ]] and describe experiments with two applications of the method : identification of non-local depenencies -LRB- using Penn Treebank data -RRB- and semantic role labeling -LRB- using Proposition Bank data -RRB- .,2853,3 +2854,We present a general method for learning such transformations from an annotated corpus and describe experiments with two << applications >> of the [[ method ]] : 
identification of non-local depenencies -LRB- using Penn Treebank data -RRB- and semantic role labeling -LRB- using Proposition Bank data -RRB- .,2854,3 +2855,We present a general method for learning such transformations from an annotated corpus and describe experiments with two << applications >> of the method : [[ identification of non-local depenencies ]] -LRB- using Penn Treebank data -RRB- and semantic role labeling -LRB- using Proposition Bank data -RRB- .,2855,2 +2856,We present a general method for learning such transformations from an annotated corpus and describe experiments with two applications of the method : << identification of non-local depenencies >> -LRB- using [[ Penn Treebank data ]] -RRB- and semantic role labeling -LRB- using Proposition Bank data -RRB- .,2856,3 +2857,We present a general method for learning such transformations from an annotated corpus and describe experiments with two << applications >> of the method : identification of non-local depenencies -LRB- using Penn Treebank data -RRB- and [[ semantic role labeling ]] -LRB- using Proposition Bank data -RRB- .,2857,2 +2858,We present a general method for learning such transformations from an annotated corpus and describe experiments with two applications of the method : identification of non-local depenencies -LRB- using Penn Treebank data -RRB- and << semantic role labeling >> -LRB- using [[ Proposition Bank data ]] -RRB- .,2858,3 +2859,"We describe a generative probabilistic model of natural language , which we call HBG , that takes advantage of detailed [[ linguistic information ]] to resolve << ambiguity >> .",2859,3 +2860,"[[ HBG ]] incorporates lexical , syntactic , semantic , and structural information from the parse tree into the << disambiguation process >> in a novel way .",2860,3 +2861,"<< HBG >> incorporates [[ lexical , syntactic , semantic , and structural information ]] from the parse tree into the disambiguation process in a novel way .",2861,3 +2862,"We use a [[ corpus of bracketed sentences ]] , called a Treebank , in combination with << decision tree building >> to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence .",2862,0 +2863,"We use a [[ corpus of bracketed sentences ]] , called a Treebank , in combination with decision tree building to tease out the relevant aspects of a << parse tree >> that will determine the correct parse of a sentence .",2863,3 +2864,"We use a corpus of bracketed sentences , called a Treebank , in combination with [[ decision tree building ]] to tease out the relevant aspects of a << parse tree >> that will determine the correct parse of a sentence .",2864,3 +2865,"We use a corpus of bracketed sentences , called a Treebank , in combination with decision tree building to tease out the relevant aspects of a [[ parse tree ]] that will determine the correct << parse >> of a sentence .",2865,3 +2866,This stands in contrast to the usual approach of further [[ grammar tailoring ]] via the usual linguistic introspection in the hope of generating the correct << parse >> .,2866,3 +2867,This stands in contrast to the usual approach of further << grammar tailoring >> via the usual [[ linguistic introspection ]] in the hope of generating the correct parse .,2867,3 +2868,"In head-to-head tests against one of the best existing << robust probabilistic parsing models >> , which we call [[ P-CFG ]] , the HBG model significantly outperforms P-CFG , increasing the parsing accuracy rate from 60 % to 75 % , a 37 % reduction in error 
.",2868,2 +2869,"In head-to-head tests against one of the best existing robust probabilistic parsing models , which we call P-CFG , the [[ HBG model ]] significantly outperforms << P-CFG >> , increasing the parsing accuracy rate from 60 % to 75 % , a 37 % reduction in error .",2869,5 +2870,"In head-to-head tests against one of the best existing robust probabilistic parsing models , which we call P-CFG , the << HBG model >> significantly outperforms P-CFG , increasing the [[ parsing accuracy rate ]] from 60 % to 75 % , a 37 % reduction in error .",2870,6 +2871,The framework of the << analysis >> is [[ model-theoretic semantics ]] .,2871,3 +2872,This paper addresses the issue of << word-sense ambiguity >> in extraction from [[ machine-readable resources ]] for the construction of large-scale knowledge sources .,2872,3 +2873,This paper addresses the issue of word-sense ambiguity in extraction from [[ machine-readable resources ]] for the << construction of large-scale knowledge sources >> .,2873,3 +2874,"We describe two experiments : one which ignored word-sense distinctions , resulting in 6.3 % [[ accuracy ]] for << semantic classification >> of verbs based on -LRB- Levin , 1993 -RRB- ; and one which exploited word-sense distinctions , resulting in 97.9 % accuracy .",2874,6 +2875,"These experiments were dual purpose : -LRB- 1 -RRB- to validate the central thesis of the work of -LRB- Levin , 1993 -RRB- , i.e. , that [[ verb semantics ]] and << syntactic behavior >> are predictably related ; -LRB- 2 -RRB- to demonstrate that a 15-fold improvement can be achieved in deriving semantic information from syntactic cues if we first divide the syntactic cues into distinct groupings that correlate with different word senses .",2875,0 +2876,"These experiments were dual purpose : -LRB- 1 -RRB- to validate the central thesis of the work of -LRB- Levin , 1993 -RRB- , i.e. 
, that verb semantics and syntactic behavior are predictably related ; -LRB- 2 -RRB- to demonstrate that a 15-fold improvement can be achieved in deriving << semantic information >> from [[ syntactic cues ]] if we first divide the syntactic cues into distinct groupings that correlate with different word senses .",2876,3 +2877,"Finally , we show that we can provide effective acquisition [[ techniques ]] for novel << word senses >> using a combination of online sources .",2877,3 +2878,"Finally , we show that we can provide effective acquisition << techniques >> for novel word senses using a combination of [[ online sources ]] .",2878,3 +2879,The [[ TIPSTER Architecture ]] has been designed to enable a variety of different << text applications >> to use a set of common text processing modules .,2879,3 +2880,The TIPSTER Architecture has been designed to enable a variety of different << text applications >> to use a set of [[ common text processing modules ]] .,2880,3 +2881,"Since [[ user interfaces ]] work best when customized for particular << applications >> , it is appropriator that no particular user interface styles or conventions are described in the TIPSTER Architecture specification .",2881,3 +2882,"However , the Computing Research Laboratory -LRB- CRL -RRB- has constructed several << TIPSTER applications >> that use a common set of configurable [[ Graphical User Interface -LRB- GUI -RRB- functions ]] .",2882,3 +2883,These << GUIs >> were constructed using [[ CRL 's TIPSTER User Interface Toolkit -LRB- TUIT -RRB- ]] .,2883,3 +2884,[[ TUIT ]] is a << software library >> that can be used to construct multilingual TIPSTER user interfaces for a set of common user tasks .,2884,2 +2885,[[ TUIT ]] is a software library that can be used to construct << multilingual TIPSTER user interfaces >> for a set of common user tasks .,2885,3 +2886,CRL developed [[ TUIT ]] to support their work to integrate << TIPSTER modules >> for the 6 and 12 month TIPSTER II demonstrations as well as their Oleada and Temple demonstration projects .,2886,3 +2887,"While such decoding is an essential underpinning , much recent work suggests that natural language interfaces will never appear cooperative or graceful unless << they >> also incorporate numerous [[ non-literal aspects of communication ]] , such as robust communication procedures .",2887,4 +2888,"While such decoding is an essential underpinning , much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous << non-literal aspects of communication >> , such as [[ robust communication procedures ]] .",2888,2 +2889,"This paper defends that view , but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication ; that the new technology of powerful << personal computers >> with integral [[ graphics displays ]] offers techniques superior to those of humans for these aspects , while still satisfying human communication needs .",2889,4 +2890,This paper proposes a framework in which [[ Lagrangian Particle Dynamics ]] is used for the << segmentation of high density crowd flows >> and detection of flow instabilities .,2890,3 +2891,This paper proposes a framework in which [[ Lagrangian Particle Dynamics ]] is used for the segmentation of high density crowd flows and << detection of flow instabilities >> .,2891,3 +2892,This paper proposes a framework in which Lagrangian Particle Dynamics is used for the [[ segmentation of 
high density crowd flows ]] and << detection of flow instabilities >> .,2892,0 +2893,"For this purpose , a << flow field >> generated by a [[ moving crowd ]] is treated as an aperiodic dynamical system .",2893,3 +2894,"For this purpose , a << flow field >> generated by a moving crowd is treated as an [[ aperiodic dynamical system ]] .",2894,3 +2895,"A [[ grid of particles ]] is overlaid on the << flow field >> , and is advected using a numerical integration scheme .",2895,3 +2896,"A << grid of particles >> is overlaid on the flow field , and is advected using a [[ numerical integration scheme ]] .",2896,3 +2897,"The << evolution of particles >> through the flow is tracked using a [[ Flow Map ]] , whose spatial gradients are subsequently used to setup a Cauchy Green Deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration .",2897,3 +2898,"The evolution of particles through the flow is tracked using a Flow Map , whose [[ spatial gradients ]] are subsequently used to setup a << Cauchy Green Deformation tensor >> for quantifying the amount by which the neighboring particles have diverged over the length of the integration .",2898,3 +2899,"The [[ maximum eigenvalue ]] of the << tensor >> is used to construct a Finite Time Lyapunov Exponent -LRB- FTLE -RRB- field , which reveals the Lagrangian Coherent Structures -LRB- LCS -RRB- present in the underlying flow .",2899,1 +2900,"The [[ maximum eigenvalue ]] of the tensor is used to construct a << Finite Time Lyapunov Exponent -LRB- FTLE -RRB- field >> , which reveals the Lagrangian Coherent Structures -LRB- LCS -RRB- present in the underlying flow .",2900,3 +2901,"The maximum eigenvalue of the tensor is used to construct a [[ Finite Time Lyapunov Exponent -LRB- FTLE -RRB- field ]] , which reveals the << Lagrangian Coherent Structures -LRB- LCS -RRB- >> present in the underlying flow .",2901,3 +2902,The [[ LCS ]] divide flow into regions of qualitatively different dynamics and are used to locate << boundaries of the flow segments >> in a normalized cuts framework .,2902,3 +2903,The LCS divide flow into regions of qualitatively different dynamics and are used to locate << boundaries of the flow segments >> in a [[ normalized cuts framework ]] .,2903,3 +2904,The experiments are conducted on a challenging set of videos taken from [[ Google Video ]] and a << National Geographic documentary >> .,2904,0 +2905,"Over the last decade , a variety of SMT algorithms have been built and empirically tested whereas little is known about the [[ computational complexity ]] of some of the fundamental << problems >> of SMT .",2905,6 +2906,"Over the last decade , a variety of SMT algorithms have been built and empirically tested whereas little is known about the computational complexity of some of the fundamental [[ problems ]] of << SMT >> .",2906,4 +2907,Our work aims at providing useful insights into the the [[ computational complexity ]] of those << problems >> .,2907,6 +2908,"We prove that while [[ IBM Models 1-2 ]] are conceptually and computationally simple , computations involving the higher -LRB- and more useful -RRB- << models >> are hard .",2908,5 +2909,"Since it is unlikely that there exists a [[ polynomial time solution ]] for any of these << hard problems >> -LRB- unless P = NP and P #P = P -RRB- , our results highlight and justify the need for developing polynomial time approximations for these computations .",2909,3 +2910,"Since it is unlikely that there exists a polynomial time 
solution for any of these hard problems -LRB- unless P = NP and P #P = P -RRB- , our results highlight and justify the need for developing [[ polynomial time approximations ]] for these << computations >> .",2910,3 +2911,Most state-of-the-art [[ evaluation measures ]] for << machine translation >> assign high costs to movements of word blocks .,2911,6 +2912,"In this paper , we will present a new [[ evaluation measure ]] which explicitly models << block reordering >> as an edit operation .",2912,3 +2913,"In this paper , we will present a new evaluation measure which explicitly models << block reordering >> as an [[ edit operation ]] .",2913,3 +2914,Our << measure >> can be exactly calculated in [[ quadratic time ]] .,2914,1 +2915,"Furthermore , we will show how some << evaluation measures >> can be improved by the introduction of [[ word-dependent substitution costs ]] .",2915,3 +2916,The correlation of the new [[ measure ]] with << human judgment >> has been investigated systematically on two different language pairs .,2916,5 +2917,The experimental results will show that [[ it ]] significantly outperforms state-of-the-art << approaches >> in sentence-level correlation .,2917,5 +2918,The experimental results will show that << it >> significantly outperforms state-of-the-art approaches in [[ sentence-level correlation ]] .,2918,6 +2919,The experimental results will show that it significantly outperforms state-of-the-art << approaches >> in [[ sentence-level correlation ]] .,2919,6 +2920,Results from experiments with word dependent substitution costs will demonstrate an additional increase of correlation between [[ automatic evaluation measures ]] and << human judgment >> .,2920,0 +2921,The [[ Rete and Treat algorithms ]] are considered the most efficient << implementation techniques >> for Forward Chaining rule systems .,2921,2 +2922,The [[ Rete and Treat algorithms ]] are considered the most efficient implementation techniques for << Forward Chaining rule systems >> .,2922,3 +2923,These [[ algorithms ]] support a << language of limited expressive power >> .,2923,3 +2924,In this paper we show how to support << full unification >> in these [[ algorithms ]] .,2924,3 +2925,We also show that : Supporting full unification is costly ; Full unification is not used frequently ; A combination of [[ compile time ]] and << run time >> checks can determine when full unification is not needed .,2925,0 +2926,We also show that : Supporting full unification is costly ; Full unification is not used frequently ; A combination of [[ compile time ]] and run time checks can determine when << full unification >> is not needed .,2926,6 +2927,We also show that : Supporting full unification is costly ; Full unification is not used frequently ; A combination of compile time and [[ run time ]] checks can determine when << full unification >> is not needed .,2927,6 +2928,A [[ method ]] for << error correction >> of ill-formed input is described that acquires dialogue patterns in typical usage and uses these patterns to predict new inputs .,2928,3 +2929,A method for << error correction >> of [[ ill-formed input ]] is described that acquires dialogue patterns in typical usage and uses these patterns to predict new inputs .,2929,3 +2930,A [[ dialogue acquisition and tracking algorithm ]] is presented along with a description of its implementation in a << voice interactive system >> .,2930,3 +2931,A series of tests are described that show the power of the << error correction methodology >> when [[ stereotypic dialogue ]] 
occurs .,2931,3 +2932,Traditional [[ linear Fukunaga-Koontz Transform -LRB- FKT -RRB- ]] -LSB- 1 -RSB- is a powerful << discriminative subspaces building approach >> .,2932,2 +2933,Previous work has successfully extended [[ FKT ]] to be able to deal with << small-sample-size >> .,2933,3 +2934,"In this paper , we extend traditional [[ linear FKT ]] to enable << it >> to work in multi-class problem and also in higher dimensional -LRB- kernel -RRB- subspaces and therefore provide enhanced discrimination ability .",2934,3 +2935,"In this paper , we extend traditional linear FKT to enable [[ it ]] to work in << multi-class problem >> and also in higher dimensional -LRB- kernel -RRB- subspaces and therefore provide enhanced discrimination ability .",2935,3 +2936,"In this paper , we extend traditional linear FKT to enable [[ it ]] to work in multi-class problem and also in << higher dimensional -LRB- kernel -RRB- subspaces >> and therefore provide enhanced discrimination ability .",2936,3 +2937,"In this paper , we extend traditional linear FKT to enable [[ it ]] to work in multi-class problem and also in higher dimensional -LRB- kernel -RRB- subspaces and therefore provide enhanced << discrimination ability >> .",2937,1 +2938,"In this paper , we extend traditional linear FKT to enable it to work in [[ multi-class problem ]] and also in << higher dimensional -LRB- kernel -RRB- subspaces >> and therefore provide enhanced discrimination ability .",2938,0 +2939,We verify the effectiveness of the proposed << Kernel Fukunaga-Koontz Transform >> by demonstrating its effectiveness in [[ face recognition applications ]] ; however the proposed non-linear generalization can be applied to any other domain specific problems .,2939,6 +2940,We verify the effectiveness of the proposed Kernel Fukunaga-Koontz Transform by demonstrating its effectiveness in face recognition applications ; however the proposed [[ non-linear generalization ]] can be applied to any other << domain specific problems >> .,2940,3 +2941,"While this [[ task ]] has much in common with << paraphrases acquisition >> which aims to discover semantic equivalence between verbs , the main challenge of entailment acquisition is to capture asymmetric , or directional , relations .",2941,5 +2942,"While this task has much in common with [[ paraphrases acquisition ]] which aims to discover << semantic equivalence >> between verbs , the main challenge of entailment acquisition is to capture asymmetric , or directional , relations .",2942,3 +2943,"While this task has much in common with paraphrases acquisition which aims to discover semantic equivalence between verbs , the main challenge of [[ entailment acquisition ]] is to capture << asymmetric , or directional , relations >> .",2943,3 +2944,"Motivated by the intuition that it often underlies the local structure of coherent text , we develop a [[ method ]] that discovers << verb entailment >> using evidence about discourse relations between clauses available in a parsed corpus .",2944,3 +2945,"Motivated by the intuition that it often underlies the local structure of coherent text , we develop a << method >> that discovers verb entailment using evidence about [[ discourse relations ]] between clauses available in a parsed corpus .",2945,3 +2946,"Motivated by the intuition that it often underlies the local structure of coherent text , we develop a method that discovers verb entailment using evidence about << discourse relations >> between clauses available in a [[ parsed corpus ]] .",2946,3 +2947,"In 
comparison with earlier work , the proposed [[ method ]] covers a much wider range of verb entailment types and learns the << mapping between verbs >> with highly varied argument structures .",2947,3 +2948,"In comparison with earlier work , the proposed method covers a much wider range of verb entailment types and learns the << mapping between verbs >> with [[ highly varied argument structures ]] .",2948,1 +2949,"In this paper , we cast the problem of << point cloud matching >> as a [[ shape matching problem ]] by transforming each of the given point clouds into a shape representation called the Schrödinger distance transform -LRB- SDT -RRB- representation .",2949,3 +2950,"In this paper , we cast the problem of point cloud matching as a shape matching problem by transforming each of the given << point clouds >> into a [[ shape representation ]] called the Schrödinger distance transform -LRB- SDT -RRB- representation .",2950,3 +2951,"In this paper , we cast the problem of point cloud matching as a shape matching problem by transforming each of the given point clouds into a << shape representation >> called the [[ Schrödinger distance transform -LRB- SDT -RRB- representation ]] .",2951,2 +2952,"The [[ SDT representation ]] is an << analytic expression >> and following the theoretical physics literature , can be normalized to have unit L2 norm-making it a square-root density , which is identified with a point on a unit Hilbert sphere , whose intrinsic geometry is fully known .",2952,2 +2953,"The SDT representation is an analytic expression and following the theoretical physics literature , can be normalized to have unit L2 norm-making << it >> a [[ square-root density ]] , which is identified with a point on a unit Hilbert sphere , whose intrinsic geometry is fully known .",2953,3 +2954,"The SDT representation is an analytic expression and following the theoretical physics literature , can be normalized to have unit L2 norm-making it a square-root density , which is identified with a point on a << unit Hilbert sphere >> , whose [[ intrinsic geometry ]] is fully known .",2954,1 +2955,"The Fisher-Rao metric , a [[ natural metric ]] for the << space of densities >> leads to analytic expressions for the geodesic distance between points on this sphere .",2955,3 +2956,"The Fisher-Rao metric , a natural metric for the space of densities leads to [[ analytic expressions ]] for the << geodesic distance >> between points on this sphere .",2956,3 +2957,"In this paper , we use the well known [[ Riemannian framework ]] never before used for << point cloud matching >> , and present a novel matching algorithm .",2957,3 +2958,We pose << point set matching >> under [[ rigid and non-rigid transformations ]] in this framework and solve for the transformations using standard nonlinear optimization techniques .,2958,3 +2959,We pose << point set matching >> under rigid and non-rigid transformations in this [[ framework ]] and solve for the transformations using standard nonlinear optimization techniques .,2959,3 +2960,We pose point set matching under rigid and non-rigid transformations in this framework and solve for the << transformations >> using standard [[ nonlinear optimization techniques ]] .,2960,3 +2961,The experiments show that our [[ algorithm ]] outperforms state-of-the-art << point set registration algorithms >> on many quantitative metrics .,2961,5 +2962,The experiments show that our << algorithm >> outperforms state-of-the-art point set registration algorithms on many [[ quantitative metrics ]] 
.,2962,6 +2963,The experiments show that our algorithm outperforms state-of-the-art << point set registration algorithms >> on many [[ quantitative metrics ]] .,2963,6 +2964,"Using [[ natural language processing ]] , we carried out a << trend survey on Japanese natural language processing studies >> that have been done over the last ten years .",2964,3 +2965,This paper is useful for both recognizing trends in Japanese NLP and constructing a method of supporting << trend surveys >> using [[ NLP ]] .,2965,3 +2966,"HFOs additionally serve as a prototypical example of challenges in the << analysis of discrete events >> in [[ high-temporal resolution , intracranial EEG data ]] .",2966,3 +2967,"However , previous << HFO analysis >> have assumed a [[ linear manifold ]] , global across time , space -LRB- i.e. recording electrode/channel -RRB- , and individual patients .",2967,3 +2968,We also estimate bounds on the Bayes classification error to quantify the distinction between two classes of << HFOs >> -LRB- [[ those ]] occurring during seizures and those occurring due to other processes -RRB- .,2968,2 +2969,We also estimate bounds on the Bayes classification error to quantify the distinction between two classes of HFOs -LRB- [[ those ]] occurring during seizures and << those >> occurring due to other processes -RRB- .,2969,0 +2970,We also estimate bounds on the Bayes classification error to quantify the distinction between two classes of << HFOs >> -LRB- those occurring during seizures and [[ those ]] occurring due to other processes -RRB- .,2970,2 +2971,"This analysis provides the foundation for future clinical use of HFO features and guides the analysis for other << discrete events >> , such as individual [[ action potentials ]] or multi-unit activity .",2971,2 +2972,"This analysis provides the foundation for future clinical use of HFO features and guides the analysis for other discrete events , such as individual [[ action potentials ]] or << multi-unit activity >> .",2972,0 +2973,"This analysis provides the foundation for future clinical use of HFO features and guides the analysis for other << discrete events >> , such as individual action potentials or [[ multi-unit activity ]] .",2973,2 +2974,"In this paper we present ONTOSCORE , a << system >> for scoring sets of concepts on the basis of an [[ ontology ]] .",2974,3 +2975,We apply our [[ system ]] to the task of scoring alternative << speech recognition hypotheses -LRB- SRH -RRB- >> in terms of their semantic coherence .,2975,3 +2976,We propose an efficient [[ dialogue management ]] for an << information navigation system >> based on a document knowledge base .,2976,3 +2977,We propose an efficient dialogue management for an << information navigation system >> based on a [[ document knowledge base ]] .,2977,3 +2978,It is expected that incorporation of appropriate [[ N-best candidates of ASR ]] and << contextual information >> will improve the system performance .,2978,0 +2979,It is expected that incorporation of appropriate [[ N-best candidates of ASR ]] and contextual information will improve the << system >> performance .,2979,3 +2980,It is expected that incorporation of appropriate N-best candidates of ASR and [[ contextual information ]] will improve the << system >> performance .,2980,3 +2981,The [[ system ]] also has several choices in << generating responses or confirmations >> .,2981,3 +2982,"In this paper , this selection is optimized as << minimization of Bayes risk >> based on [[ reward ]] for correct information presentation and 
penalty for redundant turns .",2982,3 +2983,"In this paper , this selection is optimized as minimization of Bayes risk based on [[ reward ]] for << correct information presentation >> and penalty for redundant turns .",2983,3 +2984,"In this paper , this selection is optimized as minimization of Bayes risk based on [[ reward ]] for correct information presentation and << penalty >> for redundant turns .",2984,0 +2985,"In this paper , this selection is optimized as << minimization of Bayes risk >> based on reward for correct information presentation and [[ penalty ]] for redundant turns .",2985,3 +2986,"In this paper , this selection is optimized as minimization of Bayes risk based on reward for correct information presentation and [[ penalty ]] for << redundant turns >> .",2986,3 +2987,"We have evaluated this << strategy >> with our [[ spoken dialogue system '' Dialogue Navigator for Kyoto City '' ]] , which also has question-answering capability .",2987,6 +2988,"We have evaluated this strategy with our << spoken dialogue system '' Dialogue Navigator for Kyoto City '' >> , which also has [[ question-answering capability ]] .",2988,1 +2989,Effectiveness of the proposed << framework >> was confirmed in the [[ success rate of retrieval ]] and the average number of turns for information access .,2989,6 +2990,Effectiveness of the proposed framework was confirmed in the [[ success rate of retrieval ]] and the << average number of turns >> for information access .,2990,0 +2991,Effectiveness of the proposed << framework >> was confirmed in the success rate of retrieval and the [[ average number of turns ]] for information access .,2991,6 +2992,Effectiveness of the proposed framework was confirmed in the success rate of retrieval and the [[ average number of turns ]] for << information access >> .,2992,3 +2993,"They are probability , [[ rank ]] , and << entropy >> .",2993,0 +2994,We evaluated the performance of the three [[ pruning criteria ]] in a real application of << Chinese text input >> in terms of character error rate -LRB- CER -RRB- .,2994,3 +2995,We evaluated the performance of the three << pruning criteria >> in a real application of Chinese text input in terms of [[ character error rate -LRB- CER -RRB- ]] .,2995,6 +2996,We also show that the high-performance of << rank >> lies in its strong correlation with [[ error rate ]] .,2996,6 +2997,We then present a novel [[ method ]] of combining two criteria in << model pruning >> .,2997,3 +2998,This paper proposes an [[ annotating scheme ]] that encodes << honorifics >> -LRB- respectful words -RRB- .,2998,3 +2999,This paper proposes an annotating scheme that encodes [[ honorifics ]] -LRB- << respectful words >> -RRB- .,2999,2 +3000,"[[ Honorifics ]] are used extensively in << Japanese >> , reflecting the social relationship -LRB- e.g. 
social ranks and age -RRB- of the referents .",3000,3 +3001,This [[ referential information ]] is vital for resolving << zero pronouns >> and improving machine translation outputs .,3001,3 +3002,This [[ referential information ]] is vital for resolving zero pronouns and improving << machine translation outputs >> .,3002,3 +3003,<< Visually-guided arm reaching movements >> are produced by [[ distributed neural networks ]] within parietal and frontal regions of the cerebral cortex .,3003,3 +3004,Experimental data indicate that -LRB- I -RRB- single neurons in these regions are broadly tuned to parameters of movement ; -LRB- 2 -RRB- appropriate commands are elaborated by populations of neurons ; -LRB- 3 -RRB- the << coordinated action of neu-rons >> can be visualized using a [[ neuronal population vector -LRB- NPV -RRB- ]] .,3004,3 +3005,"We designed a [[ model ]] of the << cortical motor command >> to investigate the relation between the desired direction of the movement , the actual direction of movement and the direction of the NPV in motor cortex .",3005,3 +3006,"We designed a model of the cortical motor command to investigate the relation between the desired direction of the movement , the actual direction of movement and the direction of the [[ NPV ]] in << motor cortex >> .",3006,3 +3007,The model is a [[ two-layer self-organizing neural network ]] which combines broadly-tuned -LRB- muscular -RRB- proprioceptive and -LRB- cartesian -RRB- visual information to calculate << -LRB- angular -RRB- motor commands >> for the initial part of the movement of a two-link arm .,3007,3 +3008,The model is a << two-layer self-organizing neural network >> which combines [[ broadly-tuned -LRB- muscular -RRB- proprioceptive ]] and -LRB- cartesian -RRB- visual information to calculate -LRB- angular -RRB- motor commands for the initial part of the movement of a two-link arm .,3008,3 +3009,The model is a two-layer self-organizing neural network which combines [[ broadly-tuned -LRB- muscular -RRB- proprioceptive ]] and << -LRB- cartesian -RRB- visual information >> to calculate -LRB- angular -RRB- motor commands for the initial part of the movement of a two-link arm .,3009,0 +3010,The model is a << two-layer self-organizing neural network >> which combines broadly-tuned -LRB- muscular -RRB- proprioceptive and [[ -LRB- cartesian -RRB- visual information ]] to calculate -LRB- angular -RRB- motor commands for the initial part of the movement of a two-link arm .,3010,3 +3011,These results suggest the NPV does not give a faithful << image of cortical processing >> during [[ arm reaching movements ]] .,3011,1 +3012,It is well-known that diversity among [[ base classifiers ]] is crucial for constructing a strong << ensemble >> .,3012,3 +3013,"In this paper , we propose an alternative way for << ensemble construction >> by [[ resampling pairwise constraints ]] that specify whether a pair of instances belongs to the same class or not .",3013,3 +3014,Using [[ pairwise constraints ]] for << ensemble construction >> is challenging because it remains unknown how to influence the base classifiers with the sampled pairwise constraints .,3014,3 +3015,"First , we transform the original instances into a new << data representation >> using [[ projections ]] learnt from pairwise constraints .",3015,3 +3016,"First , we transform the original instances into a new data representation using << projections >> learnt from [[ pairwise constraints ]] .",3016,3 +3017,"Then , we build the << base clas-sifiers >> with the new [[ data 
representation ]] .",3017,3 +3018,"We propose two methods for << resampling pairwise constraints >> following the standard [[ Bagging and Boosting algorithms ]] , respectively .",3018,3 +3019,A new [[ algorithm ]] for solving the three << dimensional container packing problem >> is proposed in this paper .,3019,3 +3020,This new [[ algorithm ]] deviates from the traditional << approach of wall building and layering >> .,3020,5 +3021,We tested our << method >> using all 760 test cases from the [[ OR-Library ]] .,3021,6 +3022,Experimental results indicate that the new << algorithm >> is able to achieve an [[ average packing utilization ]] of more than 87 % .,3022,6 +3023,"Current [[ approaches ]] to << object category recognition >> require datasets of training images to be manually prepared , with varying degrees of supervision .",3023,3 +3024,"Current << approaches >> to object category recognition require [[ datasets ]] of training images to be manually prepared , with varying degrees of supervision .",3024,3 +3025,"We present an [[ approach ]] that can learn an << object category >> from just its name , by utilizing the raw output of image search engines available on the Internet .",3025,3 +3026,"We develop a new model , << TSI-pLSA >> , which extends [[ pLSA ]] -LRB- as applied to visual words -RRB- to include spatial information in a translation and scale invariant manner .",3026,3 +3027,"We develop a new model , TSI-pLSA , which extends [[ pLSA ]] -LRB- as applied to << visual words >> -RRB- to include spatial information in a translation and scale invariant manner .",3027,3 +3028,"We develop a new model , << TSI-pLSA >> , which extends pLSA -LRB- as applied to visual words -RRB- to include [[ spatial information ]] in a translation and scale invariant manner .",3028,4 +3029,Our [[ approach ]] can handle the high << intra-class variability >> and large proportion of unrelated images returned by search engines .,3029,3 +3030,Our [[ approach ]] can handle the high intra-class variability and large proportion of << unrelated images >> returned by search engines .,3030,3 +3031,Our approach can handle the high [[ intra-class variability ]] and large proportion of << unrelated images >> returned by search engines .,3031,0 +3032,Our approach can handle the high intra-class variability and large proportion of << unrelated images >> returned by [[ search engines ]] .,3032,3 +3033,"We evaluate the << models >> on standard [[ test sets ]] , showing performance competitive with existing methods trained on hand prepared datasets .",3033,6 +3034,"We evaluate the models on standard [[ test sets ]] , showing performance competitive with existing << methods >> trained on hand prepared datasets .",3034,6 +3035,"We evaluate the << models >> on standard test sets , showing performance competitive with existing [[ methods ]] trained on hand prepared datasets .",3035,5 +3036,"We evaluate the models on standard test sets , showing performance competitive with existing << methods >> trained on [[ hand prepared datasets ]] .",3036,3 +3037,"The paper provides an overview of the research conducted at LIMSI in the field of [[ speech processing ]] , but also in the related areas of << Human-Machine Communication >> , including Natural Language Processing , Non Verbal and Multimodal Communication .",3037,0 +3038,"The paper provides an overview of the research conducted at LIMSI in the field of speech processing , but also in the related areas of << Human-Machine Communication >> , including [[ Natural Language 
Processing ]] , Non Verbal and Multimodal Communication .",3038,2 +3039,"The paper provides an overview of the research conducted at LIMSI in the field of speech processing , but also in the related areas of Human-Machine Communication , including [[ Natural Language Processing ]] , << Non Verbal and Multimodal Communication >> .",3039,0 +3040,"The paper provides an overview of the research conducted at LIMSI in the field of speech processing , but also in the related areas of << Human-Machine Communication >> , including Natural Language Processing , [[ Non Verbal and Multimodal Communication ]] .",3040,2 +3041,We have calculated << analytical expressions >> for how the bias and variance of the estimators provided by various temporal difference value estimation algorithms change with offline updates over trials in absorbing Markov chains using [[ lookup table representations ]] .,3041,3 +3042,"In this paper , we describe the [[ pronominal anaphora resolution module ]] of << Lucy >> , a portable English understanding system .",3042,4 +3043,"In this paper , we describe the pronominal anaphora resolution module of [[ Lucy ]] , a portable << English understanding system >> .",3043,2 +3044,"In this paper , we reported experiments of << unsupervised automatic acquisition of Italian and English verb subcategorization frames -LRB- SCFs -RRB- >> from [[ general and domain corpora ]] .",3044,3 +3045,The proposed << technique >> operates on [[ syntactically shallow-parsed corpora ]] on the basis of a limited number of search heuristics not relying on any previous lexico-syntactic knowledge about SCFs .,3045,3 +3046,The proposed << technique >> operates on syntactically shallow-parsed corpora on the basis of a limited number of [[ search heuristics ]] not relying on any previous lexico-syntactic knowledge about SCFs .,3046,3 +3047,The proposed technique operates on syntactically shallow-parsed corpora on the basis of a limited number of search heuristics not relying on any previous << lexico-syntactic knowledge >> about [[ SCFs ]] .,3047,1 +3048,[[ Graph-cuts optimization ]] is prevalent in << vision and graphics problems >> .,3048,3 +3049,It is thus of great practical importance to parallelize the << graph-cuts optimization >> using to-day 's ubiquitous [[ multi-core machines ]] .,3049,3 +3050,"However , the current best << serial algorithm >> by Boykov and Kolmogorov -LSB- 4 -RSB- -LRB- called the [[ BK algorithm ]] -RRB- still has the superior empirical performance .",3050,2 +3051,"In this paper , we propose a novel [[ adaptive bottom-up approach ]] to parallelize the << BK algorithm >> .",3051,3 +3052,Extensive experiments in common [[ applications ]] such as 2D/3D image segmentations and 3D surface fitting demonstrate the effectiveness of our << approach >> .,3052,6 +3053,Extensive experiments in common << applications >> such as [[ 2D/3D image segmentations ]] and 3D surface fitting demonstrate the effectiveness of our approach .,3053,2 +3054,Extensive experiments in common applications such as [[ 2D/3D image segmentations ]] and << 3D surface fitting >> demonstrate the effectiveness of our approach .,3054,0 +3055,Extensive experiments in common << applications >> such as 2D/3D image segmentations and [[ 3D surface fitting ]] demonstrate the effectiveness of our approach .,3055,2 +3056,We study the question of how to make loss-aware predictions in image segmentation settings where the << evaluation function >> is the [[ Intersection-over-Union -LRB- IoU -RRB- measure ]] that is used widely in 
evaluating image segmentation systems .,3056,2 +3057,We study the question of how to make loss-aware predictions in image segmentation settings where the evaluation function is the [[ Intersection-over-Union -LRB- IoU -RRB- measure ]] that is used widely in evaluating << image segmentation systems >> .,3057,6 +3058,"Currently , there are two << dominant approaches >> : the [[ first ]] approximates the Expected-IoU -LRB- EIoU -RRB- score as Expected-Intersection-over-Expected-Union -LRB- EIoEU -RRB- ; and the second approach is to compute exact EIoU but only over a small set of high-quality candidate solutions .",3058,2 +3059,"Currently , there are two << dominant approaches >> : the first approximates the Expected-IoU -LRB- EIoU -RRB- score as Expected-Intersection-over-Expected-Union -LRB- EIoEU -RRB- ; and the [[ second approach ]] is to compute exact EIoU but only over a small set of high-quality candidate solutions .",3059,2 +3060,Our new << methods >> use the [[ EIoEU approximation ]] paired with high quality candidate solutions .,3060,3 +3061,Experimentally we show that our new << approaches >> lead to improved performance on both [[ image segmentation tasks ]] .,3061,6 +3062,"Later , however , Breiman cast serious doubt on this explanation by introducing a << boosting algorithm >> , [[ arc-gv ]] , that can generate a higher margins distribution than AdaBoost and yet performs worse .",3062,2 +3063,"Later , however , Breiman cast serious doubt on this explanation by introducing a boosting algorithm , [[ arc-gv ]] , that can generate a higher << margins distribution >> than AdaBoost and yet performs worse .",3063,3 +3064,"Later , however , Breiman cast serious doubt on this explanation by introducing a boosting algorithm , [[ arc-gv ]] , that can generate a higher margins distribution than << AdaBoost >> and yet performs worse .",3064,5 +3065,"Although we can reproduce his main finding , we find that the poorer performance of arc-gv can be explained by the increased [[ complexity ]] of the << base classifiers >> it uses , an explanation supported by our experiments and entirely consistent with the margins theory .",3065,6 +3066,"Although we can reproduce his main finding , we find that the poorer performance of << arc-gv >> can be explained by the increased complexity of the [[ base classifiers ]] it uses , an explanation supported by our experiments and entirely consistent with the margins theory .",3066,2 +3067,"The [[ transfer phase ]] in << machine translation -LRB- MT -RRB- systems >> has been considered to be more complicated than analysis and generation , since it is inherently a conglomeration of individual lexical rules .",3067,4 +3068,"The [[ transfer phase ]] in machine translation -LRB- MT -RRB- systems has been considered to be more complicated than << analysis >> and generation , since it is inherently a conglomeration of individual lexical rules .",3068,5 +3069,"The [[ transfer phase ]] in machine translation -LRB- MT -RRB- systems has been considered to be more complicated than analysis and << generation >> , since it is inherently a conglomeration of individual lexical rules .",3069,5 +3070,"The transfer phase in machine translation -LRB- MT -RRB- systems has been considered to be more complicated than [[ analysis ]] and << generation >> , since it is inherently a conglomeration of individual lexical rules .",3070,0 +3071,"Currently some attempts are being made to use [[ case-based reasoning ]] in << machine translation >> , that is , to make decisions on the basis of 
translation examples at appropriate pints in MT .",3071,3 +3072,"This paper proposes a new type of << transfer system >> , called a [[ Similarity-driven Transfer System -LRB- SimTran -RRB- ]] , for use in such case-based MT -LRB- CBMT -RRB- .",3072,2 +3073,"This paper proposes a new type of transfer system , called a [[ Similarity-driven Transfer System -LRB- SimTran -RRB- ]] , for use in such << case-based MT -LRB- CBMT -RRB- >> .",3073,3 +3074,This paper addresses the problem of [[ optimal alignment of non-rigid surfaces ]] from multi-view video observations to obtain a << temporally consistent representation >> .,3074,3 +3075,This paper addresses the problem of << optimal alignment of non-rigid surfaces >> from [[ multi-view video observations ]] to obtain a temporally consistent representation .,3075,3 +3076,Conventional << non-rigid surface tracking >> performs [[ frame-to-frame alignment ]] which is subject to the accumulation of errors resulting in a drift over time .,3076,3 +3077,"Recently , << non-sequential tracking approaches >> have been introduced which reorder the input data based on a [[ dissimilarity measure ]] .",3077,3 +3078,They demonstrate a reduced drift and increased [[ robustness ]] to large << non-rigid deformations >> .,3078,1 +3079,"[[ Optimisation of the tree ]] for << non-sequential tracking >> , which minimises the errors in temporal consistency due to both the drift and the jumps , is proposed .",3079,3 +3080,"<< Optimisation of the tree >> for non-sequential tracking , which minimises the errors in [[ temporal consistency ]] due to both the drift and the jumps , is proposed .",3080,6 +3081,A novel [[ cluster tree ]] enforces << sequential tracking in local segments >> of the sequence while allowing global non-sequential traversal among these segments .,3081,3 +3082,A novel [[ cluster tree ]] enforces sequential tracking in local segments of the sequence while allowing << global non-sequential traversal >> among these segments .,3082,3 +3083,"Comprehensive evaluation is performed on a variety of challenging << non-rigid surfaces >> including [[ face ]] , cloth and people .",3083,2 +3084,"Comprehensive evaluation is performed on a variety of challenging non-rigid surfaces including [[ face ]] , << cloth >> and people .",3084,0 +3085,"Comprehensive evaluation is performed on a variety of challenging << non-rigid surfaces >> including face , [[ cloth ]] and people .",3085,2 +3086,"Comprehensive evaluation is performed on a variety of challenging non-rigid surfaces including face , [[ cloth ]] and << people >> .",3086,0 +3087,"Comprehensive evaluation is performed on a variety of challenging << non-rigid surfaces >> including face , cloth and [[ people ]] .",3087,2 +3088,It demonstrates that the proposed [[ cluster tree ]] achieves better temporal consistency than the previous << sequential and non-sequential tracking approaches >> .,3088,5 +3089,It demonstrates that the proposed << cluster tree >> achieves better [[ temporal consistency ]] than the previous sequential and non-sequential tracking approaches .,3089,6 +3090,Quantitative analysis on a created [[ synthetic facial performance ]] also shows an improvement by the << cluster tree >> .,3090,6 +3091,The << translation of English text into American Sign Language -LRB- ASL -RRB- animation >> tests the limits of traditional [[ MT architectural designs ]] .,3091,3 +3092,A new [[ semantic representation ]] is proposed that uses virtual reality 3D scene modeling software to produce << spatially complex ASL 
phenomena >> called '' classifier predicates . '',3092,3
+3093,A new << semantic representation >> is proposed that uses [[ virtual reality 3D scene modeling software ]] to produce spatially complex ASL phenomena called '' classifier predicates . '',3093,3
+3094,A new semantic representation is proposed that uses virtual reality 3D scene modeling software to produce << spatially complex ASL phenomena >> called '' [[ classifier predicates ]] . '',3094,2
+3095,The model acts as an interlingua within a new multi-pathway MT architecture design that also incorporates [[ transfer ]] and << direct approaches >> into a single system .,3095,0
+3096,The model acts as an interlingua within a new multi-pathway MT architecture design that also incorporates [[ transfer ]] and direct approaches into a single << system >> .,3096,4
+3097,The model acts as an interlingua within a new multi-pathway MT architecture design that also incorporates transfer and [[ direct approaches ]] into a single << system >> .,3097,4
+3098,"An << extension >> to the [[ GPSG grammatical formalism ]] is proposed , allowing non-terminals to consist of finite sequences of category labels , and allowing schematic variables to range over such sequences .",3098,3
+3099,"The [[ extension ]] is shown to be sufficient to provide a strongly adequate << grammar >> for crossed serial dependencies , as found in e.g. Dutch subordinate clauses .",3099,3
+3100,"The extension is shown to be sufficient to provide a strongly adequate [[ grammar ]] for << crossed serial dependencies >> , as found in e.g. Dutch subordinate clauses .",3100,3
+3101,The << extension >> is shown to be parseable by a simple [[ extension ]] to an existing parsing method for GPSG .,3101,3
+3102,The extension is shown to be parseable by a simple << extension >> to an existing [[ parsing method ]] for GPSG .,3102,3
+3103,The extension is shown to be parseable by a simple extension to an existing [[ parsing method ]] for << GPSG >> .,3103,3
+3104,This paper presents an [[ approach ]] to << localizing functional objects >> in surveillance videos without domain knowledge about semantic object classes that may appear in the scene .,3104,3
+3105,This paper presents an approach to << localizing functional objects >> in [[ surveillance videos ]] without domain knowledge about semantic object classes that may appear in the scene .,3105,3
+3106,This paper presents an approach to localizing functional objects in surveillance videos without << domain knowledge >> about [[ semantic object classes ]] that may appear in the scene .,3106,1
+3107,"A [[ Bayesian framework ]] is used to probabilistically model : << people 's trajectories and intents >> , constraint map of the scene , and locations of functional objects .",3107,3
+3108,"A [[ Bayesian framework ]] is used to probabilistically model : people 's trajectories and intents , << constraint map of the scene >> , and locations of functional objects .",3108,3
+3109,"A [[ Bayesian framework ]] is used to probabilistically model : people 's trajectories and intents , constraint map of the scene , and << locations of functional objects >> .",3109,3
+3110,"A Bayesian framework is used to probabilistically model : [[ people 's trajectories and intents ]] , << constraint map of the scene >> , and locations of functional objects .",3110,0
+3111,"A Bayesian framework is used to probabilistically model : people 's trajectories and intents , [[ constraint map of the scene ]] , and << locations of functional objects >> .",3111,0
+3112,A [[ data-driven Markov Chain Monte Carlo -LRB- MCMC -RRB- process ]] is used for << inference >> .,3112,3
+3113,Our evaluation on [[ videos of public squares and courtyards ]] demonstrates our effectiveness in << localizing functional objects >> and predicting people 's trajectories in unobserved parts of the video footage .,3113,6
+3114,Our evaluation on [[ videos of public squares and courtyards ]] demonstrates our effectiveness in localizing functional objects and << predicting people 's trajectories >> in unobserved parts of the video footage .,3114,6
+3115,Our evaluation on videos of public squares and courtyards demonstrates our effectiveness in [[ localizing functional objects ]] and << predicting people 's trajectories >> in unobserved parts of the video footage .,3115,0
+3116,"We propose a [[ process model ]] for << hierarchical perceptual sound organization >> , which recognizes perceptual sounds included in incoming sound signals .",3116,3
+3117,"We propose a process model for hierarchical perceptual sound organization , which recognizes [[ perceptual sounds ]] included in << incoming sound signals >> .",3117,4
+3118,We consider << perceptual sound organization >> as a [[ scene analysis problem ]] in the auditory domain .,3118,3
+3119,We consider perceptual sound organization as a << scene analysis problem >> in the [[ auditory domain ]] .,3119,1
+3120,Our << model >> consists of multiple [[ processing modules ]] and a hypothesis network for quantitative integration of multiple sources of information .,3120,4
+3121,Our model consists of multiple [[ processing modules ]] and a << hypothesis network >> for quantitative integration of multiple sources of information .,3121,0
+3122,Our << model >> consists of multiple processing modules and a [[ hypothesis network ]] for quantitative integration of multiple sources of information .,3122,4
+3123,"On the << hypothesis network >> , individual information is integrated and an optimal [[ internal model ]] of perceptual sounds is automatically constructed .",3123,4
+3124,"On the hypothesis network , individual information is integrated and an optimal [[ internal model ]] of << perceptual sounds >> is automatically constructed .",3124,3
+3125,"Based on the model , a [[ music scene analysis system ]] has been developed for << acoustic signals of ensemble music >> , which recognizes rhythm , chords , and source-separated musical notes .",3125,3
+3126,"Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes << rhythm >> , chords , and source-separated musical notes .",3126,3
+3127,"Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes rhythm , << chords >> , and source-separated musical notes .",3127,3
+3128,"Based on the model , a [[ music scene analysis system ]] has been developed for acoustic signals of ensemble music , which recognizes rhythm , chords , and << source-separated musical notes >> .",3128,3
+3129,"Based on the model , a music scene analysis system has been developed for acoustic signals of ensemble music , which recognizes [[ rhythm ]] , << chords >> , and source-separated musical notes .",3129,0
+3130,"Based on the model , a music scene analysis system has been developed for acoustic signals of ensemble music , which recognizes rhythm , [[ chords ]] , and << source-separated musical notes >> .",3130,0
+3131,"Experimental results show that our << method >> has permitted autonomous , stable and effective [[ information integration ]] to construct the internal model of hierarchical perceptual sounds .",3131,1
+3132,"Experimental results show that our method has permitted autonomous , stable and effective [[ information integration ]] to construct the << internal model >> of hierarchical perceptual sounds .",3132,3
+3133,"Experimental results show that our method has permitted autonomous , stable and effective information integration to construct the [[ internal model ]] of << hierarchical perceptual sounds >> .",3133,3
+3134,We directly investigate a subject of much recent debate : do [[ word sense disambigation models ]] help << statistical machine translation quality >> ?,3134,3
+3135,"Using a state-of-the-art [[ Chinese word sense disambiguation model ]] to choose << translation candidates >> for a typical IBM statistical MT system , we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone .",3135,3
+3136,"Using a state-of-the-art Chinese word sense disambiguation model to choose [[ translation candidates ]] for a typical << IBM statistical MT system >> , we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone .",3136,3
+3137,"Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that [[ word sense disambiguation ]] does not yield significantly better translation quality than the << statistical machine translation system >> alone .",3137,5
+3138,"Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that << word sense disambiguation >> does not yield significantly better [[ translation quality ]] than the statistical machine translation system alone .",3138,6
+3139,"Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that word sense disambiguation does not yield significantly better [[ translation quality ]] than the << statistical machine translation system >> alone .",3139,6
+3140,"[[ Image sequence processing techniques ]] are used to study << exchange , growth , and transport processes >> and to tackle key questions in environmental physics and biology .",3140,3
+3141,"Image sequence processing techniques are used to study exchange , growth , and transport processes and to tackle key questions in [[ environmental physics ]] and << biology >> .",3141,0
+3142,These applications require high [[ accuracy ]] for the << estimation of the motion field >> since the most interesting parameters of the dynamical processes studied are contained in first-order derivatives of the motion field or in dynamical changes of the moving objects .,3142,6
+3143,These << applications >> require high accuracy for the [[ estimation of the motion field ]] since the most interesting parameters of the dynamical processes studied are contained in first-order derivatives of the motion field or in dynamical changes of the moving objects .,3143,3
+3144,These applications require high accuracy for the estimation of the motion field since the most interesting parameters of the dynamical processes studied are contained in [[ first-order derivatives of the motion field ]] or in << dynamical changes of the moving objects >> .,3144,0
+3145,A << tensor method >> tuned with carefully optimized [[ derivative filters ]] yields reliable and dense displacement vector fields -LRB- DVF -RRB- with an accuracy of up to a few hundredth pixels/frame for real-world images .,3145,3
+3146,A tensor method tuned with carefully optimized derivative filters yields reliable and dense << displacement vector fields -LRB- DVF -RRB- >> with an accuracy of up to a few hundredth [[ pixels/frame ]] for real-world images .,3146,6
+3147,A tensor method tuned with carefully optimized derivative filters yields reliable and dense displacement vector fields -LRB- DVF -RRB- with an accuracy of up to a few hundredth << pixels/frame >> for [[ real-world images ]] .,3147,3
+3148,The [[ accuracy ]] of the << tensor method >> is verified with computer-generated sequences and a calibrated image sequence .,3148,6
+3149,The accuracy of the << tensor method >> is verified with [[ computer-generated sequences ]] and a calibrated image sequence .,3149,6
+3150,The accuracy of the tensor method is verified with [[ computer-generated sequences ]] and a << calibrated image sequence >> .,3150,0
+3151,The accuracy of the << tensor method >> is verified with computer-generated sequences and a [[ calibrated image sequence ]] .,3151,6
+3152,"With the improvements in [[ accuracy ]] the << motion estimation >> is now rather limited by imperfections in the CCD sensors , especially the spatial nonuni-formity in the responsivity .",3152,6
+3153,"With the improvements in accuracy the << motion estimation >> is now rather limited by imperfections in the [[ CCD sensors ]] , especially the spatial nonuni-formity in the responsivity .",3153,3
+3154,"With the improvements in accuracy the motion estimation is now rather limited by imperfections in the CCD sensors , especially the [[ spatial nonuni-formity ]] in the << responsivity >> .",3154,1
+3155,"With the improvements in accuracy the motion estimation is now rather limited by imperfections in the << CCD sensors >> , especially the spatial nonuni-formity in the [[ responsivity ]] .",3155,1
+3156,"The application of the [[ techniques ]] to the << analysis of plant growth >> , to ocean surface microturbulence in IR image sequences , and to sediment transport is demonstrated .",3156,3
+3157,"The application of the [[ techniques ]] to the analysis of plant growth , to << ocean surface microturbulence in IR image sequences >> , and to sediment transport is demonstrated .",3157,3
+3158,"The application of the [[ techniques ]] to the analysis of plant growth , to ocean surface microturbulence in IR image sequences , and to << sediment transport >> is demonstrated .",3158,3
+3159,"The application of the techniques to the [[ analysis of plant growth ]] , to << ocean surface microturbulence in IR image sequences >> , and to sediment transport is demonstrated .",3159,0
+3160,"The application of the techniques to the analysis of plant growth , to [[ ocean surface microturbulence in IR image sequences ]] , and to << sediment transport >> is demonstrated .",3160,0
+3161,We present a [[ Czech-English statistical machine translation system ]] which performs << tree-to-tree translation of dependency structures >> .,3161,3
+3162,The only << bilingual resource >> required is a [[ sentence-aligned parallel corpus ]] .,3162,3
+3163,We also refer to an evaluation method and plan to compare our [[ system ]] 's output with a << benchmark system >> .,3163,5
+3164,This paper describes the understanding process of the << spatial descriptions >> in [[ Japanese ]] .,3164,1
+3165,"To reconstruct the model , the authors extract the qualitative spatial constraints from the text , and represent them as the << numerical constraints >> on the [[ spatial attributes of the entities ]] .",3165,3
+3166,Such [[ context information ]] is therefore important to characterize the << intrinsic representation of a video frame >> .,3166,3
+3167,"In this paper , we present a novel [[ approach ]] to learn the << deep video representation >> by exploring both local and holistic contexts .",3167,3
+3168,"In this paper , we present a novel << approach >> to learn the deep video representation by exploring both [[ local and holistic contexts ]] .",3168,3
+3169,"Specifically , we propose a [[ triplet sampling mechanism ]] to encode the << local temporal relationship of adjacent frames >> based on their deep representations .",3169,3
+3170,"Specifically , we propose a << triplet sampling mechanism >> to encode the local temporal relationship of adjacent frames based on their [[ deep representations ]] .",3170,3
+3171,"In addition , we incorporate the [[ graph structure of the video ]] , as a << priori >> , to holistically preserve the inherent correlations among video frames .",3171,3
+3172,Our << approach >> is fully unsupervised and trained in an [[ end-to-end deep convolutional neu-ral network architecture ]] .,3172,3
+3173,"By extensive experiments , we show that our [[ learned representation ]] can significantly boost several video recognition tasks -LRB- retrieval , classification , and highlight detection -RRB- over traditional << video representations >> .",3173,5
+3174,"By extensive experiments , we show that our << learned representation >> can significantly boost several [[ video recognition tasks ]] -LRB- retrieval , classification , and highlight detection -RRB- over traditional video representations .",3174,6
+3175,"By extensive experiments , we show that our learned representation can significantly boost several [[ video recognition tasks ]] -LRB- retrieval , classification , and highlight detection -RRB- over traditional << video representations >> .",3175,6
+3176,"By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- [[ retrieval ]] , classification , and highlight detection -RRB- over traditional video representations .",3176,2
+3177,"By extensive experiments , we show that our learned representation can significantly boost several video recognition tasks -LRB- [[ retrieval ]] , << classification >> , and highlight detection -RRB- over traditional video representations .",3177,0
+3178,"By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- retrieval , [[ classification ]] , and highlight detection -RRB- over traditional video representations .",3178,2
+3179,"By extensive experiments , we show that our learned representation can significantly boost several video recognition tasks -LRB- retrieval , [[ classification ]] , and << highlight detection >> -RRB- over traditional video representations .",3179,0
+3180,"By extensive experiments , we show that our learned representation can significantly boost several << video recognition tasks >> -LRB- retrieval , classification , and [[ highlight detection ]] -RRB- over traditional video representations .",3180,2
+3181,"For << mobile speech application >> , [[ speaker DOA estimation accuracy ]] , interference robustness and compact physical size are three key factors .",3181,1
+3182,"For mobile speech application , [[ speaker DOA estimation accuracy ]] , << interference robustness >> and compact physical size are three key factors .",3182,0
+3183,"For << mobile speech application >> , speaker DOA estimation accuracy , [[ interference robustness ]] and compact physical size are three key factors .",3183,1
+3184,"For mobile speech application , speaker DOA estimation accuracy , [[ interference robustness ]] and << compact physical size >> are three key factors .",3184,0
+3185,"For << mobile speech application >> , speaker DOA estimation accuracy , interference robustness and [[ compact physical size ]] are three key factors .",3185,1
+3186,"[[ It ]] is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI .",3186,3
+3187,"It is achieved by deriving the [[ inter-sensor data ratio model ]] of an << AVS >> in bispectrum domain -LRB- BISDR -RRB- and exploring the favorable properties of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI .",3187,3
+3188,"It is achieved by deriving the inter-sensor data ratio model of an << AVS >> in [[ bispectrum domain -LRB- BISDR -RRB- ]] and exploring the favorable properties of bispectrum , such as zero value of Gaussian process and different distribution of speech and NSI .",3188,3
+3189,"It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as [[ zero value of Gaussian process ]] and different distribution of speech and NSI .",3189,2
+3190,"It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the favorable properties of bispectrum , such as [[ zero value of Gaussian process ]] and different << distribution of speech and NSI >> .",3190,0
+3191,"It is achieved by deriving the inter-sensor data ratio model of an AVS in bispectrum domain -LRB- BISDR -RRB- and exploring the << favorable properties >> of bispectrum , such as zero value of Gaussian process and different [[ distribution of speech and NSI ]] .",3191,2
+3192,"Specifically , a reliable [[ bispectrum mask ]] is generated to guarantee that the << speaker DOA cues >> , derived from BISDR , are robust to NSI in terms of speech sparsity and large bispectrum amplitude of the captured signals .",3192,3
+3193,"Specifically , a reliable bispectrum mask is generated to guarantee that the << speaker DOA cues >> , derived from [[ BISDR ]] , are robust to NSI in terms of speech sparsity and large bispectrum amplitude of the captured signals .",3193,3
+3194,Intensive experiments demonstrate an improved performance of our proposed [[ algorithm ]] under various << NSI conditions >> even when SIR is smaller than 0dB .,3194,3
+3195,"In this paper , we want to show how the [[ morphological component ]] of an existing << NLP-system for Dutch -LRB- Dutch Medical Language Processor - DMLP -RRB- >> has been extended in order to produce output that is compatible with the language independent modules of the LSP-MLP system -LRB- Linguistic String Project - Medical Language Processor -RRB- of the New York University .",3195,4
+3196,"In this paper , we want to show how the morphological component of an existing NLP-system for Dutch -LRB- Dutch Medical Language Processor - DMLP -RRB- has been extended in order to produce output that is compatible with the [[ language independent modules ]] of the << LSP-MLP system -LRB- Linguistic String Project - Medical Language Processor -RRB- >> of the New York University .",3196,4
+3197,"The << former >> can take advantage of the language independent developments of the [[ latter ]] , while focusing on idiosyncrasies for Dutch .",3197,3
+3198,"The former can take advantage of the language independent developments of the latter , while focusing on << idiosyncrasies >> for [[ Dutch ]] .",3198,3
+3199,"This general strategy will be illustrated by a practical application , namely the highlighting of [[ relevant information ]] in a << patient discharge summary -LRB- PDS -RRB- >> by means of modern HyperText Mark-Up Language -LRB- HTML -RRB- technology .",3199,4
+3200,"This general strategy will be illustrated by a practical application , namely the << highlighting of relevant information >> in a patient discharge summary -LRB- PDS -RRB- by means of modern [[ HyperText Mark-Up Language -LRB- HTML -RRB- technology ]] .",3200,3
+3201,Such an [[ application ]] can be of use for << medical administrative purposes >> in a hospital environment .,3201,3
+3202,"<< CriterionSM Online Essay Evaluation Service >> includes a capability that labels sentences in student writing with [[ essay-based discourse elements ]] -LRB- e.g. , thesis statements -RRB- .",3202,4
+3203,"CriterionSM Online Essay Evaluation Service includes a capability that labels sentences in student writing with << essay-based discourse elements >> -LRB- e.g. , [[ thesis statements ]] -RRB- .",3203,2
+3204,"We describe a new [[ system ]] that enhances << Criterion 's capability >> , by evaluating multiple aspects of coherence in essays .",3204,3
+3205,"We describe a new << system >> that enhances Criterion 's capability , by evaluating multiple aspects of [[ coherence in essays ]] .",3205,6
+3206,This [[ system ]] identifies << features >> of sentences based on semantic similarity measures and discourse structure .,3206,3
+3207,This system identifies << features >> of sentences based on [[ semantic similarity measures ]] and discourse structure .,3207,3
+3208,This system identifies << features >> of sentences based on semantic similarity measures and [[ discourse structure ]] .,3208,3
+3209,This system identifies features of sentences based on << semantic similarity measures >> and [[ discourse structure ]] .,3209,0
+3210,A << support vector machine >> uses these [[ features ]] to capture breakdowns in coherence due to relatedness to the essay question and relatedness between discourse elements .,3210,3
+3211,A support vector machine uses these [[ features ]] to capture << breakdowns in coherence >> due to relatedness to the essay question and relatedness between discourse elements .,3211,3
+3212,<< Intra-sentential quality >> is evaluated with [[ rule-based heuristics ]] .,3212,6
+3213,Results indicate that the [[ system ]] yields higher performance than a << baseline >> on all three aspects .,3213,5
+3214,This paper presents an [[ algorithm ]] for << labeling curvilinear structure >> at multiple scales in line drawings and edge images Symbolic CURVE-ELEMENT tokens residing in a spatially-indexed and scale-indexed data structure denote circular arcs fit to image data .,3214,3
+3215,This paper presents an algorithm for << labeling curvilinear structure >> at multiple scales in [[ line drawings ]] and edge images Symbolic CURVE-ELEMENT tokens residing in a spatially-indexed and scale-indexed data structure denote circular arcs fit to image data .,3215,1
+3216,This paper presents an algorithm for labeling curvilinear structure at multiple scales in [[ line drawings ]] and << edge images >> Symbolic CURVE-ELEMENT tokens residing in a spatially-indexed and scale-indexed data structure denote circular arcs fit to image data .,3216,0
+3217,This paper presents an algorithm for << labeling curvilinear structure >> at multiple scales in line drawings and [[ edge images ]] Symbolic CURVE-ELEMENT tokens residing in a spatially-indexed and scale-indexed data structure denote circular arcs fit to image data .,3217,1
+3218,This paper presents an algorithm for labeling curvilinear structure at multiple scales in line drawings and edge images Symbolic [[ CURVE-ELEMENT tokens ]] residing in a << spatially-indexed and scale-indexed data structure >> denote circular arcs fit to image data .,3218,4