additive models @xcite provide an important family of models for semiparametric regression or classification . some reasons for the success of additive models are their increased flexibility when compared to linear or generalized linear models and their increased interpretability when compared to fully nonparametric models . it is well - known that good estimators in additive models are in general less prone to the curse of high dimensionality than good estimators in fully nonparametric models . many examples of such estimators belong to the large class of regularized kernel based methods over a reproducing kernel hilbert space @xmath0 , see e.g. @xcite . in the last years many interesting results on learning rates of regularized kernel based models for additive models have been published when the focus is on sparsity and when the classical least squares loss function is used , see e.g. @xcite , @xcite , @xcite , @xcite , @xcite , @xcite and the references therein . of course , the least squares loss function is differentiable and has many nice mathematical properties , but it is only locally lipschitz continuous and therefore regularized kernel based methods based on this loss function typically suffer on bad statistical robustness properties , even if the kernel is bounded . this is in sharp contrast to kernel methods based on a lipschitz continuous loss function and on a bounded loss function , where results on upper bounds for the maxbias bias and on a bounded influence function are known , see e.g. @xcite for the general case and @xcite for additive models . therefore , we will here consider the case of regularized kernel based methods based on a general convex and lipschitz continuous loss function , on a general kernel , and on the classical regularizing term @xmath1 for some @xmath2 which is a smoothness penalty but not a sparsity penalty , see e.g. @xcite . such regularized kernel based methods are now often called support vector machines ( svms ) , although the notation was historically used for such methods based on the special hinge loss function and for special kernels only , we refer to @xcite . in this paper we address the open question , whether an svm with an additive kernel can provide a substantially better learning rate in high dimensions than an svm with a general kernel , say a classical gaussian rbf kernel , if the assumption of an additive model is satisfied . our leading example covers learning rates for quantile regression based on the lipschitz continuous but non - differentiable pinball loss function , which is also called check function in the literature , see e.g. @xcite and @xcite for parametric quantile regression and @xcite , @xcite , and @xcite for kernel based quantile regression . we will not address the question how to check whether the assumption of an additive model is satisfied because this would be a topic of a paper of its own . of course , a practical approach might be to fit both models and compare their risks evaluated for test data . for the same reason we will also not cover sparsity . consistency of support vector machines generated by additive kernels for additive models was considered in @xcite . in this paper we establish learning rates for these algorithms . let us recall the framework with a complete separable metric space @xmath3 as the input space and a closed subset @xmath4 of @xmath5 as the output space . 
a borel probability measure @xmath6 on @xmath7 is used to model the learning problem and an independent and identically distributed sample @xmath8 is drawn according to @xmath6 for learning . a loss function @xmath9 is used to measure the quality of a prediction function @xmath10 by the local error @xmath11 . _ throughout the paper we assume that @xmath12 is measurable , @xmath13 , convex with respect to the third variable , and uniformly lipschitz continuous satisfying @xmath14 with a finite constant @xmath15 . _ support vector machines ( svms ) considered here are kernel - based regularization schemes in a reproducing kernel hilbert space ( rkhs ) @xmath0 generated by a mercer kernel @xmath16 . with a shifted loss function @xmath17 introduced for dealing even with heavy - tailed distributions as @xmath18 , they take the form @xmath19 where for a general borel measure @xmath20 on @xmath21 , the function @xmath22 is defined by @xmath23 where @xmath24 is a regularization parameter . the idea to shift a loss function has a long history , see e.g. @xcite in the context of m - estimators . it was shown in @xcite that @xmath22 is also a minimizer of the following optimization problem involving the original loss function @xmath12 if a minimizer exists : @xmath25 the additive model we consider consists of the _ input space decomposition _ @xmath26 with each @xmath27 a complete separable metric space and a _ hypothesis space _ @xmath28 where @xmath29 is a set of functions @xmath30 each of which is also identified as a map @xmath31 from @xmath3 to @xmath5 . hence the functions from @xmath32 take the additive form @xmath33 . we mention , that there is strictly speaking a notational problem here , because in the previous formula each quantity @xmath34 is an element of the set @xmath35 which is a subset of the full input space @xmath36 , @xmath37 , whereas in the definition of sample @xmath8 each quantity @xmath38 is an element of the full input space @xmath36 , where @xmath39 . because these notations will only be used in different places and because we do not expect any misunderstandings , we think this notation is easier and more intuitive than specifying these quantities with different symbols . the additive kernel @xmath40 is defined in terms of mercer kernels @xmath41 on @xmath27 as @xmath42 it generates an rkhs @xmath0 which can be written in terms of the rkhs @xmath43 generated by @xmath41 on @xmath27 corresponding to the form ( [ additive ] ) as @xmath44 with norm given by @xmath45 the norm of @xmath46 satisfies @xmath47 to illustrate advantages of additive models , we provide two examples of comparing additive with product kernels . the first example deals with gaussian rbf kernels . all proofs will be given in section [ proofsection ] . [ gaussadd ] let @xmath48 , @xmath49 $ ] and @xmath50 ^ 2.$ ] let @xmath51 and @xmath52.\ ] ] the additive kernel @xmath53 is given by @xmath54 furthermore , the product kernel @xmath55 is the standard gaussian kernel given by @xmath56 define a gaussian function @xmath57 on @xmath58 ^ 2 $ ] depending only on one variable by @xmath59 then @xmath60 but @xmath61 where @xmath62 denotes the rkhs generated by the standard gaussian rbf kernel @xmath63 . the second example is about sobolev kernels . 
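to make example [ gaussadd ] concrete , the following minimal numerical sketch ( not part of the original text ) builds the additive kernel of that example ( a sum of two univariate gaussian kernels ) and the corresponding product kernel ( the standard bivariate gaussian ) , and fits a plain kernel ridge regressor with each . the bandwidth , the ridge parameter and the synthetic additive target are illustrative assumptions , and least squares is used here only for simplicity instead of the shifted - loss scheme of the paper .

```python
# a minimal sketch (not from the paper): additive vs. product gaussian kernels
# on [0,1]^2, each plugged into a plain kernel ridge fit.  bandwidth, ridge
# parameter and the synthetic target are illustrative assumptions.
import numpy as np

def gaussian_1d(a, b, gamma=1.0):
    # univariate gaussian kernel matrix k_j(s, t) = exp(-gamma * (s - t)^2)
    d = a[:, None] - b[None, :]
    return np.exp(-gamma * d ** 2)

def additive_kernel(X, Y, gamma=1.0):
    # k(x, y) = sum_j k_j(x_j, y_j): sum of univariate kernels over coordinates
    return sum(gaussian_1d(X[:, j], Y[:, j], gamma) for j in range(X.shape[1]))

def product_kernel(X, Y, gamma=1.0):
    # standard multivariate gaussian rbf kernel on the full input space
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.exp(-(X[:, 0] - 0.5) ** 2) + 0.05 * rng.standard_normal(200)   # depends on x_1 only

lam = 1e-3   # regularization parameter (illustrative)
for name, kern in (("additive", additive_kernel), ("product", product_kernel)):
    K = kern(X, X)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)      # kernel ridge fit
    rmse = np.sqrt(np.mean((K @ alpha - y) ** 2))
    print(name, "training rmse:", rmse)
```

the point of the sketch is only the kernel algebra : the additive kernel is built coordinate - wise , so a target depending on one variable stays inside its hypothesis space , which is the content of example [ gaussadd ] .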
[ sobolvadd ] let @xmath64 , @xmath65 $ ] and @xmath58^s.$ ] let @xmath66 : = \bigl\{u\in l_2([0,1 ] ) ; d^\alpha u \in l_2([0,1 ] ) \mbox{~for~all~}|\alpha|\le 1\bigr\}\ ] ] be the sobolev space consisting of all square integrable univariate functions whose derivative is also square integrable . it is an rkhs with a mercer kernel @xmath67 defined on @xmath68 ^ 2 $ ] . if we take all the mercer kernels @xmath69 to be @xmath67 , then @xmath70 $ ] for each @xmath71 . the additive kernel @xmath72 is also a mercer kernel and defines an rkhs @xmath73\right\}.\ ] ] however , the multivariate sobolev space @xmath74^s)$ ] , consisting of all square integrable functions whose partial derivatives are all square integrable , contains discontinuous functions and is not an rkhs . denote the marginal distribution of @xmath6 on @xmath27 as @xmath75 . under the assumption that @xmath76 for each @xmath71 and that @xmath43 is dense in @xmath29 in the @xmath77-metric , it was proved in @xcite that @xmath78 in probability as long as @xmath79 satisfies @xmath80 and @xmath81 . the rest of the paper has the following structure . section [ ratessection ] contains our main results on learning rates for svms based on additive kernels . learning rates for quantile regression are treated as important special cases . section [ comparisonsection ] contains a comparison of our results with other learning rates published recently . section [ proofsection ] contains all the proofs and some results which can be interesting in their own . in this paper we provide some learning rates for the support vector machines generated by additive kernels for additive models which helps improve the quantitative understanding presented in @xcite . the rates are about asymptotic behaviors of the excess risk @xmath82 and take the form @xmath83 with @xmath84 . they will be stated under three kinds of conditions involving the hypothesis space @xmath0 , the measure @xmath6 , the loss @xmath12 , and the choice of the regularization parameter @xmath85 . the first condition is about the approximation ability of the hypothesis space @xmath0 . since the output function @xmath19 is from the hypothesis space , the learning rates of the learning algorithm depend on the approximation ability of the hypothesis space @xmath0 with respect to the optimal risk @xmath86 measured by the following approximation error . [ defapprox ] the approximation error of the triple @xmath87 is defined as @xmath88 to estimate the approximation error , we make an assumption about the minimizer of the risk @xmath89 for each @xmath90 , define the integral operator @xmath91 associated with the kernel @xmath41 by @xmath92 we mention that @xmath93 is a compact and positive operator on @xmath94 . hence we can find its normalized eigenpairs @xmath95 such that @xmath96 is an orthonormal basis of @xmath94 and @xmath97 as @xmath98 . fix @xmath99 . then we can define the @xmath100-th power @xmath101 of @xmath93 by @xmath102 this is a positive and bounded operator and its range is well - defined . the assumption @xmath103 means @xmath104 lies in this range . [ assumption1 ] we assume @xmath105 and @xmath106 where for some @xmath107 and each @xmath108 , @xmath109 is a function of the form @xmath110 with some @xmath111 . the case @xmath112 of assumption [ assumption1 ] means each @xmath113 lies in the rkhs @xmath43 . a standard condition in the literature ( e.g. 
, @xcite ) for achieving decays of the form @xmath114 for the approximation error ( [ approxerrordef ] ) is @xmath115 with some @xmath116 . here the operator @xmath117 is defined by @xmath118 in general , this can not be written in an additive form . however , the hypothesis space ( [ additive ] ) takes an additive form @xmath119 . so it is natural for us to impose an additive expression @xmath120 for the target function @xmath121 with the component functions @xmath113 satisfying the power condition @xmath110 . the above natural assumption leads to a technical difficulty in estimating the approximation error : the function @xmath113 has no direct connection to the marginal distribution @xmath122 projected onto @xmath27 , hence existing methods in the literature ( e.g. , @xcite ) can not be applied directly . note that on the product space @xmath123 , there is no natural probability measure projected from @xmath6 , and the risk on @xmath124 is not defined . our idea to overcome the difficulty is to introduce an intermediate function @xmath125 . it may not minimize a risk ( which is not even defined ) . however , it approximates the component function @xmath113 well . when we add up such functions @xmath126 , we get a good approximation of the target function @xmath121 , and thereby a good estimate of the approximation error . this is the first novelty of the paper . [ approxerrorthm ] under assumption [ assumption1 ] , we have @xmath127 where @xmath128 is the constant given by @xmath129 the second condition for our learning rates is about the capacity of the hypothesis space measured by @xmath130-empirical covering numbers . let @xmath131 be a set of functions on @xmath21 and @xmath132 for every @xmath133 the * covering number of @xmath131 * with respect to the empirical metric @xmath134 , given by @xmath135 is defined as @xmath136 and the * @xmath130-empirical covering number * of @xmath137 is defined as @xmath138 [ assumption2 ] we assume @xmath139 and that for some @xmath140 , @xmath141 and every @xmath142 , the @xmath130-empirical covering number of the unit ball of @xmath43 satisfies @xmath143 the second novelty of this paper is to observe that the additive nature of the hypothesis space yields the following nice bound with a dimension - independent power exponent for the covering numbers of the balls of the hypothesis space @xmath0 , to be proved in section [ samplesection ] . [ capacitythm ] under assumption [ assumption2 ] , for any @xmath144 and @xmath145 , we have @xmath146 the bound for the covering numbers stated in theorem [ capacitythm ] is special : the power @xmath147 is independent of the number @xmath148 of the components in the additive model . it is well - known @xcite in the literature of function spaces that the covering numbers of balls of the sobolev space @xmath149 on the cube @xmath150^s$ ] of the euclidean space @xmath151 with regularity index @xmath152 has the following asymptotic behavior with @xmath153 : @xmath154 here the power @xmath155 depends linearly on the dimension @xmath148 . similar dimension - dependent bounds for the covering numbers of the rkhss associated with gaussian rbf - kernels can be found in @xcite . the special bound in theorem [ capacitythm ] demonstrates an advantage of the additive model in terms of capacity of the additive hypothesis space . the third condition for our learning rates is about the noise level in the measure @xmath6 with respect to the hypothesis space . 
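as a numerical aside ( not in the original ) , the @xmath130-empirical covering number defined above can be made concrete : each function is represented by its vector of values on the sample , the empirical metric is the normalized euclidean distance , and a greedy procedure returns the size of a cover at a given radius , which upper bounds the covering number of the given finite family . the family of gaussian bumps and all numerical values below are assumptions chosen only for illustration .

```python
# a hedged illustration (not from the paper) of the l2-empirical covering
# number: functions are identified with their value vectors on the sample
# z_1, ..., z_m and covered greedily in the empirical metric d_2.
import numpy as np

def d2(u, v):
    # empirical metric d_2(f, g) = sqrt((1/m) * sum_i (f(z_i) - g(z_i))^2)
    return np.sqrt(np.mean((u - v) ** 2))

def greedy_cover_size(value_vectors, eps):
    # size of a greedily built eps-cover of the given finite family;
    # this upper bounds its covering number with respect to d_2
    centers = []
    for v in value_vectors:
        if all(d2(v, c) > eps for c in centers):
            centers.append(v)
    return len(centers)

# usage: translates of a gaussian bump evaluated on m random sample points
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=100)                         # sample points
family = [np.exp(-(x - c) ** 2 / 0.02) for c in np.linspace(0.0, 1.0, 200)]
for eps in (0.4, 0.2, 0.1, 0.05):
    print(eps, greedy_cover_size(family, eps))
```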
before stating the general condition , we consider a special case for quantile regression , to illustrate our general results . let @xmath156 be a quantile parameter . the quantile regression function @xmath157 is defined by its value @xmath158 to be a @xmath159-quantile of @xmath160 , i.e. , a value @xmath161 satisfying @xmath162 the regularization scheme for quantile regression considered here takes the form ( [ algor ] ) with the loss function @xmath12 given by the pinball loss as @xmath163 a noise condition on @xmath6 for quantile regression is defined in @xcite as follows . to this end , let @xmath164 be a probability measure on @xmath165 and @xmath166 . then a real number @xmath167 is called @xmath159-quantile of @xmath164 , if and only if @xmath167 belongs to the set @xmath168\bigr ) \ge \tau \mbox{~~and~~ } q\bigl([t , \infty)\bigr ) \ge 1-\tau\bigr\}\,.\ ] ] it is well - known that @xmath169 is a compact interval . [ noisecond ] let @xmath166 . 1 . a probability measure @xmath164 on @xmath165 is said to have a * @xmath159-quantile of type @xmath170 * , if there exist a @xmath159-quantile @xmath171 and a constant @xmath172 such that , for all @xmath173 $ ] , we have @xmath174 2 . let @xmath175 $ ] . we say that a probability measure @xmath20 on @xmath176 has a * @xmath159-quantile of @xmath177-average type @xmath170 * if the conditional probability measure @xmath178 has @xmath179-almost surely a @xmath159-quantile of type @xmath170 and the function @xmath180 where @xmath181 is the constant defined in part ( 1 ) , satisfies @xmath182 . one can show that a distribution @xmath164 having a @xmath159-quantile of type @xmath170 has a unique @xmath159-quantile @xmath183 . moreover , if @xmath164 has a lebesgue density @xmath184 then @xmath164 has a @xmath159-quantile of type @xmath170 if @xmath184 is bounded away from zero on @xmath185 $ ] since we can use @xmath186\}$ ] in ( [ tauquantileoftype2formula ] ) . this assumption is general enough to cover many distributions used in parametric statistics such as gaussian , student s @xmath187 , and logistic distributions ( with @xmath188 ) , gamma and log - normal distributions ( with @xmath189 ) , and uniform and beta distributions ( with @xmath190 $ ] ) . the following theorem , to be proved in section [ proofsection ] , gives a learning rate for the regularization scheme ( [ algor ] ) in the special case of quantile regression . [ quantilethm ] suppose that @xmath191 almost surely for some constant @xmath192 , and that each kernel @xmath41 is @xmath193 with @xmath194 for some @xmath195 . if assumption [ assumption1 ] holds with @xmath112 and @xmath6 has a @xmath159-quantile of @xmath177-average type @xmath170 for some @xmath196 $ ] , then by taking @xmath197 , for any @xmath198 and @xmath199 , with confidence at least @xmath200 we have @xmath201 where @xmath202 is a constant independent of @xmath203 and @xmath204 and @xmath205 please note that the exponent @xmath206 given by ( [ quantilerates2 ] ) for the learning rate in ( [ quantilerates ] ) is independent of the quantile level @xmath159 , of the number @xmath148 of additive components in @xmath207 , and of the dimensions @xmath208 and @xmath209 further note that @xmath210 , if @xmath211 , and @xmath212 if @xmath213 . because @xmath214 can be arbitrarily close to @xmath215 , the learning rate , which is independent of the dimension @xmath216 and given by theorem [ quantilethm ] , is close to @xmath217 for large values of @xmath177 and is close to @xmath218 or better , if @xmath211 . 
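for concreteness ( this illustration is not from the paper ) , the pinball loss of ( [ pinloss ] ) has the standard form coded below , and a quick numerical check confirms that the constant minimizing the empirical pinball risk is approximately a sample @xmath159-quantile . the sample , the quantile level and the search grid are arbitrary choices .

```python
# the standard pinball (check) loss and a sanity check that its empirical
# minimizer over constants is a sample tau-quantile.  this is only a sketch
# of the loss, not the paper's regularized estimator.
import numpy as np

def pinball_loss(y, t, tau):
    # L_tau(y, t) = tau * (y - t) if y >= t, and (1 - tau) * (t - y) otherwise
    r = y - t
    return np.where(r >= 0, tau * r, (tau - 1) * r)

rng = np.random.default_rng(3)
y = rng.standard_normal(1000)
tau = 0.8
grid = np.linspace(y.min(), y.max(), 2001)
risks = [pinball_loss(y, t, tau).mean() for t in grid]
print(grid[int(np.argmin(risks))], np.quantile(y, tau))   # approximately equal
```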
to state our general learning rates , we need an assumption on a _ variance - expectation bound _ which is similar to definition [ noisecond ] in the special case of quantile regression . [ assumption3 ] we assume that there exist an exponent @xmath219 $ ] and a positive constant @xmath220 such that @xmath221 assumption [ assumption3 ] always holds true for @xmath222 . if the triple @xmath223 satisfies some conditions , the exponent @xmath224 can be larger . for example , when @xmath12 is the pinball loss ( [ pinloss ] ) and @xmath6 has a @xmath159-quantile of @xmath177-average type @xmath225 for some @xmath196 $ ] and @xmath226 as defined in @xcite , then @xmath227 . [ mainratesthm ] suppose that @xmath228 is bounded by a constant @xmath229 almost surely . under assumptions [ assumption1 ] to [ assumption3 ] , if we take @xmath198 and @xmath230 for some @xmath231 , then for any @xmath232 , with confidence at least @xmath200 we have @xmath233 where @xmath234 is given by @xmath235 and @xmath202 is constant independent of @xmath203 or @xmath204 ( to be given explicitly in the proof ) . we now add some theoretical and numerical comparisons on the goodness of our learning rates with those from the literature . as already mentioned in the introduction , some reasons for the popularity of additive models are flexibility , increased interpretability , and ( often ) a reduced proneness of the curse of high dimensions . hence it is important to check , whether the learning rate given in theorem [ mainratesthm ] under the assumption of an additive model favourably compares to ( essentially ) optimal learning rates without this assumption . in other words , we need to demonstrate that the main goal of this paper is achieved by theorem [ quantilethm ] and theorem [ mainratesthm ] , i.e. that an svm based on an additive kernel can provide a substantially better learning rate in high dimensions than an svm with a general kernel , say a classical gaussian rbf kernel , provided the assumption of an additive model is satisfied . our learning rate in theorem [ quantilethm ] is new and optimal in the literature of svm for quantile regression . most learning rates in the literature of svm for quantile regression are given for projected output functions @xmath236 , while it is well known that projections improve learning rates @xcite . here the projection operator @xmath237 is defined for any measurable function @xmath10 by @xmath238 sometimes this is called clipping . such results are given in @xcite . for example , under the assumptions that @xmath6 has a @xmath159-quantile of @xmath177-average type @xmath170 , the approximation error condition ( [ approxerrorb ] ) is satisfied for some @xmath239 , and that for some constants @xmath240 , the sequence of eigenvalues @xmath241 of the integral operator @xmath117 satisfies @xmath242 for every @xmath243 , it was shown in @xcite that with confidence at least @xmath200 , @xmath244 where @xmath245 here the parameter @xmath246 measures the capacity of the rkhs @xmath247 and it plays a similar role as half of the parameter @xmath147 in assumption 2 . for a @xmath193 kernel and @xmath112 , one can choose @xmath246 and @xmath147 to be arbitrarily small and the above power index @xmath248 can be taken as @xmath249 . the learning rate in theorem [ quantilethm ] may be improved by relaxing assumption 1 to a sobolev smoothness condition for @xmath121 and a regularity condition for the marginal distribution @xmath250 . 
for example , one may use a gaussian kernel @xmath251 depending on the sample size @xmath203 and @xcite achieve the approximation error condition ( [ approxerrorb ] ) for some @xmath252 . this is done for quantile regression in @xcite . since we are mainly interested in additive models , we shall not discuss such an extension . [ gaussmore ] let @xmath48 , @xmath49 $ ] and @xmath50 ^ 2.$ ] let @xmath51 and the additive kernel @xmath72 be given by ( [ gaussaddform ] ) with @xmath253 in example [ gaussadd ] as @xmath52.\ ] ] if the function @xmath121 is given by ( [ gaussfcn ] ) , @xmath191 almost surely for some constant @xmath192 , and @xmath6 has a @xmath159-quantile of @xmath177-average type @xmath170 for some @xmath196 $ ] , then by taking @xmath197 , for any @xmath145 and @xmath199 , ( [ quantilerates ] ) holds with confidence at least @xmath200 . it is unknown whether the above learning rate can be derived by existing approaches in the literature ( e.g. @xcite ) even after projection . note that the kernel in the above example is independent of the sample size . it would be interesting to see whether there exists some @xmath99 such that the function @xmath57 defined by ( [ gaussfcn ] ) lies in the range of the operator @xmath254 . the existence of such a positive index would lead to the approximation error condition ( [ approxerrorb ] ) , see @xcite . let us now add some numerical comparisons on the goodness of our learning rates given by theorem [ mainratesthm ] with those given by @xcite . their corollary 4.12 gives ( essentially ) minmax optimal learning rates for ( clipped ) svms in the context of nonparametric quantile regression using one gaussian rbf kernel on the whole input space under appropriate smoothness assumptions of the target function . let us consider the case that the distribution @xmath6 has a @xmath159-quantile of @xmath177-average type @xmath170 , where @xmath255 , and assume that both corollary 4.12 in @xcite and our theorem [ mainratesthm ] are applicable . i.e. , we assume in particular that @xmath6 is a probability measure on @xmath256 $ ] and that the marginal distribution @xmath257 has a lebesgue density @xmath258 for some @xmath259 . furthermore , suppose that the optimal decision function @xmath260 has ( to make theorem [ mainratesthm ] applicable with @xmath261 $ ] ) the additive structure @xmath207 with each @xmath104 as stated in assumption [ assumption1 ] , where @xmath262 and @xmath263 , with minimal risk @xmath86 and additionally fulfills ( to make corollary 4.12 in @xcite applicable ) @xmath264 where @xmath265 $ ] and @xmath266 denotes a besov space with smoothness parameter @xmath267 . the intuitive meaning of @xmath248 is , that increasing values of @xmath248 correspond to increased smoothness . we refer to ( * ? ? ? * and p. 44 ) for details on besov spaces . it is well - known that the besov space @xmath268 contains the sobolev space @xmath269 for @xmath270 , @xmath271 , and @xmath272 , and that @xmath273 . we mention that if all @xmath41 are suitably chosen wendland kernels , their reproducing kernel hilbert spaces @xmath43 are sobolev spaces , see ( * ? ? ? * thm . 10.35 , p. 160 ) . furthermore , we use the same sequence of regularizing parameters as in ( * ? ? ? 4.9 , cor . 4.12 ) , i.e. , @xmath274 where @xmath275 , @xmath276 , @xmath277 $ ] , and @xmath278 is some user - defined positive constant independent of @xmath279 . for reasons of simplicity , let us fix @xmath280 . then ( * ? ? ? 
4.12 ) gives learning rates for the risk of svms for @xmath159-quantile regression , if a single gaussian rbf - kernel on @xmath281 is used for @xmath159-quantile functions of @xmath177-average type @xmath170 with @xmath255 , which are of order @xmath282 hence the learning rate in theorem [ quantilethm ] is better than the one in ( * ? ? ? 4.12 ) in this situation , if @xmath283 provided the assumption of the additive model is valid . table [ table1 ] lists the values of @xmath284 from ( [ explicitratescz2 ] ) for some finite values of the dimension @xmath216 , where @xmath285 . all of these values of @xmath284 are positive with the exceptions if @xmath286 or @xmath287 . this is in contrast to the corresponding exponent in the learning rate by ( * ? ? * cor . 4.12 ) , because @xmath288 table [ table2 ] and figures [ figure1 ] to [ figure2 ] give additional information on the limit @xmath289 . of course , higher values of the exponent indicates faster rates of convergence . it is obvious , that an svm based on an additive kernel has a significantly faster rate of convergence in higher dimensions @xmath216 compared to svm based on a single gaussian rbf kernel defined on the whole input space , of course under the assumption that the additive model is valid . the figures seem to indicate that our learning rate from theorem [ mainratesthm ] is probably not optimal for small dimensions . however , the main focus of the present paper is on high dimensions . .[table1 ] the table lists the limits of the exponents @xmath290 from ( * ? ? ? * cor . 4.12 ) and @xmath291 from theorem [ mainratesthm ] , respectively , if the regularizing parameter @xmath292 is chosen in an optimal manner for the nonparametric setup , i.e. @xmath293 , with @xmath294 for @xmath295 and @xmath296 . recall that @xmath297 $ ] . [ cols= " > , > , > , > " , ]
additive models play an important role in semiparametric statistics . this paper gives learning rates for regularized kernel based methods for additive models . these learning rates compare favourably in particular in high dimensions to recent results on optimal learning rates for purely nonparametric regularized kernel based quantile regression using the gaussian radial basis function kernel , provided the assumption of an additive model is valid . additionally , a concrete example is presented to show that a gaussian function depending only on one variable lies in a reproducing kernel hilbert space generated by an additive gaussian kernel , but does not belong to the reproducing kernel hilbert space generated by the multivariate gaussian kernel of the same variance . * key words and phrases . * additive model , kernel , quantile regression , semiparametric , rate of convergence , support vector machine .
introduction
main results on learning rates
comparison of learning rates
the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @xmath11 . because no strong interactions are present in the leptonic final state @xmath12 , such decays provide a clean way to probe the complex , strong interactions that bind the quark and antiquark within the initial - state meson . in these decays , strong interaction effects can be parametrized by a single quantity , @xmath13 , the pseudoscalar meson decay constant . the leptonic decay rate can be measured by experiment , and the decay constant can be determined by the equation ( ignoring radiative corrections ) @xmath14 where @xmath15 is the fermi coupling constant , @xmath16 is the cabibbo - kobayashi - maskawa ( ckm ) matrix @xcite element , @xmath17 is the mass of the meson , and @xmath18 is the mass of the charged lepton . the quantity @xmath13 describes the amplitude for the @xmath19 and @xmath20-quarks within the @xmath21 to have zero separation , a condition necessary for them to annihilate into the virtual @xmath22 boson that produces the @xmath12 pair . the experimental determination of decay constants is one of the most important tests of calculations involving nonperturbative qcd . such calculations have been performed using various models @xcite or using lattice qcd ( lqcd ) . the latter is now generally considered to be the most reliable way to calculate the quantity . knowledge of decay constants is important for describing several key processes , such as @xmath23 mixing , which depends on @xmath24 , a quantity that is also predicted by lqcd calculations . experimental determination @xcite of @xmath24 with the leptonic decay of a @xmath25 meson is , however , very limited as the rate is highly suppressed due to the smallness of the magnitude of the relevant ckm matrix element @xmath26 . the charm mesons , @xmath27 and @xmath28 , are better instruments to study the leptonic decays of heavy mesons since these decays are either less ckm suppressed or favored , _ i.e. _ , @xmath29 and @xmath30 are much larger than @xmath31 . thus , the decay constants @xmath32 and @xmath33 determined from charm meson decays can be used to test and validate the necessary lqcd calculations applicable to the @xmath34-meson sector . among the leptonic decays in the charm - quark sector , @xmath35 decays are more accessible since they are ckm favored . furthermore , the large mass of the @xmath11 lepton removes the helicity suppression that is present in the decays to lighter leptons . the existence of multiple neutrinos in the final state , however , makes measurement of this decay challenging . physics beyond the standard model ( sm ) might also affect leptonic decays of charmed mesons . depending on the non - sm features , the ratio of @xmath36 could be affected @xcite , as could the ratio @xcite @xmath37 . any of the individual widths might be increased or decreased . there is an indication of a discrepancy between the experimental determinations @xcite of @xmath33 and the most recent precision lqcd calculation @xcite . this disagreement is particularly puzzling since the cleo - c determination @xcite of @xmath32 agrees well with the lqcd calculation @xcite of that quantity . some @xcite conjecture that this discrepancy may be explained by a charged higgs boson or a leptoquark . 
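as a numerical illustration ( not taken from the article ) , the rate relation referred to above can be inverted to extract the decay constant from a measured branching fraction and lifetime . the sketch assumes the usual tree - level form of the leptonic width , ignoring radiative corrections as in the text , and every numerical input in the example call is a rough placeholder rather than a value used in this measurement .

```python
# a hedged sketch (not the article's code) of the standard leptonic-width
# relation  Gamma = (G_F^2 / 8 pi) f^2 m_l^2 M (1 - m_l^2 / M^2)^2 |V_cq|^2,
# inverted to obtain the decay constant f from a branching fraction and a
# lifetime.  all inputs in the example call are illustrative placeholders.
import math

HBAR_GEV_S = 6.582e-25  # hbar in GeV * s

def decay_constant(branching, lifetime_s, m_meson, m_lepton, v_cq, g_fermi=1.166e-5):
    # returns f in GeV; masses in GeV, G_F in GeV^-2, lifetime in seconds
    gamma = branching * HBAR_GEV_S / lifetime_s                     # partial width in GeV
    phase = m_lepton ** 2 * m_meson * (1.0 - m_lepton ** 2 / m_meson ** 2) ** 2
    return math.sqrt(8.0 * math.pi * gamma / (g_fermi ** 2 * phase * v_cq ** 2))

# illustrative call with placeholder, roughly charm-meson-sized inputs
print(decay_constant(branching=5.0e-2, lifetime_s=5.0e-13,
                     m_meson=1.97, m_lepton=1.777, v_cq=0.97))
```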
in this article , we report an improved measurement of the absolute branching fraction of the leptonic decay @xmath0 ( charge - conjugate modes are implied ) , with @xmath1 , from which we determine the decay constant @xmath33 . we use a data sample of @xmath38 events provided by the cornell electron storage ring ( cesr ) and collected by the cleo - c detector at the center - of - mass ( cm ) energy @xmath39 mev , near @xmath3 peak production @xcite . the data sample consists of an integrated luminosity of @xmath40 @xmath41 containing @xmath42 @xmath3 pairs . we have previously reported @xcite measurements of @xmath43 and @xmath0 with a subsample of these data . a companion article @xcite reports measurements of @xmath33 from @xmath43 and @xmath0 , with @xmath44 , using essentially the same data sample as the one used in this measurement . the cleo - c detector @xcite is a general - purpose solenoidal detector with four concentric components utilized in this measurement : a small - radius six - layer stereo wire drift chamber , a 47-layer main drift chamber , a ring - imaging cherenkov ( rich ) detector , and an electromagnetic calorimeter consisting of 7800 csi(tl ) crystals . the two drift chambers operate in a @xmath45 t magnetic field and provide charged particle tracking in a solid angle of @xmath46% of @xmath47 . the chambers achieve a momentum resolution of @xmath48% at @xmath49 gev/@xmath50 . the main drift chamber also provides specific - ionization ( @xmath51 ) measurements that discriminate between charged pions and kaons . the rich detector covers approximately @xmath52% of @xmath47 and provides additional separation of pions and kaons at high momentum . the photon energy resolution of the calorimeter is @xmath53% at @xmath54 gev and @xmath55% at @xmath56 mev . electron identification is based on a likelihood variable that combines the information from the rich detector , @xmath51 , and the ratio of electromagnetic shower energy to track momentum ( @xmath57 ) . we use a geant - based @xcite monte carlo ( mc ) simulation program to study efficiency of signal - event selection and background processes . physics events are generated by evtgen @xcite , tuned with much improved knowledge of charm decays @xcite , and final - state radiation ( fsr ) is modeled by the photos @xcite program . the modeling of initial - state radiation ( isr ) is based on cross sections for @xmath3 production at lower energies obtained from the cleo - c energy scan @xcite near the cm energy where we collect the sample . the presence of two @xmath58 mesons in a @xmath3 event allows us to define a single - tag ( st ) sample in which a @xmath58 is reconstructed in a hadronic decay mode and a further double - tagged ( dt ) subsample in which an additional @xmath59 is required as a signature of @xmath60 decay , the @xmath59 being the daughter of the @xmath60 . the @xmath61 reconstructed in the st sample can be either primary or secondary from @xmath62 ( or @xmath63 ) . the st yield can be expressed as @xmath64 where @xmath65 is the produced number of @xmath3 pairs , @xmath66 is the branching fraction of hadronic modes used in the st sample , and @xmath67 is the st efficiency . the @xmath68 counts the candidates , not events , and the factor of 2 comes from the sum of @xmath28 and @xmath61 tags . our double - tag ( dt ) sample is formed from events with only a single charged track , identified as an @xmath69 , in addition to a st . 
the yield can be expressed as @xmath70 where @xmath71 is the leptonic decay branching fraction , including the subbranching fraction of @xmath1 decay , @xmath72 is the efficiency of finding the st and the leptonic decay in the same event . from the st and dt yields we can obtain an absolute branching fraction of the leptonic decay @xmath71 , without needing to know the integrated luminosity or the produced number of @xmath3 pairs , @xmath73 where @xmath74 ( @xmath75 ) is the effective signal efficiency . because of the large solid angle acceptance with high segmentation of the cleo - c detector and the low multiplicity of the events with which we are concerned , @xmath76 , where @xmath77 is the leptonic decay efficiency . hence , the ratio @xmath78 is insensitive to most systematic effects associated with the st , and the signal branching fraction @xmath71 obtained using this procedure is nearly independent of the efficiency of the tagging mode . to minimize systematic uncertainties , we tag using three two - body hadronic decay modes with only charged particles in the final state . the three st modes and @xmath79 are shorthand labels for @xmath80 events within mass windows ( described below ) of the @xmath81 peak in @xmath82 and the @xmath83 peak in @xmath84 , respectively . no attempt is made to separate these resonance components in the @xmath85 dalitz plot . ] are @xmath86 , @xmath79 , and @xmath87 . using these tag modes also helps to reduce the tag bias which would be caused by the correlation between the tag side and the signal side reconstruction if tag modes with high multiplicity and large background were used . the effect of the tag bias @xmath88 can be expressed in terms of the signal efficiency @xmath74 defined by @xmath89 where @xmath90 is the st efficiency when the recoiling system is the signal leptonic decay with single @xmath59 in the other side of the tag . as the general st efficiency @xmath67 , when the recoiling system is any possible @xmath91 decays , will be lower than the @xmath90 , sizable tag bias could be introduced if the multiplicity of the tag mode were high , or the tag mode were to include neutral particles in the final state . as shown in sec . [ sec : results ] , this effect is negligible in our chosen clean tag modes . the @xmath92 decay is reconstructed by combining oppositely charged tracks that originate from a common vertex and that have an invariant mass within @xmath93 mev of the nominal mass @xcite . we require the resonance decay to satisfy the following mass windows around the nominal masses @xcite : @xmath94 ( @xmath95 mev ) and @xmath96 ( @xmath97 mev ) . we require the momenta of charged particles to be @xmath56 mev or greater to suppress the slow pion background from @xmath98 decays ( through @xmath99 ) . we identify a st by using the invariant mass of the tag @xmath100 and recoil mass against the tag @xmath101 . the recoil mass is defined as @xmath102 where @xmath103 is the net four - momentum of the @xmath4 beam , taking the finite beam crossing angle into account ; @xmath104 is the four - momentum of the tag , with @xmath105 computed from @xmath106 and the nominal mass @xcite of the @xmath91 meson . we require the recoil mass to be within @xmath107 mev of the @xmath108 mass @xcite . this loose window allows both primary and secondary @xmath91 tags to be selected . 
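schematically , and only as a sketch with invented numbers rather than the collaboration's analysis code , the two yield relations above give the absolute branching fraction : the produced number of @xmath3 pairs and the tag branching fractions cancel in the ratio of dt to st yields , leaving ( to first order ) only the signal - side efficiency .

```python
# tag-based absolute branching fraction from the yield relations described in
# the text: N_ST = 2 N B_tag eps_tag and N_DT = 2 N B_tag B_sig eps_tag,sig,
# so B_sig = (N_DT / N_ST) / (eps_tag,sig / eps_tag) ~ (N_DT / N_ST) / eps_sig.
# the numbers in the example call are invented placeholders.

def branching_fraction(n_dt, n_st, eps_tag_sig, eps_tag):
    # effective signal efficiency is the ratio eps_tag,sig / eps_tag
    return (n_dt / n_st) / (eps_tag_sig / eps_tag)

# factorization check: with eps_tag,sig ~ eps_tag * eps_sig the result depends
# only on the signal-side efficiency eps_sig
b = branching_fraction(n_dt=100.0, n_st=40000.0, eps_tag_sig=0.15 * 0.6, eps_tag=0.15)
print(b)   # = (100 / 40000) / 0.6
```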
to estimate the backgrounds in our st and dt yields from the wrong tag combinations ( incorrect combinations that , by chance , lie within the @xmath109 signal region ) , we use the tag invariant mass sidebands . we define the signal region as @xmath110 mev @xmath111 mev , and the sideband regions as @xmath112 mev @xmath113 mev or @xmath114 mev @xmath115 mev , where @xmath116 is the difference between the tag mass and the nominal mass . we fit the st @xmath109 distributions to the sum of double - gaussian signal function plus second - degree chebyshev polynomial background function to get the tag mass sideband scaling factor . the invariant mass distributions of tag candidates for each tag mode are shown in fig . [ fig : dm ] and the st yield and @xmath109 sideband scaling factor are summarized in table [ table : data - single ] . we find @xmath117 summed over the three tag modes . .[table : data - single ] summary of single - tag ( st ) yields , where @xmath118 is the yield in the st mass signal region , @xmath119 is the yield in the sideband region , @xmath120 is the sideband scaling factor , and @xmath68 is the scaled sideband - subtracted yield . [ cols="<,>,>,>,>",options="header " , ] we considered six semileptonic decays , @xmath121 @xmath122 , @xmath123 , @xmath124 , @xmath125 , @xmath126 , and @xmath127 , as the major sources of background in the @xmath128 signal region . the second dominates the nonpeaking background , and the fourth ( with @xmath129 ) dominates the peaking background . uncertainty in the signal yield due to nonpeaking background ( @xmath130 ) is assessed by varying the semileptonic decay branching fractions by the precision with which they are known @xcite . imperfect knowledge of @xmath131 gives rise to a systematic uncertainty in our estimate of the amount of peaking background in the signal region , which has an effect on our branching fraction measurement of @xmath132 . we study differences in efficiency , data vs mc events , due to the extra energy requirement , extra track veto , and @xmath133 requirement , by using samples from data and mc events , in which _ both _ the @xmath134 and @xmath2 satisfy our tag requirements , i.e. , `` double - tag '' events . we then apply each of the above - mentioned requirements and compare loss in efficiency of data vs mc events . in this way we obtain a correction of @xmath135 for the extra energy requirement and systematic uncertainties on each of the three requirements of @xmath136 ( all equal , by chance ) . the non-@xmath69 background in the signal @xmath69 candidate sample is negligible ( @xmath137 ) due to the low probability ( @xmath138 per track ) that hadrons ( @xmath139 or @xmath140 ) are misidentified as @xmath69 @xcite . uncertainty in these backgrounds produces a @xmath141 uncertainty in the measurement of @xmath142 . the secondary @xmath69 backgrounds from charge symmetric processes , such as @xmath143 dalitz decay ( @xmath144 ) and @xmath145 conversion ( @xmath146 ) , are assessed by measuring the wrong - sign signal electron in events with @xmath147 . the uncertainty in the measurement from this source is estimated to be @xmath148 . other possible sources of systematic uncertainty include @xmath68 ( @xmath137 ) , tag bias ( @xmath149 ) , tracking efficiency ( @xmath148 ) , @xmath59 identification efficiency ( @xmath150 ) , and fsr ( @xmath150 ) . combining all contributions in quadrature , the total systematic uncertainty in the branching fraction measurement is estimated to be @xmath151 . 
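as a footnote to the sideband procedure described at the beginning of this passage , the scaled sideband subtraction and its statistical uncertainty look schematically as follows ( placeholder numbers , not the values of table [ table : data - single ] ; the uncertainty of the scaling factor itself is neglected ) .

```python
# minimal sketch of the scaled sideband subtraction used for the single-tag
# yields: the mass-sideband yield, scaled by the factor from the fit,
# estimates the wrong-tag background under the signal peak.

def sideband_subtract(n_signal_region, n_sideband, scale):
    # scaled sideband-subtracted yield with poisson errors added in quadrature
    yield_ = n_signal_region - scale * n_sideband
    err = (n_signal_region + scale ** 2 * n_sideband) ** 0.5
    return yield_, err

print(sideband_subtract(n_signal_region=26000.0, n_sideband=8000.0, scale=0.45))
```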
in summary , using the sample of @xmath152 tagged @xmath28 decays with the cleo - c detector we obtain the absolute branching fraction of the leptonic decay @xmath153 through @xmath154 @xmath155 where the first uncertainty is statistical and the second is systematic . this result supersedes our previous measurement @xcite of the same branching fraction , which used a subsample of data used in this work . the decay constant @xmath33 can be computed using eq . ( [ eq : f ] ) with known values @xcite @xmath156 gev@xmath157 , @xmath158 mev , @xmath159 mev , and @xmath160 s. we assume @xmath161 and use the value @xmath162 given in ref . we obtain @xmath163 combining with our other determination @xcite of @xmath164 mev with @xmath43 and @xmath0 ( @xmath165 ) decays , we obtain @xmath166 this result is derived from absolute branching fractions only and is the most precise determination of the @xmath91 leptonic decay constant to date . our combined result is larger than the recent lqcd calculation @xmath167 mev @xcite by @xmath168 standard deviations . the difference between data and lqcd for @xmath33 could be due to physics beyond the sm @xcite , unlikely statistical fluctuations in the experimental measurements or the lqcd calculation , or systematic uncertainties that are not understood in the lqcd calculation or the experimental measurements . combining with our other determination @xcite of @xmath169 , via @xmath44 , we obtain @xmath170 using this with our measurement @xcite of @xmath171 , we obtain the branching fraction ratio @xmath172 this is consistent with @xmath173 , the value predicted by the sm with lepton universality , as given in eq . ( [ eq : f ] ) with known masses @xcite . we gratefully acknowledge the effort of the cesr staff in providing us with excellent luminosity and running conditions . d. cronin - hennessy and a. ryd thank the a.p . sloan foundation . this work was supported by the national science foundation , the u.s . department of energy , the natural sciences and engineering research council of canada , and the u.k . science and technology facilities council . c. amsler _ et al . _ ( particle data group ) , phys . b * 667 * , 1 ( 2008 ) . k. ikado _ et al . _ ( belle collaboration ) , phys . lett . * 97 * , 251802 ( 2006 ) . b. aubert _ et al . _ ( babar collaboration ) , phys . rev . d * 77 * , 011107 ( 2008 ) . a. g. akeroyd and c. h. chen , phys . d * 75 * , 075004 ( 2007 ) ; a. g. akeroyd , prog . phys . * 111 * , 295 ( 2004 ) . j. l. hewett , arxiv : hep - ph/9505246 . w. s. hou , phys . d * 48 * , 2342 ( 1993 ) . e. follana , c. t. h. davies , g. p. lepage , and j. shigemitsu ( hpqcd collaboration ) , phys . lett . * 100 * , 062002 ( 2008 ) . b. i. eisenstein _ et al . _ ( cleo collaboration ) , phys . rev . d * 78 * , 052003 ( 2008 ) . b. a. dobrescu and a. s. kronfeld , phys . * 100 * , 241802 ( 2008 ) . d. cronin - hennessy _ et al . _ ( cleo collaboration ) , arxiv:0801.3418 . m. artuso _ et al . _ ( cleo collaboration ) , phys . lett . * 99 * , 071802 ( 2007 ) . k. m. ecklund _ et al . _ ( cleo collaboration ) , phys . rev . lett . * 100 * , 161801 ( 2008 ) . j. p. alexander _ et al . _ ( cleo collaboration ) , phys . rev . d * 79 * , 052001 ( 2009 ) . y. kubota _ et al . _ ( cleo collaboration ) , nucl . instrum . a * 320 * , 66 ( 1992 ) . d. peterson _ et al . _ , instrum . methods phys . , sec . a * 478 * , 142 ( 2002 ) . m. artuso _ et al . _ , nucl . instrum . methods phys . a * 502 * , 91 ( 2003 ) . s. dobbs _ et al . 
_ ( cleo collaboration ) , phys . rev . d * 76 * , 112001 ( 2007 ) . j. p. alexander _ et al . _ ( cleo collaboration ) , phys . rev . lett . * 100 * , 161804 ( 2008 ) . e. barberio and z. was , comput . . commun . * 79 * , 291 ( 1994 ) .
we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected near the @xmath3 peak production energy in @xmath4 collisions with the cleo - c detector . we obtain @xmath5 and determine the decay constant @xmath6 mev , where the first uncertainties are statistical and the second are systematic .
introduction
data and the cleo-c detector
analysis method
summary