SECTION 1. SHORT TITLE. This Act may be cited as the ``Healthy Foods for Healthy Living Act''. SEC. 2. DEPARTMENT OF AGRICULTURE GRANTS TO PROMOTE GREATER CONSUMPTION OF FRESH FRUITS, FRESH VEGETABLES, AND OTHER HEALTHY FOODS IN LOW-INCOME COMMUNITIES. (a) Grants Authorized.--The Secretary of Agriculture may make grants for the purposes specified in subsection (b) to any of the following: (1) A community-based organization operating in a low-income community. (2) A local redevelopment agency that is chartered, established, or otherwise sanctioned by a State or local government. (b) Use of Grant Amounts.--The recipient of a grant under this section shall use the grant amounts for one or more of the following activities: (1) To assist in purchasing appropriate equipment or in hiring and training personnel to expand the inventory of fresh fruits and vegetables or other healthy food alternatives, as defined by the Department of Agriculture, such as healthier dairy and non-dairy to whole milk alternatives, 100 percent pure fruit juices, and products with 0 grams of transfat, available for residents of a low-income community. (2) To carry out consumer education and outreach activities to encourage the purchase of products described in paragraph (1), such as by informing residents of low-income communities about the health risks associated with high-calorie, low-exercise lifestyles and the benefits of healthy living. (c) Maximum Grant.--A grant under this section may not exceed $100,000. (d) Community-Based Organization Defined.--In this section, the term ``community-based organization'' includes schools, day-care centers, senior centers, community health centers, food banks, or emergency feeding organizations. (e) Authorization of Appropriations.--There are authorized to be appropriated to the Secretary to carry out this section $5,000,000 for fiscal year 2008. SEC. 3.
COVERAGE OF ADDITIONAL PRIMARY CARE AND PREVENTIVE SERVICES UNDER THE MEDICARE AND MEDICAID PROGRAMS. (a) Medicare Program.-- (1) In general.--Section 1861 of the Social Security Act (42 U.S.C. 1395x) is amended-- (A) in subsection (s)(2), by adding at the end the following new subparagraph: ``(BB) additional primary and preventive services described in subsection (ccc);''; and (B) by adding at the end the following new subsection: ``Additional Primary and Preventive Services ``(ccc) The term `additional primary and preventive services' means such primary and preventive services that are not otherwise covered under this title as the Secretary shall specify when provided by qualified providers, as specified by the Secretary. Such term includes the following: ``(1) Services for the prevention and treatment of obesity and obesity-related disease. ``(2) Supervised exercise sessions. ``(3) Exercise stress testing for the purpose of exercise prescriptions. ``(4) Lifestyle health improvement education. ``(5) Culinary arts education for the purpose of promoting proper nutrition.''. (2) Conforming amendments.--(A) Section 1862(a)(1) of such Act (42 U.S.C. 1395y(a)(1)) is amended-- (i) by striking ``and'' at the end of subparagraph (M); (ii) by adding ``and'' at the end of subparagraph (N); and (iii) by adding at the end the following new subparagraph: ``(O) in the case of additional primary care and preventive services, which are performed more frequently than the Secretary may specify;''. (B) Section 1833(b)(5) of such Act (42 U.S.C. 1395l(b)(5)) is amended by inserting ``or additional primary care or preventive services (as defined in section 1861(ccc))'' after ``(jj))''. (b) Medicaid Program.--Section 1905(a) of the Social Security Act (42 U.S.C. 
1396d(a)) is amended-- (1) by striking ``and'' at the end of paragraph (27); (2) by redesignating paragraph (28) as paragraph (29); and (3) by inserting after paragraph (27) the following new paragraph: ``(28) additional primary care and preventive services (as defined in section 1861(ccc)) which are not otherwise covered under this subsection; and''. (c) Effective Date.--The amendments made by this section shall take effect on the first day of the first calendar quarter beginning after the date of the enactment of this Act, regardless of whether regulations to implement the amendments are in effect as of such date.
Healthy Foods for Healthy Living Act - Authorizes the Secretary of Agriculture to make grants to community-based organizations and local redevelopment agencies operating in low-income communities to: (1) assist in purchasing appropriate equipment or in hiring and training personnel to expand the inventory of fresh fruits and vegetables or other healthy food alternatives available for residents of a low-income community; and (2) carry out related consumer education and outreach activities. Amends title XVIII (Medicare) and title XIX (Medicaid) of the Social Security Act to cover additional primary and preventive services relating to obesity treatment and prevention, supervised exercise sessions, stress testing, lifestyle modification education, and nutrition education.
the canonical x - ray afterglow light curve containing five components after the prompt emission phase is a great finding of swift ( chincarini et al . 2005 ; nousek et al . 2006 ; o'brien et al . 2006 ; zhang et al . 2006 ; zhang 2007 ) . the first of the five is the so - called `` steep decay phase '' , which generally extends to @xmath2 , with a temporal decay slope typically @xmath3 or much steeper ( vaughan et al . 2006 ; cusumano et al . 2006 ; o'brien et al . 2006 ) . a hint that this phase arises from the emission of the high - latitude fireball surface is that it is typically smoothly connected to the prompt emission phase ( tagliaferri et al . 2005 ; barthelmy et al . 2005 ; liang et al . 2006 ) . generally , the steep decay phase has been interpreted as a consequence of the so - called curvature effect ( fenimore et al . 1996 ; kumar & panaitescu 2000 ; dermer 2004 ; dyks et al . 2005 ; butler & kocevski 2007a ; liang et al . 2006 ; panaitescu et al . 2006 ; zhang et al . 2006 ) . the curvature effect is a combined effect that includes the delay of time and the shifting of the intrinsic spectrum as well as other relevant factors of an expanding fireball ( see qin et al . 2006 for a detailed explanation ) . the effect was intensively studied recently in the prompt gamma - ray phase , where the profile of the full light curve of pulses , the spectral lags , the power - law relation between the pulse width and energy , and the evolution of the hardness ratio curve are concerned ( sari & piran 1997 ; qin 2002 ; ryde & petrosian 2002 ; kocevski et al . 2003 ; qin & lu 2005 ; shen et al . 2005 ; lu et al . 2006 ; peng et al . 2006 ; qin et al . 2004 , 2005 , 2006 ) . as early as a decade ago , fenimore et al .
( 1996 ) found that , due to the curvature effect , light curves arising from the emission of an infinitely thin shell would be a power - law of observational time when the rest - frame photon number spectrum is a power - law and the emission is within an infinitesimal time interval . in this case , the two power - law indexes are related by @xmath4 , where @xmath5 is the light curve index and @xmath6 the spectral index . concerning the x - ray afterglow emission , kumar & panaitescu ( 2000 ) also found that , due to the curvature effect , the light curve of a shock - heated fireball shell radiating with a power - law spectrum within the observational band ( i.e. , the x - ray band in the early afterglow observation ) is a power - law of time as well , and the relation @xmath4 holds in this situation . as revealed in fig . 7 of nousek et al . ( 2006 ) , the relation @xmath4 is roughly in agreement with the data in the steep decay phase of some swift bursts . however , the figure also shows that the real relations between the two indexes of some bursts significantly deviate from the @xmath4 curve . this might be due to the improper re - setting of time , which should be set to the real time when the central engine restarts ( see liang et al . 2006 ) . in addition , more or less subtracting the underlying afterglow contribution would lead to other values of the temporal index @xmath5 ( for a detailed explanation , see zhang 2007 ) . we notice that the derivation of the relation @xmath4 in previous papers is based on the main part of the curvature effect . does it still hold ( or , in what situation would it still hold ) when the full curvature effect is considered ? this motivates our investigation below . the structure of the paper is as follows . in section 2 , we present a general analysis of the full curvature effect in cases when the intrinsic emission is a power - law . in section 3 , we discuss light curves of power - law emission associated with several typical intrinsic temporal profiles .
presented in section 4 is an example of the application of our model . conclusions are presented in the last section . observation of the emission arising from an expanding fireball would be influenced by the delay of time of different areas of the fireball surface , the variation of the intensity due to the growing of the fireball radius , the variation of the time contracted factor and the shifting of the intrinsic spectrum associated with the angle to the line of sight . taking all these factors into account , one comes to a full knowledge of the so - called curvature effect ( see also qin et al . 2006 for a detailed explanation ) . consider a constantly expanding fireball shell emitting within proper time interval @xmath7 and over the fireball area confined by @xmath8 , where @xmath9 is the angle to the line of sight . assume that the energy range of the emission is not limited . following the same approach adopted in qin ( 2002 ) and qin et al . ( 2004 ) , one can verify that the flux density expected by a distant observer measured at laboratory time @xmath10 is @xmath11[r_{c}/c+(t_{0}-t_{0,c})\gamma ( v / c)]^{2}dt_{0}}{d^{2}\gamma ^{2}\{r_{c}/c-[d / c-(t_{ob}-t_{c})](v / c)\}^{2}},\]]where @xmath12 and @xmath13 are determined by @xmath14\gamma } + t_{0,c}\}\]]and @xmath15\gamma } + t_{0,c}\},\]]respectively , and @xmath16 and @xmath17 are related by @xmath18(v / c)}{r_{c}/c+(t_{0}-t_{0,c})\gamma ( v / c)}\gamma \nu .\]]the observation time is confined by @xmath19[(t_{0,\min } -t_{0,c})\gamma + t_{c}]+[t_{c}(v / c)-r_{c}/c]\cos \theta _ { \min } + d / c\leq t_{ob } \\ \leq \lbrack 1-(v / c)\cos \theta _ { \max } ] [ ( t_{0,\max } -t_{0,c})\gamma + t_{c}]+[t_{c}(v / c)-r_{c}/c]\cos \theta _ { \max } + d / c\end{array}.\]]beyond this time interval , no photons of the emission are detectable by the observer . a power - law spectrum was commonly observed in early x - ray afterglows , especially in the steep decay phase ( e.g. , vaughan et al . 2006 ; cusumano et al .
2006 ; o'brien et al . 2006 ) . in this paper we focus our attention only on the case of the intrinsic emission with a power - law spectrum , which is expected in the case of synchrotron emission produced by shocks and was generally assumed in previous investigations ( e.g. , fenimore et al . 1996 ; sari et al . 1998 ; kumar & panaitescu 2000 ) . let the intensity of the intrinsic emission be @xmath20 ( kumar & panaitescu 2000 ) . one gets from equation ( 1 ) that @xmath21^{2+\beta } [ ( t_{0}-t_{0,c})\gamma + d / c-(t_{ob}-t_{c})]dt_{0}}{d^{2}(\gamma v / c)^{2+\beta } ( t_{ob}-t_{c}+r_{c}/v - d / c)^{2+\beta } } , \]]where relation ( 4 ) is applied . assigning @xmath22 , one comes to @xmath23^{2+\beta } [ ( t_{0}-t_{0,c})\gamma + r_{c}/v - t]dt_{0},\]]with @xmath24\gamma } + t_{0,c}\},\]]@xmath25\gamma } + t_{0,c}\},\]]@xmath26 and @xmath19[(t_{0,\min } -t_{0,c})\gamma + t_{c}]+[t_{c}(v / c)-r_{c}/c]\cos \theta _ { \min } + r_{c}/v - t_{c}\leq t \\ \leq \lbrack 1-(v / c)\cos \theta _ { \max } ] [ ( t_{0,\max } -t_{0,c})\gamma + t_{c}]+[t_{c}(v / c)-r_{c}/c]\cos \theta _ { \max } + r_{c}/v - t_{c}\end{array } .\ ] ] the meaning of @xmath27 defined by equation ( 7 ) can be revealed by employing equation ( 8) in qin et al . ( 2004 ) ( where quantity @xmath27 is now written as @xmath10 ) . according to equation ( 8) in qin et al . ( 2004 ) , emission from @xmath28 ( this emission occurs at @xmath29 ) corresponds to @xmath30 ; and emission from the area of @xmath31 from any @xmath32 ( occurring at @xmath29 ) gives rise to @xmath33 . quantity @xmath34 is nothing but the traveling time of the fireball surface from the explosion spot to @xmath32 , contracted by factor @xmath35 since the area of @xmath36 moves towards the observer with lorentz factor @xmath37 . thus , @xmath30 is the moment when photons emitted from @xmath38 reach the observer . even for @xmath29 , one would have @xmath39 if @xmath40 .
the emission time @xmath29 does not mean that photons are radiated at @xmath38 . instead , it means that these photons are emitted from the surface of the fireball with radius @xmath32 which is measured at @xmath41 ( see qin 2002 , appendix a ) . note that when the power - law range is limited , it would constrain the integral limits @xmath12 and @xmath13 , which would then differ from equations ( 2 ) and ( 3 ) , or ( 9 ) and ( 10 ) ( see qin 2002 ) . in the following , we adopt kumar & panaitescu 's ( 2000 ) assumption : the intrinsic emission is a strict power - law within the energy range corresponding to the observed energy channel . thus , equations ( 2 ) and ( 3 ) , or ( 9 ) and ( 10 ) , are applicable . according to ( 9 ) and ( 10 ) , @xmath42^{2+\beta } [ ( t_{0}-t_{0,c})\gamma + r_{c}/v - t]dt_{0}$ ] is only a function of @xmath27 . let @xmath43^{2+\beta } [ ( t_{0}-t_{0,c})\gamma + r_{c}/v - t]dt_{0}.\]]equation ( 8) could then be written as @xmath44 . it shows that , in the case that the power - law intrinsic radiation intensity @xmath45 holds within the energy range which corresponds to the observed energy channel due to the doppler shifting , a power - law spectrum will also hold within the observed channel and the index will be exactly the same as that of the intrinsic spectrum . taking factor @xmath46 as a constant , equation ( 14 ) gives rise to @xmath47 . this is the well - known flux density associated with the curvature effect , which reveals the relation between the temporal and spectral power - law indexes : @xmath48 , where @xmath5 is the temporal index ( e.g. , when assuming @xmath49 ) ( see fenimore et al . 1996 ; kumar & panaitescu 2000 ) . let us consider an intrinsic emission with a @xmath50 function of time . in this situation , effects arising from the duration of the real intrinsic emission will be omitted and therefore those merely coming from the expanding motion of the fireball surface will be clearly seen .
without loss of generality , we assume that @xmath51 and take @xmath52 and @xmath53 ( this corresponds to the half fireball surface facing us , which will be taken throughout this paper ) . one then gets from ( 12 ) that @xmath54 . within the observation time confined by ( 16 ) , the integral ( 13 ) becomes @xmath55 . therefore , @xmath56 . when @xmath27 is beyond the time range confined by ( 16 ) , @xmath57 and then @xmath58 . based on qin ( 2002 ) and qin et al . ( 2004 ) , one can check that the term @xmath59 $ ] in equation ( 1 ) , or the term @xmath60 $ ] in equation ( 13 ) , comes from the projection factor of the infinitesimal fireball surface area at the angle concerned ( say , @xmath61 ) towards the distant observer , which is known as @xmath62 . this term becomes @xmath63 for a @xmath64-function temporal radiation when adopting the new time definition ( 7 ) . corresponding to larger observation times , the line of sight angles of the emitting areas are larger , and then the term @xmath62 becomes smaller . note that when one considers only a very small cone towards the observer , this term can be ignored since it varies very mildly within the angle range close to @xmath31 . however , what we discuss here is the steep decay phase of the early x - ray afterglow , which is generally assumed to arise from high latitude emission . in this situation , the variation of this term would be significant . according to ( 11 ) , another noticeable term , @xmath65 $ ] , in equation ( 1 ) or ( 13 ) reflects the shifting of frequency . equation ( 11 ) suggests that the flux observed at frequency @xmath66 and time @xmath27 will be contributed by rest - frame photons of frequency @xmath67 emitted at proper time @xmath68 ( note that the flux will also be contributed by rest - frame photons of other frequencies @xmath69 emitted at other proper times @xmath70 so long as they satisfy equation ( 11 ) , and the value of the flux is determined by all these possible photons ) .
quantity @xmath65 $ ] is a shifting factor of the frequency when observation time @xmath27 is fixed . this term is independent of observation time , but due to its coupling with @xmath71 in the term @xmath59 $ ] , it might also affect the light curve . similarly , the term @xmath72 might also play a role due to its coupling with @xmath71 in the term @xmath59 $ ] . the last factor affecting the profile of light curves is the integral range of equation ( 13 ) . the integral range might differ from time to time since the fireball surface area that sends photons to the observer , which are observed at time @xmath27 , might change with time . as shown in ( 9 ) and ( 10 ) , both @xmath12 and @xmath13 are determined by observation time @xmath27 . a time - dependent integral range in equation ( 13 ) could thus make the decay phase of the light curve deviate from a strict power - law . [ fig . 1 caption : light curves of t^{-(2+\beta ) } $ ] ( lower lines ) and @xmath73 ( upper lines ) associated with @xmath74 ( solid lines ) and @xmath75 ( dashed lines ) . note that the lower and upper lines overlap in the main domain of the corresponding light curves . ] in the following , we show how these time factors affect the decay phase of light curves which arise from the intrinsic emission with a power - law spectrum . we first consider the case of the temporal profile of the intrinsic emission being a @xmath64-function of time . the light curve arising from this emission is @xmath76t^{-(2+\beta ) } $ ] according to ( 18 ) . we take @xmath77 and @xmath78 to plot the curves . we consider fireball radii with @xmath79 and @xmath80 , which correspond to the two typical radii @xmath74 and @xmath81 respectively ( see ryde & petrosian 2002 ) . shown in fig . 1 are the light curves of @xmath76t^{-(2+\beta ) } $ ] and @xmath73 associated with @xmath82 and @xmath75 .
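as an illustration of the geometry behind the @xmath64-function case , the arrival - time delay and the projection factor @xmath62 can be sketched with a toy model ( this is our own schematic , not the paper 's equation ( 1 ) ; all function names and parameter values are assumptions ) : photons emitted simultaneously from a spherical surface of radius R arrive with delay ( R / c ) ( 1 - cos theta ) , so the decay ends at t of roughly R / c for any lorentz factor .

```python
import math

C = 3e10  # speed of light, cm/s

def delta_flash_flux(t, radius, gamma, beta):
    """Toy curvature-effect light curve for an instantaneous flash from a
    spherical surface (schematic; not the paper's full equation (1)).
    A photon from angle theta is delayed by (radius/C)*(1 - cos(theta)),
    so the ring seen at time t has cos(theta) = 1 - C*t/radius."""
    cos_theta = 1.0 - C * t / radius
    if cos_theta <= 0.0:          # past theta = pi/2: the half surface is exhausted
        return 0.0
    v_over_c = math.sqrt(1.0 - 1.0 / gamma**2)
    doppler = 1.0 / (gamma * (1.0 - v_over_c * cos_theta))
    # Doppler boosting of a nu^-beta spectrum plus the cos(theta) projection factor
    return doppler ** (2.0 + beta) * cos_theta

radius, beta = 3e15, 1.0  # assumed values for illustration only
# the flux vanishes past t = radius/C regardless of the lorentz factor
for gamma in (50.0, 100.0, 300.0):
    assert delta_flash_flux(0.5 * radius / C, radius, gamma, beta) > 0.0
    assert delta_flash_flux(1.1 * radius / C, radius, gamma, beta) == 0.0
```

in this toy model the duration of the observable decay is set by radius / C alone , which is why the extent of the power - law phase can serve as a measure of the fireball radius .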
we find that , in the case of very short intrinsic emission , although the @xmath63 term in equation ( 18 ) plays a role in the decay phase , the temporal curve closely follows a power - law in the main domain of the phase . following the power - law curve is a tail falling off rapidly due to the effect of the @xmath63 term . a remarkable feature revealed by the figure is that the power - law decay time is solely determined by , and very sensitive to , the radius of the fireball , and thus the extent of the power - law range itself can tell how large the fireball radius is . for example , if the power - law range is found to extend to @xmath83 , the radius must be larger than @xmath84 , and if it is found to extend to @xmath85 , the radius must be larger than @xmath86 . surprisingly , this conclusion is independent of the lorentz factor . note that this conclusion holds when the intrinsic emission is extremely short so that its temporal profile can be treated as a @xmath64-function . as illustrated in fig . 1 , a strict @xmath87 curve followed by a rapidly falling tail is a feature of extremely short intrinsic emission . when this feature is observed , one can estimate the fireball radius merely from the time scale of the power - law decay phase , so long as the spectrum is a power - law and the relation @xmath48 holds . second , let us consider an intrinsic emission whose temporal profile is an exponential of time and check whether the resulting light curve differs from that arising from the @xmath64-function emission . we ignore the contribution from the rise phase of the emission of shocks ( this corresponds to the situation when the rise time is extremely short ) . the intrinsic decaying light curve with an exponential form is assumed to be @xmath88 $ ] for @xmath89 .
equation ( 14 ) now becomes @xmath90 with @xmath91^{2+\beta } [ ( t_{0}-t_{0,c})\gamma v / r_{c}+1-tv / r_{c}]dt_{0}}{exp[(t_{0}-t_{0,c})/\sigma _ { d}]}\qquad \qquad ( t_{0}>t_{0,c}),\]]@xmath92 and @xmath93 where @xmath94 is a constant and observation time @xmath27 is confined by @xmath95 . without loss of generality , we take @xmath96 . equations ( 20)-(22 ) then become @xmath97 , @xmath98 and @xmath99 . here , we take @xmath100 , @xmath101 , and adopt @xmath79 to plot the light curves . for the lorentz factor , we take @xmath102 and @xmath103 , respectively . [ fig . 2 caption : light curves ( dashed lines for @xmath104 ) arising from the intrinsic emission with its temporal profile being an exponential function of time ( see equation ( 19 ) ) , plotted in cases of @xmath105 , @xmath106 , @xmath107 and @xmath83 respectively , where @xmath108 is determined by ( 24 ) . for the sake of comparison , the lines of fig . 1 are also plotted ( the grey lines ) . ] shown in fig . 2 are the light curves plotted with different values of the width of the exponential function ( @xmath105 , @xmath106 , @xmath107 and @xmath83 ) . the light curves are quite similar to those arising from the intrinsic emission with its temporal profile being a @xmath64-function of time ( see fig . 1 , where a feature of a @xmath87 curve followed by a rapidly falling tail is observed ) . due to the contribution of the exponential decay curve of the intrinsic emission , the range of the light curves is slightly larger than that in the case of the intrinsic emission with a @xmath64-function of time ( this can be observed when the width is large enough ; see the lower right panel of fig . 2 ) . this is understandable since after the width the emission of an exponential function dies away rapidly and therefore its contribution can be ignored .
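the statement that a narrow exponential profile reproduces the @xmath64-function light curve can be checked with a toy convolution ( our own sketch , not the paper 's equations ( 20)-(25 ) ; the response window and all parameter values here are assumptions ) : fold an exponential intrinsic profile with a t^{-(2+\beta ) } flash response and compare with the response itself .

```python
import math

def delta_response(t, beta, t_min=1.0, t_max=1e5):
    """Schematic delta-flash curvature response: t^-(2+beta) over a finite
    window whose extent stands in for the fireball-radius time scale."""
    return t ** -(2.0 + beta) if t_min <= t <= t_max else 0.0

def exp_profile_flux(t, width, beta, n=4000):
    """Observed flux for an exponential intrinsic profile exp(-tau/width),
    folded with the delta-flash response (midpoint Riemann sum over 10 widths)."""
    total, dtau = 0.0, 10.0 * width / n
    for i in range(n):
        tau = (i + 0.5) * dtau
        total += math.exp(-tau / width) * delta_response(t - tau, beta) * dtau
    return total / width  # normalise the exponential kernel to unit area

beta, t = 1.0, 1000.0
# a narrow exponential (width << t) reproduces the delta-function curve
narrow = exp_profile_flux(t, 1.0, beta)
assert abs(narrow - delta_response(t, beta)) / delta_response(t, beta) < 0.05
```

for widths much smaller than the observation time the two curves agree to better than a few per cent , matching the similarity between figs . 1 and 2 ; only for large widths does the extra extent of the light curve become visible .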
in the case when both the width of the exponential function emission and the lorentz factor of the fireball are large , the resulting light curve would obviously deviate from that arising from the @xmath64-function emission in the domain of the falling - off tail , where the slope of the tail of the former light curve becomes obviously mild ( see the lower right panel of fig . 2 ) . besides this , no other characteristics can distinguish the two kinds of light curve . third , we check whether an observed light curve arising from the emission with a power - law spectrum has something to do with the intrinsic decaying behavior when the decay curve is a power - law of time . here , we also ignore the contribution from the rise phase of the emission of shocks , and then consider only an intrinsic power - law decay emission ( this will occur when the cooling is a power - law ) . assuming that the power - law decay time is infinite , the intrinsic decaying light curve is taken as @xmath109^{-\alpha _ { 0}}$ ] for @xmath110 , where @xmath111 is a constant which is the time when the power - law decay emission begins . in this case , equation ( 14 ) becomes @xmath112 with @xmath113^{2+\beta } [ ( t_{0}-t_{0,c})\gamma v / r_{c}+1-tv / r_{c}]}{[(t_{0}-t_{0,c})/(t_{0,0}-t_{0,c})]^{\alpha _ { 0}}}dt_{0}\qquad ( t_{0}>t_{0,0}),\]]@xmath114 and @xmath115 , where @xmath116 is a constant and observation time @xmath27 is confined by @xmath117 . without loss of generality , we take @xmath96 . equations ( 28)-(31 ) then become @xmath118 , @xmath119 , @xmath120 and @xmath121 . here , we take @xmath122 and @xmath101 to plot the light curves . for the lorentz factor , we take @xmath104 and @xmath123 . we consider two typical values of the fireball radius , @xmath124 and @xmath125 .
for the intrinsic temporal power - law index , we take @xmath126 , @xmath127 , @xmath128 , @xmath129 and @xmath130 respectively , and for the time when the power - law decay emission begins we take @xmath131 , @xmath132 and @xmath106 respectively . the corresponding light curves are displayed in fig . 3 . due to the contribution of @xmath133 , some new features are observed . there exist two kinds of light curve : a ) a @xmath0 curve followed by a shallow decay curve with its index being obviously smaller than @xmath134 ( type i ) ; b ) a @xmath0 curve followed by a very steep decay phase ( shown as a `` cutoff '' curve ) and then a shallow decay curve with its index being smaller than @xmath134 ( type ii ) . the curve of type ii tends to appear in cases when the intrinsic temporal power - law index is large , the lorentz factor is small and the onset of the intrinsic temporal power - law is early ( comparing left panels of sub - figures a , b and c ; or comparing right panels of the sub - figures ) . the very steep decay curve appears very close to the time position marked by that in the light curve arising from the @xmath64 function emission ( see the gray color lines in the figure ) ( in fact , relative to the latter , the former shifts to slightly larger time scales ) . this means that the time position of the very steep decay phase of the light curve of type ii is mainly determined by the radius of the fireball , which can serve as an indicator of the latter ( see also the discussion in the two previous subsections ) . for the light curve of type i , the start of the shallow decay phase can appear from very early time scale to around 300s for the fireball with radius @xmath125 , depending on the intrinsic temporal power - law index @xmath135 , the lorentz factor @xmath37 and the onset time @xmath136 of the intrinsic temporal power - law ( see the left panels of the sub - figures a , b and c ) . 
the smaller the values of @xmath135 , @xmath37 and @xmath136 , the larger the time scale of the start of the shallow decay phase . for the fireball with radius @xmath137 , conclusions drawn from type i light curves remain the same , except that the maximum of the start time of the shallow decay phase can appear at around 3000s . in both types i and ii , the slope of the shallow decay curve increases with increasing @xmath135 . as revealed in the lower left panel of fig . 3c , as a special case of type i , some light curves appear as single power - laws with indexes significantly smaller than @xmath134 . they are in fact the shallow decay phase of the corresponding light curves . the onset of the phase shifts to much smaller time scales due to the larger values of the lorentz factor @xmath37 and the onset time @xmath136 of the intrinsic temporal power - law for a given value of the fireball radius . one might notice that no upper limit of the intrinsic power - law decay emission has been considered above . let us now impose an upper limit on the intrinsic emission and check whether it could give rise to other noticeable features in the observed light curves . the intrinsic decaying light curve is assumed to be @xmath109^{-\alpha _ { 0}}$ ] for @xmath138 . also , we take @xmath96 . in this situation , equations ( 27 ) , ( 32 ) and ( 33 ) hold , while equations ( 34 ) and ( 35 ) are replaced by @xmath139 and @xmath140 . parameters adopted in producing fig . 3 are also adopted here to create the light curves . among those studied in fig . 3 , we consider only the following four cases : @xmath102 and @xmath131 ( see the upper panels of fig . 3a ) ; @xmath102 and @xmath141 ( see the upper panels of fig . 3b ) ; @xmath103 and @xmath141 ( see the lower panels of fig . 3b ) ; @xmath103 and @xmath142 ( see the lower panels of fig . 3c ) . for the new parameter , we take @xmath143 , @xmath144 and @xmath145 respectively . the corresponding light curves are displayed in figs .
4 - 7 , which correspond to the four cases respectively . [ fig . 4 caption ( fragment ) : ... and @xmath131 . here , we consider two values of the fireball radius and three time scales of the duration of the power - law emission ( see the description in each panel ) . the equations are the same as those adopted in fig . 3 , except that we use equations ( 36 ) and ( 37 ) to replace equations ( 34 ) and ( 35 ) , respectively . the symbols are the same as those in fig . 3 . ] [ fig . 5 caption ( fragment ) : ... and @xmath141 . ] [ fig . 6 caption ( fragment ) : ... and @xmath141 . ] [ fig . 7 caption ( fragment ) : ... and @xmath142 . ] the upper panels of these figures show that when the power - law emission is as short as @xmath146 times the typical time scale of the fireball radius ( say , when @xmath147 ) and the lorentz factor is not very large ( say , not larger than 100 ) , the light curves are similar to those arising from the @xmath64 function emission . this suggests that , in the framework of the curvature effect , light curve characteristics of emissions with time scales as short as @xmath146 times @xmath148 and with a lorentz factor not larger than 100 are hard to distinguish from those of a @xmath64 function emission ( this is in agreement with what is shown above ) . the lower panels of these figures arise from a longer duration of the power - law emission ( @xmath149 ) . some new features appear . a remarkable one is the light curve with a power - law decay curve followed by a shallow phase and then a steeper power - law phase ( type iii ) . connecting the two latter phases of this kind of light curve is a remarkable time break ( see the upper three black lines of each lower panel of the figures ) . this tends to happen when the intrinsic temporal power - law index is relatively small . otherwise , this kind of curve disappears ( see the two lower black lines in each lower panel of the figures ) .
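the type i morphology discussed earlier ( a @xmath0 curve followed by a shallow phase ) can be mimicked with a simple two - component toy model ( our own illustration with made - up coefficients , not the paper 's equations ( 27)-(35 ) ) : a steep curvature tail of index 2 + beta is eventually overtaken by a shallower component of smaller index inherited from the intrinsic power - law decay ; truncating the intrinsic emission would then add the further , steeper phase of type iii .

```python
import math

def type1_lightcurve(t, beta=1.0, alpha0=0.5, a=1e6, b=1.0):
    """Toy type-i light curve: a steep curvature tail of index 2+beta plus a
    shallow component of index alpha0 (coefficients a, b are made up)."""
    return a * t ** -(2.0 + beta) + b * t ** -alpha0

def log_slope(f, t, eps=1e-4):
    """Local decay index -dlnF/dlnt, estimated by central differences."""
    t1, t2 = t * (1.0 - eps), t * (1.0 + eps)
    return -(math.log(f(t2)) - math.log(f(t1))) / (math.log(t2) - math.log(t1))

# early on, the steep slope ~ 2+beta dominates; late on, the shallow
# slope ~ alpha0 takes over, as in the type-i curves of fig. 3
assert abs(log_slope(type1_lightcurve, 5.0) - 3.0) < 0.01
assert abs(log_slope(type1_lightcurve, 1e6) - 0.5) < 0.01
```

the break between the two slopes falls where the two components are equal , so in this toy picture the break time is set by the ratio of the coefficients rather than by the fireball geometry alone .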
when the duration of the power - law emission is neither very large nor very small ( say , @xmath150 ) , other forms of light curves are observed ( see the middle panels of these figures ) . in this situation , when the lorentz factor is large enough ( say , @xmath151 ) , light curves of type iii with shorter shallow phases appear ( see the middle panels of figs . 6 and 7 ) . this is expected since , in the framework of the curvature effect , the profile of a light curve depends only on the ratio between the observational time scale and the corresponding fireball radius time scale @xmath148 ( see qin et al . 2004 ) , and due to the time contraction effect ( note that the face - on part of the fireball surface moves towards us when the fireball expands ) a certain observational time scale corresponds to a longer co - moving time scale for a larger lorentz factor . in our analysis above , we consider only a simple power - law emission , for which the power - law index @xmath6 is assumed to be constant . as expected from the model , the observed spectrum would be a constant power - law with the same index and the rapid decay light curve would be the well - known @xmath0 curve . this constrains our application , since the spectra of many x - ray afterglows of grbs are found to vary with time and the corresponding light curves are found not to follow the @xmath0 curve ( see zhang et al . 2007 and the unlv grb group web - site http://grb.physics.unlv.edu ) . instead of a power - law , many light curves are bent . to apply our model , one must find a burst whose spectral index is constant and whose light curve follows ( or approximately follows ) the @xmath0 curve in its x - ray afterglow . after checking the data provided on the web - site http://grb.physics.unlv.edu ( up to march 25 , 2008 ) , we find that grb 050219a might be one that fits our simple model . the data show that the spectral index does not vary with time and that its mean is @xmath152 .
in addition , the first decay curve of the burst is a power - law curve with an index of approximately @xmath134 . this phase is followed by a shallow one which starts at about @xmath153 . comparing this light curve with those presented in fig . 3 , we surmise that , if it is due to the curvature effect , the fireball radius must be larger than @xmath125 , otherwise the start time of the shallow phase would be too small to meet the data ( see the left panels of figs . 3a , 3b and 3c ) . if the radius is @xmath154 , then the lorentz factor must be larger than @xmath155 , otherwise the start time of the shallow phase would be too large ( see the upper right panels of figs . 3a , 3b and 3c ) . as revealed in fig . 3 , there are four factors affecting the start time of the shallow phase : the fireball radius @xmath156 ; the lorentz factor @xmath37 ; the intrinsic temporal power - law emission index @xmath135 ; and the start time of the intrinsic temporal power - law emission @xmath136 . available on the mentioned web - site are 75 data points in the xrt light curve of grb 050219a . as an example of fitting , we ignore the three data points with the largest time scales , since the gap between them and the majority of the data set is too large and the domain showing a constant spectral index does not cover them ( see http://grb.physics.unlv.edu ) ( thus one cannot tell if the spectral index on the corresponding time scale is still constant ) . with the remaining 72 data points , we need only apply the equations adopted in the discussion of the case of the temporal profile of the intrinsic emission being a power - law function of time with the power - law decay time being infinite . the equations adopted in producing fig . 3 are employed to fit the data set , where the term @xmath157 , which dominates the magnitude of the theoretical curve , is determined by the fit .
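the power - law part of such a fit can be sketched as an ordinary least - squares regression in log - log space ( a simplified stand - in for the fit of the full model described here ; the synthetic data and the value beta = 1.0 below are our assumptions , not the grb 050219a measurements ) :

```python
import math, random

def fit_temporal_index(ts, fluxes):
    """Least-squares slope of log(flux) against log(t); returns the
    temporal decay index alpha assuming flux ~ t^-alpha."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

random.seed(0)
beta = 1.0  # assumed spectral index for illustration, not the measured one
ts = [10.0 * 1.3 ** i for i in range(30)]  # synthetic, log-spaced times
fluxes = [t ** -(2.0 + beta) * math.exp(random.gauss(0.0, 0.05)) for t in ts]
# the recovered temporal index is close to the curvature value 2 + beta
assert abs(fit_temporal_index(ts, fluxes) - (2.0 + beta)) < 0.1
```

a full fit of the model would instead vary the fireball radius , the lorentz factor and the intrinsic index together , as done in the text ; this regression only illustrates how the steep - phase index is measured against the @xmath48 prediction .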
Since both the fireball radius and the Lorentz factor are sensitive to the start time of the shallow phase, we deal with them one by one. We first fix the Lorentz factor, assuming it to be @xmath158, and allow @xmath156 and @xmath135 to vary, since not only the start time of the shallow phase but also the power-law index of the shallow phase must be accounted for. In addition, we take @xmath142, since the start time of the shallow phase is less sensitive to @xmath136 (see the right panels of Figs. 3a, 3b and 3c). The best-fit curve is shown in Fig. 8. One finds that the XRT data of GRB 050219A can be roughly accounted for by a power-law temporal emission from an expanding fireball surface. Note that the corresponding fitting parameters are not important, since other possibilities exist (see the discussion below). Next, we fix the fireball radius and take it as @xmath137 (in fact, we take @xmath79, which corresponds to @xmath74), allowing @xmath37 and @xmath135 to vary. Again, we take @xmath142. The best fit is displayed in Fig. 9. It shows that the result of the fit with a fixed fireball radius is hard to distinguish from that with a fixed Lorentz factor. Therefore, the resulting fitting parameters are not important at this stage of the investigation. There is a third choice: one could fix @xmath135 and allow @xmath156 and @xmath37 to vary; we suspect it might yield a similar result. [Fig. 8 caption] The solid line is the best fit to the data. The corresponding fitting parameters are @xmath159, @xmath160, and @xmath161 (see equation (27)); the @xmath162 of the fit is @xmath163. [Fig. 9 caption] The equations adopted for the fit are the same as those used in Fig. 8. The black solid line represents the best fit and the grey solid line is the solid line in Fig. 8. The corresponding fitting parameters are @xmath164, @xmath165, and @xmath166 (see equation (27)).
The @xmath162 of the fit is @xmath167. We investigate in this paper how an intrinsic emission with a power-law spectrum @xmath168, emitted from an expanding fireball surface, gives rise to an observed flux density when the full curvature effect is considered. We find that, if the power-law spectrum of the intrinsic radiation holds within the energy range that corresponds to the observed energy channel under Doppler shifting, the resulting spectrum is a power-law as well, with exactly the same index as the intrinsic spectrum, regardless of the actual form of the temporal profile of the intrinsic emission. Accompanying the power-law spectrum of index @xmath6 is a power-law light curve with index @xmath169, as expected from the curvature effect and known previously (see Fenimore et al. 1996; Kumar & Panaitescu 2000). This light curve can be observed if the intrinsic emission is extremely short or if the emission arises from an exponential cooling. In particular, we assume and consider a power-law cooling emission in the co-moving frame (for this emission, the intrinsic temporal profile is a power-law). We find that, if the power-law decay time is infinite, then due to the contribution of the power-law cooling in the co-moving frame, the observed light curve influenced by the full curvature effect contains two phases: a rapid decay phase, in which the light curve closely follows the well-known @xmath0 curve, and a shallow decay phase, in which the light curve is obviously shallower than in the rapid decay phase. If the power-law decay time is limited, there are several kinds of light curve. A remarkable one among them contains three power-law phases (see Figs.
4-7): the first is a rapid decay phase with an index equal to or larger than that of the @xmath0 curve; the second is a shallow decay phase with an index obviously smaller than that of the first phase; and the third is a rapid decay phase with an index equal to or less than that of the first phase. It might be possible that some of the GRBs showing such features in their afterglow light curves are due to expanding fireballs or face-on uniform jets (see, e.g., Qin et al. 2004) emitting with a power-law spectrum and a power-law cooling (of infinite or limited duration). In the view of co-moving observers, the dynamic process of the merger of shells would be somewhat similar to that occurring in external shocks (the main difference being that, in the case of internal shocks, a co-moving observer observes only a limited volume of medium, for which the density would evolve with time due to the enlargement of the fireball surface). Based on this argument, we suspect that the intrinsic emission of some of those bursts possessing in their early X-ray afterglows a rapid decay phase soon followed by a shallow decay phase and then another rapid decay phase might be somewhat similar to the well-known standard forward shock model (Sari et al. 1998; Granot et al. 1999), while for some of the bursts with a rapid phase followed by a shallow phase in their late X-ray afterglows the emission might be that of the standard forward shock model influenced by the curvature effect. Necessary conditions for identifying this mechanism include: a) during the period concerned, the spectral index should be constant; b) the temporal index in the first phase should be equal to or larger than that of the @xmath0 curve.
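The two necessary conditions just stated can be expressed as a simple numerical screen on a burst's measured indices. This is a hedged illustration only: the function name, the tolerance, and the example values are our assumptions, not thresholds from the paper.

```python
def passes_curvature_screen(spectral_indices, first_phase_temporal_index, tol=0.1):
    """Screen a burst for the curvature-effect interpretation:

    a) the spectral index beta is (roughly) constant over the period concerned;
    b) the first-phase temporal decay index alpha satisfies alpha >= 2 + beta,
       i.e. the first phase is at least as steep as the curvature-effect curve.
    The tolerance `tol` is an arbitrary illustrative choice.
    """
    beta_min, beta_max = min(spectral_indices), max(spectral_indices)
    if beta_max - beta_min > tol:
        return False  # condition (a) fails: the spectrum evolves with time
    beta = sum(spectral_indices) / len(spectral_indices)
    return first_phase_temporal_index >= 2 + beta - tol

print(passes_curvature_screen([1.0, 1.02, 0.98], 3.1))  # True
print(passes_curvature_screen([0.8, 1.4], 3.1))         # False: spectrum varies
```

A burst failing either check, like the spectrally evolving Swift bursts discussed later, would fall outside the simple constant-spectrum model.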
As an example of application, we employ the XRT data of GRB 050219A to perform a fit, since the spectral index @xmath6 of this burst does not vary with time and the first decay phase of its light curve is a power-law with an index of approximately @xmath134. The result shows that the XRT data of this burst can be roughly accounted for by a power-law temporal emission from an expanding fireball surface. Since various possibilities exist, the parameters obtained by the fit are not unique. To determine the parameters, we need other independent estimates. According to the analysis above, a reliable value of the fireball radius could be obtained if one observes a ``cut-off'' feature following the @xmath0 curve in the case of a constant spectral index @xmath6. Nevertheless, the start time of the shallow phase can place a limit on the fireball radius (see Figs. 3-7). For GRB 050219A, if its XRT data are indeed due to the curvature effect, the radius corresponding to this emission must be larger than @xmath170. We have checked that taking @xmath171 (which corresponds to @xmath172), @xmath173 and @xmath174 can also roughly account for the data. As the Lorentz factor is very small in this situation, we conclude that, if the X-ray afterglow of GRB 050219A does arise from the emission of an expanding fireball surface, the radius of the fireball associated with this emission would not be much less than @xmath175; otherwise @xmath37 would be too small to be regarded as relativistic motion. Why does a shallow phase emerge due to the curvature effect? We suggest that, while the first phase is dominated by the geometric effect and therefore obeys the @xmath0 curve, in the shallow phase the intrinsic emission overcomes the geometric effect and dominates the observed light curve. One might notice the @xmath126 lines (the dash-dot lines) in Fig. 3; the shallow-phase portions of these lines are parallel to the time axis.
As the radius grows linearly with time when a constant Lorentz factor is assumed (see Qin 2002), the emission from the fireball surface within a given solid angle increases as the square of time (the area of the surface is proportional to @xmath176). This in turn makes the total emission of @xmath177 from the surface constant. When the intrinsic emission overcomes the geometric effect in the shallow phase, one cannot expect a light curve of @xmath178; instead, we expect that of @xmath179 (see Fig. 3). It is known that a @xmath50-function intensity approximates the process of an extremely short emission. This occurs when the corresponding fireball shells are very thin and the cooling time is relatively short compared with the curvature time scale (for the time scale of the curvature effect, see Kocevski et al. 2003 and Qin & Lu 2005). Two light-curve characteristics are associated with a quasi-@xmath64-function emission. The first is a strict power-law decay curve with index @xmath134. The second is the limited time range of this curve. If the cooling time is not so short but the cooling is exponential, these characteristics are also expected (see Fig. 2). Note that the exponential cooling does not extend the @xmath0 curve to a much larger time scale when the cooling itself is not very large (say, in the case of @xmath154, @xmath180; see Fig. 2). Thus, one can estimate the fireball radius from bursts possessing these characteristics (note that the time scale of the @xmath0 curve is independent of the Lorentz factor; see equations (16) and (18)). For candidates of this kind of burst, we propose to fit the spectrum with @xmath181 and the light curve with @xmath182(t - t_{0})^{-(2+\beta)}, where both @xmath183 and @xmath184 are free parameters.
When the fit is good enough, we conclude that the intrinsic fireball emission is likely very short, or the cooling is exponential, and the corresponding fireball radius is @xmath185. When the expansion of the fireball is relativistic, we get @xmath186. Therefore, via this method, one obtains at least an upper limit on the fireball radius as long as the intrinsic emission is extremely short, or the cooling is exponential, and the intrinsic spectrum is a power-law. In the case of the intrinsic temporal power-law emission, when its temporal index is large enough (@xmath187), there is a ``cutoff'' curve located at exactly the same time position as the rapidly falling tail in the light curve of a @xmath64-function emission. This feature could be used to estimate the fireball radius as well. As presented in Zhang et al. (2007), several bursts seem to possess this ``cutoff'' feature: GRB 050724, GRB 060211A, GRB 060218, GRB 060427, GRB 060614, GRB 060729 and GRB 060814. If the proposed interpretation applies, their radii would range from @xmath84 to @xmath86. At least one reason prevents us from reaching such a conclusion: the spectra of these bursts happen to vary quite significantly within the light curves associated with this feature. This conflicts with what we assume in this paper (a constant intrinsic spectrum). We thus call for further investigation of this issue, taking into account the variation of the intrinsic spectrum, which might tell us whether the ``cutoff'' feature remains and/or its properties are maintained. As displayed in the literature, many Swift bursts are found to possess a bent light curve, instead of a strict power-law one, in the early X-ray afterglow (see, e.g., Chincarini et al. 2005; Liang et al. 2006; Nousek et al. 2006; O'Brien et al. 2006). In our analysis above, we seldom get bent light curves.
This must be due to the fact that the model concerned is too simple: we consider only emissions with constant spectra. When the intrinsic spectrum varies with time, one would expect bursts with both variable spectra and bent light curves (the well-known @xmath0 curve suggests that light curves of fireballs are strongly affected by the corresponding emission spectra). Indeed, we find that both variable spectra and bent light curves appear in the same period for many Swift bursts (see Zhang et al. 2007 and the UNLV GRB group web-site http://grb.physics.unlv.edu). Since for some bursts the early X-ray afterglow spectra evolve with time while for others the spectra show no significant temporal evolution (Zhang et al. 2007; Butler & Kocevski 2007b), we suspect that there might be two kinds of mechanism accounting for the X-ray afterglow emission. It seems likely that the observed variation of spectra is due to an intrinsic spectral evolution. The intrinsic spectral evolution would probably lead to deviations from the light curves studied above. We thus suspect that the bursts with no spectral evolution might have ``normal'' temporal profiles, while others might exhibit somewhat ``abnormal'' profiles. This seems to be the case according to Figs. 1-3 in Zhang et al. (2007) and Figs. 7-8 in Butler & Kocevski (2007b). Our simple model tends to account for the kind of bursts whose X-ray afterglow spectra do not evolve with time. However, for many bursts with roughly constant spectra and power-law light curves, the curves are too shallow to be accounted for by the @xmath0 curve (see http://grb.physics.unlv.edu). Our model seems too simple to account for the majority of the XRT light-curve data of Swift bursts. It is therefore necessary to explore more complicated cases.
For example, a varying Lorentz factor (which is expected when the intrinsic emission is long enough) might play a role. Would it affect the slope of the decaying curve? We look forward to seeing more investigations of this issue in the near future. Before ending this paper, we would like to point out that the quantity @xmath188 is the co-moving time measured by a co-moving observer when the fireball radius reaches @xmath32 (see Qin 2002). Note that @xmath189. Therefore, when assigning @xmath190, @xmath191 means that @xmath106 of co-moving time has passed after @xmath192. When one analyzes the emission associated with @xmath193, @xmath194 refers only to the emission at @xmath190, which is the co-moving time when @xmath192. Although we take quite small values of @xmath136 in the above analysis, this does not correspond to early emission when we adopt @xmath193 or @xmath195. Therefore, our analysis of emission from fireballs with @xmath196 does not put forward any constraint on the prompt emission. The conclusion obtained recently by Liang et al. (2007), that the characteristics of the prompt emission of bursts with a shallow decay phase are similar to those of bursts without one, is not violated by our findings. This work was supported by the National Science Fund for Distinguished Young Scholars (10125313), the National Natural Science Foundation of China (No. 10573005), and the Fund for Top Scholars of Guangdong Province (Q02114). We also thank the Guangzhou Education Bureau and the Guangzhou Science and Technology Bureau for financial support.
We explore the influence of the full curvature effect on the flux of the early X-ray afterglow of gamma-ray bursts (GRBs) in cases where the spectrum of the intrinsic emission is a power-law. We find that the well-known @xmath0 curve is present only when the intrinsic emission is extremely short or the emission arises from an exponential cooling. The time scale of this curve is independent of the Lorentz factor. The resulting light curve contains two phases when the intrinsic emission has a power-law spectrum and a temporal power-law profile of infinite duration. The first is a rapid decay phase, in which the light curve closely follows the @xmath0 curve. The second is a shallow decay phase, in which the power-law index of the light curve is obviously smaller than in the first phase. The start of the shallow phase is strictly constrained by the fireball radius and can, in turn, place a lower limit on the latter. In the case where the temporal power-law emission lasts a limited interval of time, there is a third phase after the @xmath0 curve and the shallow decay phase, which is much steeper than the shallow phase. As an example of application, we fit the XRT data of GRB 050219A with our model and show that the curvature effect alone can roughly account for this burst. Although the fitting parameters cannot be uniquely determined, owing to the various choices of fitting, a lower limit on the fireball radius of this burst can be estimated, which is @xmath1.
Police have charged Blaec Lammers, 20, of Bolivar, Mo., with plotting a mass shooting at a weekend screening of the new 'Twilight' movie or at a local Walmart. (Photo: Bolivar, Mo., Police Department) A 20-year-old Missouri man was charged Friday with planning a mass shooting this weekend at either a screening of the latest Twilight movie or a local Walmart. Blaec Lammers, of Bolivar, is accused of buying assault rifles and more than 400 rounds of ammunition in a plot mimicking the July mass shooting that killed 12 and wounded 57 at an Aurora, Colo., theater showing the premiere of the latest Batman movie, the Springfield News-Leader reports. Lammers was charged with first-degree assault, making a terroristic threat and armed criminal action. He is being held on $500,000 bail. His mother contacted police Thursday, saying Lammers had bought assault weapons and ammunition, and was concerned he "may have intentions of shooting people at the movie," The Twilight Saga: Breaking Dawn -- Part 2, according to court documents. He told police he had bought a ticket for a Sunday show. He then stated he might instead target the Walmart Supercenter in Bolivar "because if he ran out of ammunition he would be able to break the glass where the ammunition is stored and get more," writes the News-Leader, published by Gannett, USA TODAY's parent. Police Detective Dusty Ross, who interviewed Lammers, told the Bolivar Herald-Free Press that Lammers said he planned to surrender after the shooting at the B&B Bolivar Cinema 5. Daniel VanOrden, circuit general manager for B&B Theatres of Fulton, Mo., said in an e-mail to USA TODAY, "We appreciate the Bolivar Police Department for acting swiftly to avert any threat to the multiple potential targets, thus making this a non-incident." He did not say whether any additional security was planned for Twilight screenings. Walmart has not released a statement.
According to the police affidavit, signed by Ross, Lammers said he had bought two assault rifles for hunting. When asked about "recent shootings in the news," he "stated that he had a lot in common with the people. ... [He] stated he was quiet, kind of a loner, had recently purchased firearms and didn't tell anybody about it, and had homicidal thoughts," according to a probable-cause statement. Lammers said he bought the guns Monday and Tuesday -- police did not say where -- and went to nearby Aldrich on Tuesday to practice shooting. According to the police affidavit, he said he had never fired a gun before and "wanted to make sure he knew how they functioned." He also said he was currently not taking his medication, which was not specified. Ross said he does not believe anyone else was involved in the alleged plot. "I think it would have been something he would have done solely on his own," Ross told the Herald-Free Press. "You'll come across people who will make threats toward people and events but they don't actually take the substantial steps that he took in planning it out." The affidavit also stated that in 2009, Lammers "claimed he wanted to fatally stab a Walmart employee and followed the employee around the store before he was contacted by officers." KCTV5 says he was "not convicted in state court" but does not say whether he was charged.

A Bolivar man has been arrested and charged in Polk County Circuit Court with class B felony assault in the first degree, class C felony making a terrorist threat and felony armed criminal action after allegedly planning to shoot people with assault-type rifles at Bolivar Walmart. According to a probable cause statement by Det. Dusty Ross of the Bolivar Police Department, Blaec James Lammers, 20, initially planned on attending a showing of the "Twilight" movie at B&B Theatre in Bolivar and shooting people at it. He had planned on doing it at a showing tonight (Friday, Nov. 16), Ross said in a phone interview.
"Blaec Lammers stated that he had purchased tickets to go see the 'Twilight' movie ... and he was going to shoot people at the movie theatre on that night," the statement said. "He then got to thinking about it and realized that he might run out of ammunition, so he decided that he would go and shoot people at Walmart in Bolivar. ... He would walk into the store and just start shooting people at random and if he ran out of ammunition (he said he purchased 400 rounds), he would just break the glass where the ammunition is being stored and get some more and keep on shooting until the police arrived," the statement said. Lammers then planned to turn himself in to the police, according to the statement. Danny McGarrah, manager of the Bolivar B&B Theatre, declined to comment. A district manager for B&B Theatres has not responded to a request for information. Walmart Stores Inc. Spokeswoman Ashley Hardie said Friday night that the local store is cooperating with authorities. "We are working with police in providing any information for their investigation," Hardie said. Hardie did not comment on whether the store will increase security. "We don't want to say or do anything that's going to hinder their (Bolivar Police Department) investigation," she said. Lammers' mother, who is not named in the statement, contacted the police department around noon yesterday (Nov. 15) and said she was concerned about her son purchasing weapons that were very similar to the ones used in the movie theater shooting in Aurora, Colo., and that she was concerned he may have intentions of shooting people at the "Twilight" movie. She gave the police department a description of the vehicle Lammers was in — a lime green Volkswagen Beetle that belonged to his girlfriend, Ross said. Officer Michael Sly happened to see it at Sonic on Springfield Avenue and made contact with Lammers at about 1:20 p.m. yesterday. 
Ross said after speaking with Lammers, he did not believe anyone else was involved in the plans. "I think it would have been something he would have done solely on his own," Ross said. "You'll come across people who will make threats toward people and events but they don't actually take the substantial steps that he took in planning it out," Ross said. "He got the idea, he purchased the guns, purchased the ammo, went out and practiced using the gun, got his venue, got tickets to the venue, then thought 'Well, maybe if I run out of ammo, I need to pick a different venue.' "He had taken every step he needed to take except for actually committing the act," Ross said. Bolivar City Administrator Darin Chappell said the police department acted quickly and professionally in protecting public safety. "The main thrust from our perspective is that there is no credible threat for any place in Bolivar on Black Friday or any other day," Chappell said. "There is just no reason to be concerned about shopping in Bolivar or anywhere around here, as far as we can tell. "Everything is taken care of," Chappell said. "There's no need to be alarmed." The city has taken steps to notify any entity that should be notified of Lammers' arrest, Chappell said. Lammers allegedly planned something similar in 2009. "Lammers had previously stated that he wanted to stab a Walmart employee to death and went to Walmart and followed an employee around the store before he was contacted by officers," the statement said. In an interview with Ross, Lammers allegedly said he had a lot in common with individuals involved in recent shootings in the news. "Lammers stated that he was quiet, kind of a loner, had recently purchased firearms and didn't tell anybody about it and had homicidal thoughts," the statement said. The suspect was familiar with several high profile shootings, Ross said, discussing shootings at Virginia Tech, Arizona, Columbine, Colo., and Aurora.
Lammers acquired one assault-type gun on Monday, Nov. 12, and another on Tuesday, Nov. 13. Ross said he purchased both guns, a .223 and .22 similar in appearance to the AR-15s used in the Aurora shooting, from Walmart. He is held in the Polk County Jail on a $500,000 bond. The case is still under investigation, Ross said.
A worried mother may have prevented a mass shooting in Missouri, reports USA Today. Police in Bolivar, Mo., say 20-year-old Blaec Lammers admitted buying assault weapons with the intent to kill people at a showing of the new Twilight movie tonight. At some point, Lammers changed his plan and instead intended to shoot customers at the local Walmart, police tell the Bolivar Herald-Free Press. That way, if he ran out of ammo, "he would just break the glass where the ammunition is being stored and get some more and keep on shooting until the police arrived," says a police statement. Lammers' mother called police yesterday, worried because her son had bought weapons similar to those used in the Dark Knight massacre in Aurora, Colo., along with 400 rounds of ammunition. Police interviewed Lammers and say he eventually confessed to the plot, maintaining that he identified with Dark Knight suspect James Holmes as "kind of a loner." Lammers, charged with first-degree assault and making a terroristic threat, is being held on $500,000 bail.
SECTION 1. SHORT TITLE; TABLE OF CONTENTS. (a) Short Title.--This Act may be cited as the ``Puerto Rico Democracy Act of 2006''. (b) Table of Contents.--The table of contents for this Act is as follows: Sec. 1. Short title; table of contents. Sec. 2. Findings. Sec. 3. Federally sanctioned process for Puerto Rico's self- determination, including initial plebiscite and subsequent procedures. Sec. 4. Applicable laws and other requirements. Sec. 5. Availability of funds for the self-determination process. SEC. 2. FINDINGS. The Congress finds the following: (1) On November 30, 1992, President George H.W. Bush issued a Memorandum to Heads of Executive Departments and Agencies recognizing that ``As long as Puerto Rico is a territory . . . the will of its people regarding their political status should be ascertained periodically by means of a general right of referendum . . .''. (2) Consistent with this policy, on December 23, 2000, President William J. Clinton issued Executive Order 13183, establishing the President's Task Force on Puerto Rico's Status for purposes that included identifying the options for the territory's future political status ``. . . that are not incompatible with the Constitution and basic laws and policies of the United States . . .'', as well as the process for realizing such options. (3) President George W. Bush adopted Executive Order 13183 and, on December 3, 2003, amended it to require that the President's Task Force on Puerto Rico's Status issue a report ``. . . no less frequently than once every 2 years, on progress made in the determination of Puerto Rico's ultimate status.''. (4) On December 22, 2005, the Task Force appointed by President George W. 
Bush issued a report recommending that: (A) The Congress provide within a year for a federally sanctioned plebiscite in which the people of Puerto Rico would be asked to vote on whether they wish to remain a United States territory or pursue a constitutionally viable path toward a permanent nonterritorial status. (B) If the people of Puerto Rico elect to pursue a permanent nonterritorial status, Congress should provide for a subsequent plebiscite allowing the people of Puerto Rico to choose between one of the two permanent nonterritorial status options. Once a majority of the people has selected one of the two options, Congress is encouraged to begin a process of transition toward that option. (C) If the people of Puerto Rico elect to remain as a United States territory, further plebiscites should occur periodically, as long as a territorial status continues, to keep Congress informed of the people's wishes. SEC. 3. FEDERALLY SANCTIONED PROCESS FOR PUERTO RICO'S SELF- DETERMINATION, INCLUDING INITIAL PLEBISCITE AND SUBSEQUENT PROCEDURES. (a) First Plebiscite Under This Act.--The Puerto Rico State Elections Commission shall conduct a plebiscite in Puerto Rico during the 110th Congress, but not later than December 31, 2007. The ballot shall provide for voters to choose only between the following two options: (1) Puerto Rico should continue the existing form of territorial status as defined by the Constitution, basic laws, and policies of the United States. If you agree, mark here____. (2) Puerto Rico should pursue a path toward a constitutionally viable permanent nonterritorial status. If you agree, mark here ______. The two options set forth on the ballot shall be preceded by the following statement: Instructions: Mark the option you choose as each is defined below. Ballots with more than one option marked will not be counted. 
(b) Procedure If Majority in First Plebiscite Favors Continued Territorial Status.--If a majority vote in a plebiscite held under subsection (a) favors the continuation of the existing territorial status, the Puerto Rico State Elections Commission shall conduct additional plebiscites under subsection (a) at intervals of every 8 years from the date that the results of the prior plebiscite are certified unless a majority of votes in the prior plebiscite favors pursuing a permanent nonterritorial status. (c) Procedure If Majority in First Plebiscite Favors Permanent Nonterritorial Status.--If a majority vote in any plebiscite held under subsection (a) favors permanent nonterritorial status, the Puerto Rico State Elections Commission shall conduct a plebiscite under this subsection. The ballot on the plebiscite under this subsection shall provide for a vote to choose only between the following two options: (1) Statehood: Puerto Rico should be admitted as a State of the Union, on equal footing with the other States. If you agree, mark here____. (2) Sovereign nation: Puerto Rico should become a sovereign nation, either fully independent from or in free association with the United States under an international agreement that preserves the right of each nation to terminate the association. If you agree, mark here___. The two options set forth on the ballot shall be preceded by the following statement: Instructions: Mark the option you choose as each is defined below. Ballots with more than one option marked will not be counted. (d) Period for Holding Plebiscite.--If a majority vote in the first plebiscite under subsection (a) favors permanent nonterritorial status, the plebiscite under subsection (c) shall be held during the 111th Congress, but no later than December 31, 2009. 
If a majority vote in a plebiscite referred to in subsection (b) favors permanent nonterritorial status, the plebiscite under subsection (c) shall be held not later than 2 years after the certification of the majority vote in such plebiscite under subsection (b). SEC. 4. APPLICABLE LAWS AND OTHER REQUIREMENTS. (a) Applicable Laws.--All Federal laws applicable to the election of the Resident Commissioner of Puerto Rico shall, as appropriate and consistent with this Act, also apply to any plebiscite held pursuant to this Act. Any reference in such Federal laws to elections shall be considered, as appropriate, to be a reference to the plebiscites, unless it would frustrate the purposes of this Act. (b) Federal Court Jurisdiction.--The Federal courts of the United States shall have exclusive jurisdiction over any legal claims or controversies arising from the implementation of this Act. (c) Rules and Regulations.--The Puerto Rico State Elections Commission shall issue all rules and regulations necessary to carry out the plebiscites under this Act. (d) Eligibility.--Each of the following shall be eligible to vote in any plebiscite held under this Act: (1) All eligible voters under the electoral laws in effect in Puerto Rico at the time the plebiscite is held. (2) All United States citizens born in Puerto Rico who comply, to the satisfaction of the Puerto Rico State Elections Commission, with all Puerto Rico State Elections Commission requirements (other than the residency requirement) applicable to eligibility to vote in a general election. Persons eligible to vote under this subsection shall, upon request submitted to the Puerto Rico State Elections Commission prior to the plebiscite concerned, be entitled to receive an absentee ballot for such plebiscite. 
(e) Certification of Plebiscite Results.--The Puerto Rico State Elections Commission shall certify the results of each plebiscite held under this Act to the President of the United States and the Senate and House of Representatives of the United States. (f) Report After Second Plebiscite.--No later than 6 months after the plebiscite provided for in section 3(c), the President's Task Force on Puerto Rico's Status shall submit a report to the Congress, prepared in consultation with the Governor, the Resident Commissioner, the President of the Senate of Puerto Rico, and the Speaker of the House of Representatives of Puerto Rico, detailing measures that may be taken to implement the permanent nonterritorial status option chosen in the plebiscite together with such recommendations as the Task Force may deem appropriate. SEC. 5. AVAILABILITY OF FUNDS FOR THE SELF-DETERMINATION PROCESS. During the period beginning October 1, 2006, and ending on the date the President determines that all the plebiscites required by this Act have been held, the Secretary of the Treasury may allocate, from the funds provided to the Government of Puerto Rico under section 7652(e) of the Internal Revenue Code, not more than $5,000,000 to the State Elections Commission of Puerto Rico to be used for expenses of carrying out each plebiscite carried out under this Act, including for voter education materials certified by the President's Task Force on Puerto Rico's Status as not being incompatible with the Constitution and basic laws and policies of the United States. Such amounts may be as identified by the President's Task Force on Puerto Rico's Status as necessary for such purposes.
Puerto Rico Democracy Act of 2006 - Directs the Puerto Rico State Elections Commission to conduct a plebiscite in Puerto Rico during the 110th Congress, giving voters the option to vote for continued U.S. territorial status or for a path toward a constitutionally viable permanent nonterritorial status. Provides for subsequent procedures, depending on ballot results. Authorizes the Secretary of the Treasury to allocate certain funds for the self-determination process.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Protection and Advocacy for Veterans Act of 2016''. SEC. 2. ESTABLISHMENT OF GRANT PROGRAM TO IMPROVE MONITORING OF MENTAL HEALTH AND SUBSTANCE ABUSE TREATMENT PROGRAMS OF DEPARTMENT OF VETERANS AFFAIRS. (a) Establishment.--Commencing not later than 180 days after the date of the enactment of this Act, the Secretary of Veterans Affairs shall establish a grant program to improve the monitoring of mental health and substance abuse treatment programs of the Department of Veterans Affairs. (b) Grants.-- (1) Main grant.-- (A) Award.--In carrying out subsection (a), the Secretary shall award grants to four protection and advocacy systems under which each protection and advocacy system shall carry out a demonstration project to investigate and monitor the care and treatment of veterans provided under chapter 17 of title 38, United States Code, for mental illness or substance abuse issues at medical facilities of the Department. (B) Minimum amount.--Each grant awarded under subparagraph (A) to a protection and advocacy system shall be in an amount that is not less than $105,000 for each year that the protection and advocacy system carries out a demonstration project described in such subparagraph under the grant program. (2) Collaboration grant.-- (A) Award.--During each year in which a protection and advocacy system carries out a demonstration project under paragraph (1)(A), the Secretary shall award a joint grant to a national organization with extensive knowledge of the protection and advocacy system and a veterans service organization in the amount of $80,000. 
(B) Collaboration.--Each national organization and veterans service organization that is awarded a joint grant under subparagraph (A) shall use the amount of the grant to facilitate the collaboration between the national organization and the veterans service organization to-- (i) coordinate training and technical assistance for the protection and advocacy systems awarded grants under paragraph (1)(A); and (ii) provide for data collection, reporting, and analysis in carrying out such paragraph. (3) Authority.--In carrying out a demonstration project under paragraph (1)(A), a protection and advocacy system shall have the authorities specified in section 105(a) of the Protection and Advocacy for Individuals with Mental Illness Act (42 U.S.C. 10805(a)) with respect to medical facilities of the Department. (c) Selection.--In selecting the four protection and advocacy systems to receive grants under subsection (b)(1)(A), the Secretary shall consider the following criteria: (1) Whether the protection and advocacy system has demonstrated monitoring and investigation experience, along with knowledge of the issues facing veterans with disabilities. (2) Whether the State in which the protection and advocacy system operates-- (A) has low aggregated scores in the domains of mental health, performance, and access as rated by the Strategic Analytics Improvement and Learning database system (commonly referred to as ``SAIL''); and (B) to the extent practicable, is representative of both urban and rural States. (d) Reports.--The Secretary shall ensure that each protection and advocacy system participating in the grant program submits to the Secretary reports developed by the protection and advocacy system relating to investigations or monitoring conducted pursuant to subsection (b)(1)(A). The Secretary shall designate an office of the Department of Veterans Affairs to receive each such report. 
(e) Duration; Termination.-- (1) Duration.--The Secretary shall carry out the grant program established under subsection (a) for a period of five years beginning on the date of commencement of the grant program. (2) Termination of demonstration projects.--The Secretary may terminate a demonstration project under subsection (b)(1)(A) before the end of the five-year period described in paragraph (1) if the Secretary determines there is good cause for such termination. If the Secretary carries out such a termination, the Secretary shall award grants under such subsection to a new protection and advocacy system for the remaining duration of the grant program. (f) Authorization of Appropriations.--There is authorized to be appropriated to the Secretary to carry out the grant program under subsection (a) $500,000 for each of fiscal years 2017 through 2021. (g) Definitions.--In this section: (1) The term ``protection and advocacy system'' has the meaning given the term ``eligible system'' in section 102(2) of the Protection and Advocacy for Individuals with Mental Illness Act (42 U.S.C. 10802(2)). (2) The term ``State'' means each of the several States, territories, and possessions of the United States, the District of Columbia, and the Commonwealth of Puerto Rico. (3) The term ``veterans service organization'' means any organization recognized by the Secretary for the representation of veterans under section 5902 of title 38, United States Code.
Protection and Advocacy for Veterans Act of 2016 This bill directs the Department of Veterans Affairs (VA) to establish a five-year grant program to improve the monitoring of VA mental health and substance abuse treatment programs. The VA shall award grants to four protection and advocacy systems under which each recipient shall investigate and monitor VA facilities care and treatment of veterans with mental illness or substance abuse issues. Criteria for selecting recipients shall include whether the state in which the protection and advocacy system operates has low mental health, performance, and access scores. During each year in which a protection and advocacy system carries out a demonstration project, the VA shall award a joint grant to a national organization with extensive knowledge of the protection and advocacy system and a veterans service organization to: (1) coordinate training and technical assistance, and (2) provide for related data collection, reporting, and analysis. "Protection and advocacy system" means the state-established system to protect and advocate the rights of persons with developmental disabilities.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Outdoor Lighting Efficiency Act''. SEC. 2. FINDINGS. The Congress finds as follows: (1) Of all the electricity generated in the United States, 4.4 percent is consumed for outdoor lighting. (2) Outdoor lighting represents approximately 20 percent of all electricity consumed for lighting purposes in the United States. (3) Efficient outdoor lighting technologies provide light quality equal to or superior to other technologies in common use today. (4) Efficient outdoor lighting technologies often have longer product lifetimes than other technologies in common use today. (5) The use of efficient outdoor lighting technologies will substantially reduce waste and emissions from power generation, and reduce the cost of electricity used in certain commercial and government applications, such as lighting the Nation's roadways and parking lots. SEC. 3. DEFINITIONS. (a) Section 340(1) of the Energy Policy and Conservation Act (42 U.S.C. 6311(1)) is amended by striking subparagraph (L) and inserting the following: ``(L) Outdoor luminaires. ``(M) Outdoor high light output lamps. ``(N) Any other type of industrial equipment which the Secretary classifies as covered equipment under section 341(b).''. (b) Section 340 of the Energy Policy and Conservation Act (42 U.S.C. 6311) is amended by adding at the end the following: ``(25) The term `luminaire' means a complete lighting unit consisting of a lamp or lamps, together with parts designed to distribute the light, to position and protect such lamps, and to connect such lamps to the power supply. ``(26) The term `outdoor luminaire' means a luminaire that is listed as suitable for wet locations pursuant to Underwriters Laboratories Inc.
standard UL 1598 and is labeled as `Suitable for Wet Locations' consistent with section 410.4(A) of the National Electrical Code 2005, except for-- ``(A) luminaires designed solely for signs that cannot be used in general lighting applications; ``(B) portable luminaires designed for use at theatrical and television performance areas and construction sites; ``(C) luminaires designed for continuous immersion in swimming pools and other water features; ``(D) seasonal luminaires incorporating solely individual lamps rated at 10 watts or less; ``(E) luminaires designed solely to be used in emergency conditions; ``(F) landscape luminaires, with an integrated photoelectric switch or programmable time switch, with a nominal voltage of 15 volts or less; and ``(G) components used for repair of installed luminaires. ``(27) The term `outdoor high light output lamp' means a lamp that-- ``(A) has a rated lumen output not less than 2601 lumens and not greater than 35,000 lumens; ``(B) is capable of being operated at a voltage not less than 110 volts and not greater than 300 volts, or driven at a constant current of 6.6 amperes; and ``(C) is not a Parabolic Aluminized Reflector lamp. ``(28) The term `outdoor lighting control' means a device incorporated in a luminaire that receives a signal, from either a sensor (such as an occupancy sensor, motion sensor, or daylight sensor) or an input signal (including analog or digital signals communicated through wired or wireless technology), and can adjust the light level according to the signal.''. SEC. 4. STANDARDS. Section 342 of the Energy Policy and Conservation Act (42 U.S.C. 6313) is amended by adding at the end the following: ``(g) Outdoor Luminaires.-- ``(1) Each outdoor luminaire manufactured on or after January 1, 2011, shall have-- ``(A) a lighting efficiency of at least 50 lumens per watt; and ``(B) a lumen maintenance, calculated as mean rated lumens divided by initial lumens, of at least 0.6.
``(2) Each outdoor luminaire manufactured on or after January 1, 2013, shall have-- ``(A) a lighting efficiency of at least 70 lumens per watt; and ``(B) a lumen maintenance, calculated as mean rated lumens divided by initial lumens, of at least 0.6. ``(3) Each outdoor luminaire manufactured on or after January 1, 2015, shall have-- ``(A) a lighting efficiency of at least 80 lumens per watt; and ``(B) a lumen maintenance, calculated as mean rated lumens divided by initial lumens, of at least 0.65. ``(4) In addition to the requirements of paragraphs (1) through (3), each outdoor luminaire manufactured on or after January 1, 2011, shall have the capability of producing at least two different light levels, including 100 percent and 60 percent of full lamp output. ``(5)(A) Not later than January 1, 2017, the Secretary shall issue a final rule amending the applicable standards established in paragraphs (3) and (4) if technologically feasible and economically justified. Such a final rule shall be effective no later than January 1, 2020. ``(B) A final rule issued under subparagraph (A) shall establish efficiency standards at the maximum level that is technically feasible and economically justified, as provided in subsections (o) and (p) of section 325. The Secretary may also, in such rulemaking, amend or discontinue the product exclusions listed in section 340(23)(A) through (G), or amend the lumen maintenance requirements in paragraph (3) if he determines that such amendments are consistent with the purposes of this Act. ``(C) If the Secretary issues a final rule under subparagraph (A) establishing amended standards, the final rule shall provide that the amended standards apply to products manufactured on or after January 1, 2020, or one year after the date on which the final amended standard is published, whichever is later. 
``(h) Outdoor High Light Output Lamps.--Each outdoor high light output lamp manufactured on or after January 1, 2012, shall have a lighting efficiency of at least 45 lumens per watt.''. SEC. 5. TEST PROCEDURES. Section 343(a) of the Energy Policy and Conservation Act (42 U.S.C. 6314(a)) is amended by adding at the end the following: ``(10) Outdoor lighting.-- ``(A) With respect to outdoor luminaires and outdoor high light output lamps, the test procedures shall be based upon the test procedures specified in Illuminating Engineering Society procedure LM-79 as of March 1, 2009, and/or other appropriate consensus test procedures developed by the Illuminating Engineering Society or other appropriate consensus standards bodies. ``(B) If Illuminating Engineering Society procedure LM-79 is amended, the Secretary shall amend the test procedures established in subparagraph (A) as necessary to be consistent with the amended LM-79 test procedure, unless the Secretary determines, by rule, published in the Federal Register and supported by clear and convincing evidence, that to do so would not meet the requirements for test procedures under paragraph (2). ``(C) The Secretary may revise the test procedures for outdoor luminaires or outdoor high light output lamps by rule consistent with paragraph (2), and may incorporate as appropriate consensus test procedures developed by the Illuminating Engineering Society or other appropriate consensus standards bodies.''. SEC. 6. PREEMPTION. Section 345 of the Energy Policy and Conservation Act (42 U.S.C. 6316) is amended by adding at the end the following: ``(i)(1) Except as provided in paragraph (2), section 327 shall apply to outdoor luminaires to the same extent and in the same manner as the section applies under part B.
``(2) Any State standard that is adopted on or before January 1, 2015, pursuant to a statutory requirement to adopt efficiency standards for reducing outdoor lighting energy use enacted prior to January 31, 2008, shall not be preempted.''.
Outdoor Lighting Efficiency Act - Amends the Energy Policy and Conservation Act to include as "covered equipment" outdoor luminaires and outdoor high light output lamps, as defined in this Act. Specifies the lighting efficiency, lumen maintenance, and light level production capability required for each outdoor luminaire manufactured on or after January 1 of 2011, 2013, and 2015. Requires the Secretary of Energy (DOE), by January 1, 2017, to issue a final rule amending such efficiency standards to establish standards at the maximum level that is technically feasible and economically justified. Requires each outdoor high light output lamp manufactured on or after January 1, 2012, to have a lighting efficiency of at least 45 lumens per watt. Sets forth provisions governing energy efficiency test procedures for such luminaires and lamps. Provides that state standards that are adopted on or before January 1, 2015, pursuant to a requirement to adopt efficiency standards for reducing outdoor lighting energy use enacted prior to January 31, 2008, shall not be preempted.
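The phased thresholds above lend themselves to a simple compliance check. The sketch below encodes the section 4 tiers (50/70/80 lumens per watt and 0.6/0.65 lumen maintenance, keyed to manufacture date) and the 45 lumens-per-watt lamp standard; the function names, and the use of initial lumens as the numerator of the efficiency ratio, are illustrative assumptions rather than anything the bill specifies.

```python
from datetime import date

# Illustrative sketch (not part of the bill): the phased outdoor-luminaire
# standards from section 4, newest tier first. Each tier maps the date its
# requirements take effect to (minimum lighting efficiency in lumens per watt,
# minimum lumen maintenance = mean rated lumens / initial lumens).
LUMINAIRE_TIERS = [
    (date(2015, 1, 1), (80.0, 0.65)),
    (date(2013, 1, 1), (70.0, 0.60)),
    (date(2011, 1, 1), (50.0, 0.60)),
]

LAMP_MIN_EFFICIENCY = 45.0            # outdoor high light output lamps
LAMP_STANDARD_START = date(2012, 1, 1)

def luminaire_complies(manufactured, initial_lumens, mean_rated_lumens, watts):
    """Check a luminaire against the tier in force on its manufacture date."""
    efficiency = initial_lumens / watts          # assumption: initial lumens
    maintenance = mean_rated_lumens / initial_lumens
    for start, (min_eff, min_maint) in LUMINAIRE_TIERS:
        if manufactured >= start:
            return efficiency >= min_eff and maintenance >= min_maint
    return True  # manufactured before any tier applies

def lamp_complies(manufactured, lumens, watts):
    """Check an outdoor high light output lamp against the 2012 standard."""
    if manufactured < LAMP_STANDARD_START:
        return True
    return lumens / watts >= LAMP_MIN_EFFICIENCY
```

For example, a hypothetical 95 W luminaire with 7,000 initial lumens and 4,400 mean rated lumens (73.7 lm/W, maintenance 0.63) would pass the 2013 tier but fail the 2015 tier on both counts.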
recently , single - particle properties of electrons in quasi - one - dimensional ( q1d ) electron systems have attracted considerable interest . with the theoretical calculations of the quasiparticle renormalization factor@xcite and the momentum distribution function around the fermi surface , hu and das sarma@xcite have clarified that a clean 1d electron system shows the luttinger liquid behavior , but even the slightest amount of impurities restores the fermi surface and the fermi - liquid behavior remains . within a one - subband model , they evaluated the self - energy due to electron - electron coulomb interaction in unclean q1d systems by using the leading - order gw dynamical screening approximation.@xcite within such an approximation , hwang and das sarma@xcite obtained the band - gap renormalization in photoexcited doped - semiconductor quantum wires in the presence of plasmon - phonon coupling . in particular , the inelastic coulomb scattering rate plays an important role in relaxation processes of an injected electron in the conduction band . the lifetime of the injected electron , determined by this scattering rate , can be measured by femtosecond time - resolved photoemission spectroscopy.@xcite the relaxation processes of an injected electron occur through the scattering channels due to different excitations in the system , such as quasiparticle excitations , plasmons , and phonons.@xcite its lifetime provides information on the interactions between the electron and the different excitations . the relaxation mechanism is important because of its technological relevance , as most semiconductor - based devices operate under high - field and hot - electron conditions.@xcite on the other hand , tunneling effects have provided new devices formed by coupled q1d doped semiconductors@xcite and have attracted considerable theoretical interest because of their potential applicability .
in this work we present a theoretical study on the inelastic coulomb scattering rates in coupled bi - wire electron gas systems . particular attention will be devoted to the effects of weak resonant tunneling . we find that weak resonant tunneling can introduce strong intersubband inelastic coulomb scattering by emitting an acoustic plasmon . the emission of an optical plasmon , on the other hand , is provided by intrasubband scattering of injected electrons . the rest of the paper is organized as follows . in sec . ii , we present the theoretical formalism of the inelastic coulomb scattering rates in a multisubband q1d system of coupled quantum wires . sec . iii is devoted to the analysis of the inelastic - scattering rates for a bi - wire system in the absence of tunneling between the wires . as an extension of such calculations we show in sec . iv the numerical results in the presence of weak resonant tunneling . finally , we summarize our results in sec . v. we consider a two - dimensional system in the @xmath1 plane subjected to an additional confinement in the @xmath2-direction which forms two quantum wires parallel to each other in the @xmath3-direction . the confinement potential in the @xmath2-direction is taken to be of square - well type with barrier height @xmath4 and well widths @xmath5 and @xmath6 representing the first and the second wire , respectively . the potential barrier between the two wires is of width @xmath7 . the subband energies @xmath8 and the wave functions @xmath9 are obtained from the numerical solution of the one - dimensional schrödinger equation in the @xmath2-direction . we restrict ourselves to the case where @xmath10 and define @xmath11 as the gap between the two subbands . the interpretation of the index @xmath12 depends on tunneling between the two wires . when there is no tunneling , the wavefunction @xmath13 of the subband @xmath8 is localized in quantum wire @xmath12 . clearly , it is a wire index .
for two symmetric quantum wires , i.e. , @xmath14 one has @xmath15 or @xmath16 when tunneling occurs , the wavefunction of each subband spreads over the two quantum wires . in this case , @xmath12 is interpreted as a subband index . for two symmetric quantum wires with tunneling , the wavefunctions of the two lowest eigenstates are symmetric and antisymmetric . in this case , the two wires are in the resonant tunneling condition and the gap between the two subbands is denoted by @xmath17 . in a multisubband q1d system , the inelastic coulomb scattering rate for an injected electron in subband @xmath12 with momentum @xmath18 can be obtained from the imaginary part of the screened exchange self - energy @xmath19 $ ] , @xcite where , @xmath20 is the electron energy with respect to the fermi energy @xmath21 and @xmath22 the electron effective mass . at zero temperature , this self - energy can be obtained from the leading terms of the dyson's equation for the dressed electron green's function,@xcite and is given by @xmath23 = \frac{i}{(2\pi ) ^{2}}\int dq\int d\omega ^{\prime } \sum_{n_{1}}v_{nn_{1}n_{1}n}^{s}(q,\omega ^{\prime } ) g_{n_{1}}^{(0)}\left ( k+q,\xi _ { n}(k)-\omega ^{\prime } \right ) , \label{self1}\ ] ] where @xmath24 is the green's function of noninteracting electrons and @xmath25 is the dynamically screened electron - electron coulomb potential . the screened coulomb potential is related to the dielectric function @xmath26 and the bare electron - electron interaction potential @xmath27 through the equation @xmath28 similarly to the one - band model@xcite , the self - energy in eq . ( [ self1 ] ) can be separated into the frequency - independent exchange and the correlation part , @xmath29 = \sigma _ { n}^{ex}(k)+\sigma _ { n}^{cor}\left [ k,\xi _ { n}(k)\right ] .$ ] the exchange part is given by @xmath30 where @xmath31 is the fermi - dirac distribution function .
notice that @xmath32 is real because the bare electron - electron coulomb potential @xmath33 is purely real . therefore , one only needs to analyze the imaginary part of @xmath34 $ ] , since it gives rise to the imaginary part of the self - energy which we are interested in . after some algebra , we find that the coulomb inelastic - scattering rate for an electron in a subband @xmath12 with momentum @xmath18 is given by @xmath35 = \sum_{n^{\prime } } \sigma _ { n , n^{\prime } } ( k ) , \label{praver}\ ] ] with @xmath36\right\}\ ] ] @xmath37 where @xmath38 is the standard step function . in the above equation , the frequency integration has already been carried out , since the bare green's function @xmath39 can be written as a dirac delta function of @xmath40 . for the present coupled quantum wire systems with two occupied subbands , the multisubband dielectric function within the random - phase approximation ( rpa ) is given by @xmath41 the function @xmath42 is the 1d non - interacting irreducible polarizability at zero temperature for a system free from any impurity scattering . in the presence of impurity scattering , we use mermin's formula @xcite @xmath43 } \label{pol2}\ ] ] to obtain the polarizability including the effect of level broadening through a phenomenological damping constant @xmath44 . the coulomb potential @xmath45 is calculated by using the numerical solution of the electron wavefunction @xmath46 here , @xmath47 is the static lattice dielectric constant , @xmath48 is the electron charge , and @xmath49 is the zeroth - order modified bessel function of the second kind . the electron - electron coulomb interaction describes two - particle scattering events .
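the level - broadening correction invoked above has a standard closed form ( mermin's particle - conserving polarizability ) . the sketch below applies it to the textbook zero - temperature 1d lindhard polarizability in dimensionless units ( hbar , the effective mass , and the fermi momentum set to 1 ) ; the single - subband polarizability model and the units are illustrative assumptions , not the paper's actual multisubband parameters .

```python
import numpy as np

def pi0(q, w):
    # textbook zero-temperature 1d lindhard polarizability (illustrative;
    # dimensionless units hbar = m = k_F = 1, so xi(k) = k**2/2). the frequency
    # w may be complex; the particle-hole band edges are q**2/2 -+ q.
    wm = q * q / 2.0 - q
    wp = q * q / 2.0 + q
    return (1.0 / (np.pi * q)) * np.log((w * w - wm * wm) / (w * w - wp * wp) + 0j)

def pi_mermin(q, w, gamma):
    # mermin's particle-conserving polarizability with phenomenological
    # damping gamma: evaluate pi0 at the shifted frequency w + i*gamma, then
    # correct with the static value so the local particle number is conserved.
    wc = w + 1j * gamma
    p = pi0(q, wc)
    p_static = pi0(q, 0.0)
    return ((1.0 + 1j * gamma / w) * p) / (1.0 + (1j * gamma / w) * p / p_static)
```

a quick consistency check on any such implementation is the gamma -> 0 limit , in which pi_mermin must reduce to pi0 .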
we observe the following characteristics of the electron - electron coulomb interaction in the coupled quantum wires representing different physical scattering processes : @xmath50 @xmath51 and @xmath52 represent the scattering in which the electrons remain in their original wires or subbands ; @xmath53 represent the scattering in which both electrons change their wire or subband indices ; @xmath54 and @xmath55 @xmath56 indicate the scattering in which only one of the electrons suffers the interwire or intersubband transition . when there is no tunneling , @xmath57 . clearly , they are responsible for tunneling effects . we also notice that , for two symmetric quantum wires in resonant tunneling , @xmath58 and @xmath56 vanish . in the following , we will analyze the inelastic coulomb scattering rate of electrons in two coupled symmetric quantum wires ( @xmath59 ) in the absence of tunneling . as we discussed before , when there is no tunneling between two quantum wires , @xmath60 only the coulomb interactions @xmath61 , @xmath62 , and @xmath63 contribute to the electron - electron interaction . furthermore , the potentials @xmath61 and @xmath62 are responsible for the intrawire interaction and @xmath64 due to the symmetry property of the two wires . the potential @xmath63 is responsible for the interwire coulomb interaction . if we assume that the two wires have an identical electron density @xmath65 , the total electron density in the system is @xmath66 . in this case , the two quantum wires have the same fermi level @xmath21 so that @xmath67 . therefore , from eqs . ( [ diel ] ) and ( [ diel1 ] ) , we obtain the screened intrawire coulomb potential @xmath68 given by @xmath69[1-(v_{a}-v_{c})\pi _ { 0}]}. \label{vascreened}\ ] ] the denominator in the above equation is the determinant of the dielectric matrix @xmath70 . the equation @xmath71 yields the plasmon dispersions of the electron gas system .
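the factorized denominator above says the two plasmon branches are the zeros of 1-(v_a+v_c)\pi_0 ( optical , in - phase ) and 1-(v_a-v_c)\pi_0 ( acoustic , out - of - phase ) . a minimal numerical sketch of this mode condition , in dimensionless units ( hbar = m = k_F = 1 ) with k0 - type model potentials whose effective width , wire separation , and strength are invented for illustration :

```python
import numpy as np
from scipy.special import k0        # modified bessel function of the second kind
from scipy.optimize import brentq

def pi0(q, w):
    # zero-temperature 1d lindhard polarizability; real and positive for
    # frequencies above the particle-hole continuum edge w_+ = q**2/2 + q,
    # where it diverges logarithmically and then decays monotonically.
    wm = q * q / 2.0 - q
    wp = q * q / 2.0 + q
    return (1.0 / (np.pi * q)) * np.log((w * w - wm * wm) / (w * w - wp * wp))

# illustrative bare potentials: intrawire v_a and interwire v_c of the usual
# k0 form; effective width, separation, and strength are assumed values.
w_eff, d_sep, alpha = 0.5, 2.0, 2.0

def v_a(q): return alpha * k0(q * w_eff)
def v_c(q): return alpha * k0(q * d_sep)   # d_sep > w_eff, so v_c < v_a

def plasmon(q, coupling):
    # root of 1 - coupling * pi0(q, w) = 0 just above the continuum edge;
    # since pi0 -> +inf there and -> 0 at large w, a root always exists
    # for any positive coupling, so brentq has a guaranteed sign change.
    wp = q * q / 2.0 + q
    return brentq(lambda w: 1.0 - coupling * pi0(q, w), wp * (1.0 + 1e-9), 50.0)

q = 0.3
w_optical = plasmon(q, v_a(q) + v_c(q))    # in-phase (optical) mode
w_acoustic = plasmon(q, v_a(q) - v_c(q))   # out-of-phase (acoustic) mode
```

because v_c > 0 , the optical branch always lies above the acoustic one and both sit above the continuum edge ; shrinking the separation d_sep increases v_c and widens the gap between the branches , consistent with the behavior described for fig . 1 .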
the plasmons result in singularities in the screened coulomb potential which give the most important contribution to the inelastic coulomb scattering rate . according to eq . ( [ sigma ] ) , the intrawire scattering rate of the symmetric bi - wires with identical electron density becomes @xmath72 \right\ } \left\ { \theta \left ( -2kq - q^{2}\right ) -\theta \left ( e_{fn}-k^{2}-q^{2}-2kq\right ) \right\ } , \label{gamma11}\ ] ] for @xmath73 and @xmath74 where @xmath75 is the subband fermi energy . notice that @xmath76 for two symmetric quantum wires . it is obvious that @xmath77 . in the absence of tunneling , the interwire scattering rates @xmath78 and @xmath79 are zero because the transition of an electron from one wire to the other is impossible . therefore , we have @xmath80 @xmath81 . but the interwire coulomb interaction @xmath63 influences the collective excitations in the system leading to two different plasmon modes , i.e. , the optical and acoustic modes . consequently , it affects the inelastic - scattering rates . we know that the zeros of the two parts @xmath82 and @xmath83 in the denominator in equation ( [ vascreened ] ) yield the optical and acoustic plasmon mode dispersions , respectively . to better understand the scattering mechanism , we show in fig . 1 the collective excitation dispersion relations of the two coupled symmetric gaas quantum wires of width @xmath84 with different barrier widths . in the calculations , we consider the barrier height @xmath85 which does not permit tunneling between the wires . the plasmon modes in fig . 1 correspond to the different scattering channels through which the injected electron can lose energy . we find a higher ( lower ) frequency plasmon branch which represents the optical ( acoustic ) plasmon mode @xmath86 ( @xmath87 ) . the intrawire quasiparticle excitation continuum @xmath88 ( shaded region ) is also indicated in the figure .
the thin - solid curve is the plasmon dispersion of a single quantum wire with electron density @xmath89 . it corresponds to the situation in which the distance between the two wires is infinity ( @xmath90 ) or @xmath91 in this case , the plasmon mode has the dispersion relation @xmath92 at @xmath93.@xcite as the distance between the wires decreases , the potential @xmath94 increases . a finite @xmath63 leads to a gap between the two plasmon modes . when the two wires are close enough , the acoustic mode develops a linear wave vector dependence . for @xmath93 , @xmath95 with @xmath96 $ ] where @xmath97 is the fermi velocity and @xmath98 , whereas the optical plasmon still keeps its well - known 1d dispersion relation , @xmath99.@xcite notice that the interwire coulomb interaction @xmath63 , depending on the distance between the two wires , is responsible for the behavior of the wavevector dependence of the acoustic mode . as we will see , this affects significantly the inelastic coulomb scattering rate due to the acoustic plasmons . fig . 2 shows the numerical results for the inelastic plasmon scattering rate in the coupled wires corresponding to fig.1(a ) with a very small broadening constant @xmath100 mev . we observe two scattering peaks . the lower ( higher ) one is due to the acoustic ( optical ) plasmon scattering . the abrupt increases of the scattering rate at the threshold electron momenta @xmath101 and @xmath102 correspond to the onset of the scattering of the acoustic and optical plasmon modes , respectively . the higher scattering peak due to the optical plasmon mode is always divergent at the onset @xmath103 and @xmath104 similarly to that in the single wire . but the behavior of the lower scattering peak is dependent on the distance between the two wires which is directly related to the dispersion relation of the acoustic plasmon mode at small @xmath0 .
for small @xmath105 the acoustic mode has a linear wavevector dependence , leading to a finite scattering rate at the onset @xmath106 . with increasing @xmath107 , the acoustic mode loses its linear @xmath0 dependence , resulting in a divergence at the onset of the scattering . to clarify this behavior , we show in the inset the energy - versus - momentum - loss curve @xmath108 for @xmath109 @xmath110 ( thin - solid curve ) and @xmath111 @xmath110 ( thin - dashed curve ) in the system with @xmath112 . along these curves , momentum and energy conservation are obeyed and the electron relaxation is allowed . the dispersions of the optical and acoustic plasmon modes @xmath113 and @xmath114 are also given by thick long - dashed curves in the same figure . for @xmath103 ( @xmath101 ) , the thin - solid ( thin - dashed ) curve intersects the optical ( acoustic ) mode dispersion curve at @xmath115 @xmath116 . this means that the injected electron with momentum @xmath102 ( @xmath101 ) can emit one optical ( acoustic ) plasmon of frequency @xmath117 ( @xmath118 ) . notice that the slopes of the curves @xmath119 ( @xmath120 ) and @xmath121 ( @xmath122 ) are equal at @xmath115 ( @xmath123 ) . for the optical plasmon mode , the intersection always occurs at finite @xmath124 because the optical plasmon goes as @xmath125 for small @xmath0 . the divergence due to the optical plasmon scattering is similar to that in the single quantum wire@xcite which results from the coupling of the initial and final states via plasmon emission at @xmath103 . however , for the acoustic plasmon mode with linear @xmath0-dependence , @xmath126 because @xmath127 at @xmath93 . in this case , one can obtain @xmath128 . because the plasmon mode has vanishing oscillator strength at @xmath129 , the emission of the acoustic plasmon of the wavevector @xmath130 cannot produce a divergence in the inelastic - scattering rate .
with increasing distance between the two wires , the acoustic plasmon mode loses its linear @xmath0-dependence and approaches the dispersion of the optical plasmon mode . consequently , @xmath123 becomes finite and the scattering rate is divergent at the threshold momentum @xmath101 . in fig . 3 , we show the scattering rates in the same structures as in fig . 2 but with a higher electron density @xmath131@xmath110 . we see that , in the systems of higher electron density , the scattering thresholds shift to larger momenta and the scattering is enhanced . in figs . 2 and 3 , we have not shown the inelastic - scattering rate due to virtual emission of quasiparticles , which would occur below the threshold wavevector . it is known that in a one - subband quantum wire , the contribution of the quasiparticle excitations to the inelastic coulomb scattering rate is completely suppressed due to the restrictions of energy and momentum conservation . consequently , the scattering rate is zero until the onset of the plasmon scattering at a threshold @xmath132 . @xcite the quasiparticle excitations contribute to the inelastic scattering only when level broadening is introduced . these contributions are negligible when the broadening constant is small . although we are dealing here with two coupled quantum wires , the interwire coulomb interaction influences neither the quasiparticle excitations nor their contributions to the inelastic scattering . as far as the effect of the phenomenological broadening constant @xmath44 is concerned , we show in fig . 4 the inelastic - scattering rate for different values of @xmath44 . a finite broadening @xmath44 in the system reflects the breaking of translational invariance due to the presence of impurities .
this fact is responsible for relaxing momentum conservation , permitting inelastic scattering via quasiparticle and plasmon excitations for @xmath133 . we show such a contribution in the inset of fig . 4 . for @xmath134 @xmath110 , conservation of energy and momentum does not permit the opening of any excitation channel . this means that the injected electron has an infinite lifetime at the fermi surface , which is restored by impurity effects . in the remainder of this section , we discuss the effect of weak tunneling on the inelastic coulomb scattering rates in the two coupled symmetric quantum wires studied in the previous section . when tunneling occurs , an energy gap @xmath135 opens up between the two lowest subbands , which have symmetric and antisymmetric wavefunctions in the @xmath2-direction about the center of the barrier . in this case , only the subband index is a good quantum number . as we have seen in section ii , @xmath58 and @xmath136 vanish in two symmetric quantum wires in resonant tunneling . but @xmath137 is finite and responsible for the tunneling effects on the coulomb scattering . in the weak resonant tunneling condition , one finds @xmath138 . after some algebra , we obtain @xmath139 @xmath140 @xmath141 and @xmath142 . from the above equations and equation ( [ sigma ] ) , we can obtain the inelastic coulomb scattering rates in the presence of tunneling . we also notice that the zeros of the denominators in equations ( [ v1 ] ) and ( [ v2 ] ) yield the optical plasmon dispersion and those in equations ( [ v12 ] ) and ( [ v21 ] ) yield the acoustic plasmon dispersion . this indicates that the optical plasmons contribute only to the intrasubband scattering @xmath143 and @xmath144 , and the acoustic plasmons only to the intersubband scattering @xmath145 and @xmath146 . we consider two coupled gaas / al@xmath147ga@xmath148as ( @xmath149 mev ) quantum wires of widths @xmath150 separated by a barrier of @xmath151 .
in this case , we find @xmath152 mev , indicating a very weak resonant tunneling . we show in fig . 5(a ) both the inter- and intra - subband scattering rates . the intrasubband scattering rates @xmath143 and @xmath153 , induced by the emission of optical plasmons , are very similar to those in the absence of tunneling . it is also not difficult to understand that @xmath154 because , above the threshold of the optical plasmon emission , the plasmon frequency is much larger than @xmath135 and , consequently , @xmath155 . on the other hand , the tunneling introduces the intersubband scattering rates @xmath145 and @xmath146 and strongly modifies the mechanism of the acoustic plasmon emission . in order to clarify these results , we plot the corresponding acoustic plasmon dispersion relation as a thick - dashed curve in fig . 5(b ) . the acoustic mode develops a plasmon gap at zero @xmath0 due to the tunneling effect.@xcite the thin lines indicate the intersubband energy- vs momentum - loss curves at the onsets of the acoustic plasmon scattering . they are determined by conservation of energy and momentum , given by @xmath156 for @xmath157 ( thin - dashed curve ) , and @xmath158 for @xmath159 ( thin - dotted curve ) , where @xmath160 and @xmath161 are the threshold wavevectors above which the injected electron can be transferred to a different subband by emitting an acoustic plasmon . the @xmath162 ( thin - dotted curve ) intersects the acoustic plasmon dispersion at a small wavevector @xmath163@xmath164 . the scattering process is similar to the acoustic plasmon scattering in the absence of tunneling discussed in the previous section . but now the acoustic plasmon mode has a finite frequency and a finite oscillator strength at @xmath93 , resulting in a small divergence at @xmath161 . on the other hand , the intersection between the @xmath165 ( thin - dashed curve ) and the acoustic plasmon dispersion occurs at a considerably larger wavevector @xmath166 @xmath110 .
the scattering mechanism is more similar to that of the intrasubband scattering and produces a pronounced divergence at @xmath167 . finally , we would like to show the tunneling effects on the total inelastic coulomb scattering rates @xmath168 . fig . 6 gives the total scattering rates in ( a ) the absence and ( b ) the presence of tunneling between two quantum wires with @xmath169 and @xmath170 . we observe that a weak resonant tunneling does not much influence the optical - plasmon scattering , but it strongly affects the acoustic - plasmon scattering . the acoustic - plasmon scattering for an injected electron in the lowest subband is enhanced significantly and a quite strong scattering peak appears . for an injected electron in the second subband , tunneling introduces a small divergence in the scattering rate and shifts the scattering threshold to a lower wavevector . we have calculated the inelastic coulomb scattering rates of two coupled q1d electron gas systems within the gw approximation . the screened coulomb potential was obtained within the rpa . the coulomb interaction between the two quantum wires leads to the optical and acoustic plasmon modes and , consequently , two scattering peaks appear due to the scattering of the two modes . we found that the scattering of the optical plasmons in two coupled quantum wires is very similar to the plasmon scattering in a single wire because both plasmon modes have similar dispersion relations at small @xmath0 . the scattering rate is divergent at the onset of the optical plasmon scattering . however , the acoustic plasmon mode does not produce such a divergence when it has a linear @xmath0-dependence at small @xmath0 . this happens when the two wires are close enough . furthermore , we studied the tunneling effects on the inelastic scattering . a weak resonant tunneling was introduced between the wires .
such a tunneling lifts the degeneracy of the two subbands originating from the two quantum wires and also opens a small plasmon gap in the acoustic mode at @xmath171 . moreover , intersubband scattering appears . we show that , in this case , the optical plasmons are responsible only for the intrasubband scattering and the acoustic plasmons only for the intersubband scattering . a weak tunneling significantly enhances the acoustic plasmon scattering for an injected electron in the lowest subband . this work is supported by fapesp and cnpq , brazil .
we report a theoretical study of the inelastic coulomb scattering rate of an injected electron in two coupled quantum wires in quasi - one - dimensional doped semiconductors . two peaks appear in the scattering spectrum due to the optical and the acoustic plasmon scattering in the system . we find that the scattering rate due to the optical plasmon mode is similar to that in a single wire , but the acoustic plasmon scattering depends crucially on its dispersion relation at small @xmath0 . furthermore , the effects of tunneling between the two wires on the inelastic coulomb scattering rate are studied . we show that a weak tunneling can strongly affect the acoustic plasmon scattering .
it is a well - known fact that an emitting source moving at a velocity exceeding the wave - propagation velocity in the medium will induce a shock wave . prime examples of this phenomenon are the sonic boom emitted by a supersonic airplane , the bow wave of a moving ship , and cherenkov radiation emitted by an electric charge moving at almost the vacuum speed of light in a medium with an appreciable index of refraction . in this work we show that a similar effect occurs in the emission from a fast moving electric current . it is suggested that this effect manifests itself in the emission of electromagnetic waves from particle cascades in the atmosphere of the earth initiated by ultra - high energy ( uhe ) cosmic rays , with energies in excess of @xmath0ev . an uhe cosmic ray entering the atmosphere of the earth creates a cascade of particles , called an extensive air shower ( eas ) . in this cascade there are copious amounts ( @xmath1 , depending on the initial energy of the cosmic ray ) of electrons and positrons , forming a small plasma cloud . this cloud , with a typical size of less than 1 m , moves at almost the speed of light towards the earth . the magnetic field of the earth , by means of the lorentz force , induces a drift velocity for the leptons which is perpendicular to the direction of the initial cosmic ray and opposite for electrons and positrons . as a result an electric current is created in the fast moving plasma cloud . the strength of the induced electric current is roughly proportional to the number of charged particles in the plasma cloud , since the induced drift velocity varies little with height . this macroscopic picture @xcite was recently confirmed @xcite to agree with a microscopic description @xcite . even if the index of refraction of air were equal to that of vacuum , this varying electric current would emit electromagnetic waves , and coherent emission occurs at wavelengths longer than the size of the charge cloud , i.e.
for radio frequencies @xmath2mhz @xcite . the geomagnetic emission mechanism @xcite has been confirmed by data @xcite . in addition to the induced current , the plasma cloud has a net charge excess which also radiates . the polarization direction of the radiation distinguishes the emission due to the charge excess from that due to the geomagnetic current @xcite . since charge - excess radiation is generally smaller in intensity , we will concentrate in this work on geomagnetic emission . as is well known , the propagation speed of electromagnetic waves is @xmath3 where @xmath4 denotes the velocity of light in vacuum . in this work we investigate the effect of the index of refraction of air , @xmath5 , on the emission , following ref . @xcite . the effects of cherenkov radiation from eas have also been addressed in ref . @xcite . in this work we show that for realistic values of @xmath5 the cherenkov effect introduces distinct features in the ground pattern of the emitted radiation . as described , a cosmic ray entering the atmosphere induces an eas , a plasma cloud moving at almost the speed of light , where the magnetic field of the earth induces a net electric current in the plasma @xcite . from classical electrodynamics @xcite we obtain the liénard - wiechert potentials for a point source following a trajectory @xmath6 and an observer at rest at @xmath7 , a^_pl(t,-(t))= _ t = t , [ eq : vec - pl ] where @xmath8 denotes the retarded time corresponding to the time the signal was emitted from the moving charge distribution and @xmath9 is the retarded distance @xcite . in the point - like approximation ( denoted by @xmath10 ) , where the size of the plasma cloud is infinitesimal , the four - current is j^_pl(t,)=j^(t)^3(-(t ) ) , where @xmath11 and @xmath12 carries a longitudinal component due to the net charge excess ( which we will ignore ) and a transverse component due to the opposite drift of electrons and positrons induced by the earth 's magnetic field .
the transverse component is proportional to the number of charged particles at a given height , @xmath13 where @xmath14 is the distance to the earth 's surface as measured along the shower path . the electromagnetic fields at the observer are given as e^i = c ( \partial^i a^0 - \partial^0 a^i ) , [ eq : e - field ] at @xmath15 , where at @xmath16 the shower hits the earth . the distance @xmath17 is equal to the transverse distance to the shower axis . the velocity of the charge cloud is @xmath18 where we set @xmath19 in the following . for the special case @xmath20 , the retarded distance @xmath21 reduces to @xmath22 which is finite . the retarded time is defined by the light - cone condition @xmath23 where the distance @xmath24 is the optical path length between the source located at @xmath25 and the observer at @xmath26 . in reality the index of refraction of air depends on density and thus on height , @xmath27 , hence light follows curved trajectories , and @xmath28 is the integral @xmath29 along the light curve from @xmath25 to @xmath26 . as a function of the observer time @xmath30 for three different values of the index of refraction . the dashed line gives the shower profile as a function of @xmath31 for a @xmath32ev proton - induced shower . ] [ fig : trett ] in fig . [ fig : trett ] the emission height , @xmath14 , is plotted as a function of the observer time @xmath30 for an observer 100 m from the shower axis for three choices of the index of refraction @xmath5 . for the case @xmath20 , the plot of the retarded time ( red drawn curve ) shows that the retarded time is a single - valued function and that the earliest signals come from the top of the shower . for an index of refraction deviating from unity ( @xmath33 , @xmath27 ) the function is composed of several branches ( dashed - magenta and dotted - blue curves in fig . [ fig : trett ] ) limited by the critical times @xmath34 , where the derivative @xmath35 becomes infinite .
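the optical path length entering the light - cone condition can be sketched numerically . the sketch below is an illustration under stated assumptions : straight - line propagation ( ray bending neglected , a good approximation for a refractivity much smaller than one ) , a vertical shower axis , and an exponential refractivity with assumed sea - level value 2.92e-4 and scale height 8400 m .

```python
import math

def optical_path(h, d, delta0=2.92e-4, h0=8400.0, steps=10000):
    """optical path length (integral of n along the ray) from an
    emission point at height h on the shower axis to an observer on
    the ground at transverse distance d, for an assumed exponential
    refractivity n(z) = 1 + delta0*exp(-z/h0).  a straight ray is
    assumed; the refractivity term is integrated with the midpoint
    rule along the line, on which the height falls linearly:
    z(l) = h*(1 - l/s)."""
    s = math.sqrt(h * h + d * d)  # geometric path length
    dl = s / steps
    acc = 0.0
    for i in range(steps):
        l = (i + 0.5) * dl
        acc += delta0 * math.exp(-h * (1.0 - l / s) / h0) * dl
    return s + acc
```

the retarded time then follows from the light - cone condition : the observer time equals the emission time plus optical_path divided by the vacuum speed of light .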
in @xcite it was already shown that a singularity in this derivative is related to a singularity in our vector potential since @xmath36 . this singularity ( the branch point ) is well known and corresponds to cherenkov emission . since the critical time is the time at which the cherenkov radiation is seen by the observer , it is henceforth called the cherenkov time . in the case of a constant and finite refractivity @xmath37 , the cherenkov time @xmath38 is given as c t_c = d \sqrt{n^2 - 1 } , [ eq : tk ] with corresponding retarded time @xmath39 . the retarded time can readily be converted into an emission height @xmath40 and it can be seen that the expressions are consistent with the expression for the cherenkov emission angle , @xmath41 . for the general case it has to be calculated numerically . to calculate the realistic pulse form for @xmath42 it is essential to include the fact that the emitting plasma cloud has a finite size , a^_w(t , ) = d^2 dh ( h , ) a^_pl(t,- ) [ eq : vec - pot ] where @xmath43 is the relative transverse coordinate to the shower axis and the source is at the position @xmath44 . the density profile is parameterized as @xcite w(r , h)=2r ( h,)= n ( r+r_0)^-3.5 h e^-2h / l at a fixed shower time @xmath8 , where @xmath45 is the longitudinal distance from the shower front that , by definition , moves with the vacuum speed of light @xmath4 . the normalization constant , @xmath46 , is chosen such that @xmath47 . positive values of @xmath45 mean a position behind the shower front , hence @xmath48 is zero for @xmath49 . the parameters are taken as @xmath50 m and @xmath51 m , following the results of simulations using the cascade mode of the conex - mc - geo shower simulation and analysis package @xcite . the results of these simulations indicate that the pancake thickness parameter @xmath28 has to be considerably smaller than the value of @xmath52 m used in previous calculations @xcite .
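the cherenkov time for a constant index of refraction can be checked with a short numeric sketch . assumptions , for illustration only : the point source moves straight down at the vacuum speed of light ( beta = 1 ) and reaches the ground at t = 0 , the observer sits on the ground at perpendicular distance d , and rays are straight .

```python
import math

C = 299792458.0  # vacuum speed of light, m/s

def observer_time(h, d, n):
    """arrival time at the observer of a signal emitted at height h:
    emission time -h/C plus the optical travel time n*sqrt(h^2+d^2)/C."""
    return (-h + n * math.sqrt(h * h + d * d)) / C

def cherenkov_time(d, n):
    """the cherenkov time is the minimum of observer_time over the
    emission height: d(observer_time)/dh = 0 gives the cherenkov
    height h_c = d/sqrt(n^2-1) and c*t_c = d*sqrt(n^2-1), the point
    where the derivative of observer time with respect to retarded
    time diverges."""
    return d * math.sqrt(n * n - 1.0) / C
```

scanning observer_time over the emission height reproduces the closed form , and the minimising height h_c = d / sqrt(n^2-1) is exactly the cherenkov - angle geometry mentioned in the text .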
retaining the terms pertinent to geomagnetic radiation and using partial integration so that the derivatives operate on the density distribution exclusively , the electromagnetic field can be expressed as [ eq : e - field - full ] e^i = - d^2 dh ( j^i_pl + w ) . the complication is now reduced to the integration of inverse square - root divergences over smoothly varying functions , giving finite results . we will argue later that the first term is important for cherenkov radiation while the second is dominant for @xmath20 . m from the core for different values of the index of refraction as discussed in the text , @xmath20 , @xmath33 fixed , and @xmath27 realistic . in this calculation the lateral extent of the shower is ignored . ] [ fig : gm - r0-d100 ] to get a better appreciation of the structure of the pulse and the influence of the index of refraction , we will discuss in some detail the results for an observer at a distance @xmath53 m from a shower with @xmath32ev at an angle @xmath54 . to simplify the discussion we will - for the time being - ignore the radial extent of the charge cloud . essential for the structure of the pulse is the longitudinal shower profile @xmath55 , as shown by the thinly - dotted black curve in fig . [ fig : trett ] . for the case that @xmath20 , the denominator @xmath9 is a smoothly changing function over the integration regime and the dominant contribution is derived from the last term . since the derivative of the shower profile reaches a maximum at @xmath56 km , it can be seen from fig . [ fig : trett ] that the peak of the pulse occurs at @xmath57ns , which agrees with the result in fig . [ fig : gm - r0-d100 ] . the integral of the first term , being a derivative , almost vanishes . for the cases that @xmath42 the reasoning is rather different . the denominator @xmath9 vanishes at the cherenkov time , indicated by the vertical arrows in fig . [ fig : trett ] , and thus @xmath58 varies strongly as a function of @xmath45 .
the contribution from the first term in the integral is large ( in contrast to the case @xmath20 ) and results in an enhanced contribution from the corresponding emission height . the pulse height will thus be proportional to @xmath59 , while the peak is observed a little after @xmath38 , in agreement with the results shown in fig . [ fig : gm - r0-d100 ] . since @xmath60 is large for @xmath53 m , this results in a strong pulse . the remaining contributions in the integral are of secondary importance in this case . in reality the refractivity is equal to @xmath61 at ground level and decreases exponentially with height , @xmath62 . the values of the retarded time thus lie in between those obtained with @xmath20 and @xmath33 and are shown as the dotted - blue curve in fig . [ fig : trett ] . also for this case there is a clear cherenkov time , which is slightly smaller than that for @xmath33 , resulting in a very similar pulse as for the case of a fixed finite @xmath5 . the main effect of including the radial extent of the shower is to smooth the time structure of the pulse and thus to wash out some of the effects of @xmath42 . the third panel of fig . [ fig : gm - r - dall ] gives the complete geomagnetic result and should be compared with fig . [ fig : gm - r0-d100 ] . the differences between the three different choices for the index of refraction have diminished : instead of being three times as large , the pulse height is increased only by a factor of two . , @xmath33 fixed , and @xmath27 realistic ) , including the lateral extent of the shower . ] [ fig : gm - r - dall ] the cherenkov effect is strongly distance dependent as shown in fig .
[ fig : gm - r - dall ] . as expressed in the discussion following , emission at the cherenkov angle @xmath63 relates a distance @xmath64 from the shower core to a particular emission height . for an observer close to the shower , @xmath65 m , this cherenkov height lies well below the shower maximum , where @xmath55 has fallen considerably and the intensity of the emitted radiation is low . the intensity increases with increasing distance to reach a maximum around @xmath53 m , where the cherenkov point lies at the height of the shower maximum . at even larger distances the cherenkov point lies above the shower maximum and the amplitude decreases rapidly . the position of the cherenkov maximum thus reflects the shower maximum and is therefore sensitive to the composition of the cosmic ray . for @xmath20 the main strength of the pulse is emitted at the height where the derivative of the shower profile is large . since the retarded distance , @xmath66 , corresponding to this height in the shower evolution increases for observers further away from the shower core , the amplitude of the pulse is a monotonically decreasing function of distance . calculations confirm that for charge - excess radiation ( not reported here ) the effect of the index of refraction is very similar to that for geomagnetic radiation , with subtle differences due to the somewhat different weighting over shower height . for the chosen geometry the geomagnetic effect is however dominant . this shows that the cherenkov effect applies equally to radiation from a moving charge , to which it is usually applied , and to that from a moving electric current , which is the focus here . the polarization of the radio signal , which distinguishes geomagnetic and charge - excess radiation , is not affected by the cherenkov effect .
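the mapping between core distance and cherenkov emission height admits a one - line sketch . it assumes straight rays , a vertical shower , and a single effective index n_eff averaged between the emission height and the ground ; the sea - level value used in the check below is illustrative only .

```python
import math

def cherenkov_distance(h, n_eff):
    """ground distance from the shower axis that the cherenkov angle
    maps onto emission height h: cos(theta_c) = 1/n gives
    tan(theta_c) = sqrt(n^2 - 1), hence d = h * tan(theta_c)."""
    return h * math.sqrt(n_eff * n_eff - 1.0)
```

with a shower maximum at a few km and n_eff of order 1.0003 this lands at roughly a hundred metres , in line with the position of the cherenkov maximum in the amplitude pattern discussed above .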
the principal signature of the cherenkov effect is that the pulse at a certain distance , 100 m for the present example , is considerably larger than it would have been for the case that @xmath20 . this feature is especially clear from fig . [ fig : gm - r - dph ] , where the height of the pulse is plotted at various distances from the core . at short distances the pulse height diverges for the case of @xmath20 ; it should be noted that this divergence depends strongly on the pancake thickness parameter @xmath28 : smaller values give a stronger divergence at @xmath67 . apart from an overall scaling factor , the picture is not affected by @xmath28 for @xmath68 m . for the realistic case where @xmath42 , a marked deviation is predicted , with a clear enhancement in peak intensity at distances ranging from 50 to about 100 m from the core . this peak results from the fact that the cherenkov point lies close to the shower maximum for @xmath69 100 m . , @xmath33 fixed , and @xmath27 realistic ) . ] [ fig : gm - r - dph ] the feature shown in fig . [ fig : gm - r - dph ] , an increase of the amplitude of the geomagnetic radiation with distance for @xmath70 instead of the monotonic decrease predicted for @xmath20 , remains for showers at a large angle with the vertical . for these inclined showers the shower maximum will occur at larger values of @xmath31 , the distance to the point of impact on earth , and the maximum in the intensity will thus be observed at larger distances from the shower core , consistent with the angle for cherenkov emission . some hints of this effect may have been seen in recent results from the lopes @xcite collaboration , showing that for certain events the pulse height follows the trend shown by the dotted - blue line in fig . [ fig : gm - r - dph ] . more detailed measurements are necessary , in which full attention is given to polarization observables , which are crucial to distinguish geomagnetic and charge - excess radiation .
such measurements are planned for new and renovated set - ups at lopes , codalema @xcite , recently also new set - ups at the pierre auger observatory ( maxima @xcite , aera @xcite ) , and lofar @xcite . this work is part of the research program of the stichting voor fundamenteel onderzoek der materie ( fom ) , which is financially supported by the nederlandse organisatie voor wetenschappelijk onderzoek ( nwo ) . f.d . kahn and i. lerche , proc . royal soc . ; h. falcke et al . [ lopes ] , nature ; k.d . de vries et al . ; h. schoorlemmer [ pierre auger coll . ] , proceedings of arena 2010 , nucl . instr . and meth . a ( in print ) , doi:10.1016/j.nima.2010.11.145 ; j. coppens [ pierre auger coll . ] , nucl . instr . and meth . a ; s. fliescher [ pierre auger coll . ] , contribution to the 2010 arena conference , nucl . instr . and meth . a ( in print ) , http://dx.doi.org/10.1016/j.nima.2010.11.045 .
very energetic cosmic rays entering the atmosphere of the earth will create a plasma cloud moving with almost the speed of light . the magnetic field of the earth induces an electric current in this cloud which is responsible for the emission of coherent electromagnetic radiation . we propose to search for a new effect : due to the index of refraction of air this radiation is collimated in a cherenkov cone . to express the difference from usual cherenkov radiation , i.e. the emission from a fast moving electric charge , we call this magnetically - induced cherenkov radiation . we indicate its signature and possible experimental verification .
SECTION 1. LAW ENFORCEMENT POWERS OF INSPECTOR GENERAL AGENTS. (a) In General.--Section 6 of the Inspector General Act of 1978 (5 U.S.C. App.) is amended by adding at the end the following: ``(e)(1) In addition to the authority otherwise provided by this Act, each Inspector General appointed under section 3, any Assistant Inspector General for Investigations under such an Inspector General, and any special agent supervised by such an Assistant Inspector General may be authorized by the Attorney General to-- ``(A) carry a firearm while engaged in official duties as authorized under this Act or other statute, or as expressly authorized by the Attorney General; ``(B) make an arrest without a warrant while engaged in official duties as authorized under this Act or other statute, or as expressly authorized by the Attorney General, for any offense against the United States committed in the presence of such Inspector General, Assistant Inspector General, or agent, or for any felony cognizable under the laws of the United States if such Inspector General, Assistant Inspector General, or agent has reasonable grounds to believe that the person to be arrested has committed or is committing such felony; and ``(C) seek and execute warrants for arrest, search of a premises, or seizure of evidence issued under the authority of the United States upon probable cause to believe that a violation has been committed. ``(2) The Attorney General may authorize exercise of the powers under this subsection only upon an initial determination that-- ``(A) the affected Office of Inspector General is significantly hampered in the performance of responsibilities established by this Act as a result of the lack of such powers; ``(B) available assistance from other law enforcement agencies is insufficient to meet the need for such powers; and ``(C) adequate internal safeguards and management procedures exist to ensure proper exercise of such powers. 
``(3) The Inspector General offices of the Department of Commerce, Department of Education, Department of Energy, Department of Health and Human Services, Department of Housing and Urban Development, Department of the Interior, Department of Justice, Department of Labor, Department of State, Department of Transportation, Department of the Treasury, Department of Veterans Affairs, Agency for International Development, Environmental Protection Agency, Federal Deposit Insurance Corporation, Federal Emergency Management Agency, General Services Administration, National Aeronautics and Space Administration, Nuclear Regulatory Commission, Office of Personnel Management, Railroad Retirement Board, Small Business Administration, Social Security Administration, and the Tennessee Valley Authority are exempt from the requirement of paragraph (2) of an initial determination of eligibility by the Attorney General. ``(4) The Attorney General shall promulgate, and revise as appropriate, guidelines which shall govern the exercise of the law enforcement powers established under paragraph (1). ``(5)(A) Powers authorized for an Office of Inspector General under paragraph (1) may be rescinded or suspended upon a determination by the Attorney General that any of the requirements under paragraph (2) is no longer satisfied or that the exercise of authorized powers by that Office of Inspector General has not complied with the guidelines promulgated by the Attorney General under paragraph (4). ``(B) Powers authorized to be exercised by any individual under paragraph (1) may be rescinded or suspended with respect to that individual upon a determination by the Attorney General that such individual has not complied with guidelines promulgated by the Attorney General under paragraph (4). ``(6) A determination by the Attorney General under paragraph (2) or (5) shall not be reviewable in or by any court. 
``(7) To ensure the proper exercise of the law enforcement powers authorized by this subsection, the Offices of Inspector General described under paragraph (3) shall, not later than 180 days after the date of enactment of this subsection, collectively enter into a memorandum of understanding to establish an external review process for ensuring that adequate internal safeguards and management procedures continue to exist within each Office and within any Office that later receives an authorization under paragraph (2). The review process shall be established in consultation with the Attorney General, who shall be provided with a copy of the memorandum of understanding that establishes the review process. Under the review process, the exercise of the law enforcement powers by each Office of Inspector General shall be reviewed periodically by another Office of Inspector General or by a committee of Inspectors General. The results of each review shall be communicated in writing to the applicable Inspector General and to the Attorney General. ``(8) No provision of this subsection shall limit the exercise of law enforcement powers established under any other statutory authority, including United States Marshals Service special deputation.''. (b) Promulgation of Initial Guidelines.-- (1) Definition.--In this subsection, the term ``memoranda of understanding'' means the agreements between the Department of Justice and the Inspector General offices described under section 6(e)(3) of the Inspector General Act of 1978 (5 U.S.C. App) (as added by subsection (a) of this section) that-- (A) are in effect on the date of enactment of this Act; and (B) authorize such offices to exercise authority that is the same or similar to the authority under section 6(e)(1) of such Act. (2) In general.--Not later than 180 days after the date of enactment of this Act, the Attorney General shall promulgate guidelines under section 6(e)(4) of the Inspector General Act of 1978 (5 U.S.C. 
App) (as added by subsection (a) of this section) applicable to the Inspector General offices described under section 6(e)(3) of that Act. (3) Minimum requirements.--The guidelines promulgated under this subsection shall include, at a minimum, the operational and training requirements in the memoranda of understanding. (4) No lapse of authority.--The memoranda of understanding in effect on the date of enactment of this Act shall remain in effect until the guidelines promulgated under this subsection take effect. (c) Effective Dates.-- (1) In general.--Subsection (a) shall take effect 180 days after the date of enactment of this Act. (2) Initial guidelines.--Subsection (b) shall take effect on the date of enactment of this Act. Passed the Senate October 17, 2002. Attest: JERI THOMSON, Secretary.
Amends the Inspector General Act of 1978 to permit each Inspector General, any Assistant Inspector General for Investigations, and any special agent supervised by such an Assistant Inspector General to be authorized by the Attorney General to: (1) carry a firearm while engaged in official duties or as expressly authorized by the Attorney General; (2) make an arrest without a warrant while engaged in such duties (or as so expressly authorized) for any offense against the United States committed in the presence of such Inspector, Assistant Inspector, or agent, or for any felony; and (3) seek and execute warrants for an arrest, search, or seizure. Empowers the Attorney General to authorize the exercise of such powers only upon an initial determination that: (1) the affected Office of Inspector General is significantly hampered in the performance of such responsibilities as a result of the lack of such powers; (2) available assistance from other law enforcement agencies is insufficient to meet the need for exercising such powers; and (3) adequate internal safeguards and management procedures exist to ensure proper exercise of those powers. Exempts specified Offices of Inspector General from such an initial determination of eligibility. Directs such Offices to collectively enter into a memorandum of understanding to establish an external review process for ensuring that such safeguards and procedures continue to exist within each Office and any Office that receives such an authorization.
SECTION 1. SHORT TITLE. This Act may be cited as the ``American Fighter Aces Congressional Gold Medal Act''. SEC. 2. FINDINGS. The Congress finds the following: (1) An American Fighter Ace is a fighter pilot who has served honorably in a United States military service and who has destroyed 5 or more confirmed enemy aircraft in aerial combat during a war or conflict in which American armed forces have participated. (2) Beginning with World War I, and the first use of airplanes in warfare, military services have maintained official records of individual aerial victory credits during every major conflict. Of more than 60,000 United States military fighter pilots who have taken to the air, fewer than 1,500 have become Fighter Aces. (3) Americans became Fighter Aces in the Spanish Civil War, Sino-Japanese War, Russian Civil War, Arab-Israeli War, and others. Additionally, American military groups recruited United States military pilots to form the American Volunteer Group, Eagle Squadron, and others that produced American-born Fighter Aces fighting against Axis powers prior to Pearl Harbor. (4) The concept of a Fighter Ace is that they fought for freedom and democracy across the globe, flying in the face of the enemy to defend freedom throughout the history of aerial combat. American-born citizens became Fighter Aces flying under the flag of United States-allied countries and became some of the highest-scoring Fighter Aces of their respective wars. (5) American Fighter Aces hail from every State in the Union, representing numerous ethnic, religious, and cultural backgrounds. (6) Fighter Aces possess unique skills that have made them successful in aerial combat. These include courage, judgment, keen marksmanship, concentration, drive, persistence, and split-second thinking that makes an Ace a war fighter with unique and valuable flight-driven skills.
(7) The Aces' training, bravery, skills, sacrifice, attention to duty, and innovative spirit illustrate the most celebrated traits of the United States military, including service to country and the protection of freedom and democracy. (8) American Fighter Aces have led distinguished careers in the military, education, private enterprise, and politics. Many have held the rank of General or Admiral and played leadership roles in multiple war efforts from WWI to Vietnam through many decades. In some cases they became the highest-ranking officers of subsequent wars. (9) The extraordinary heroism of the American Fighter Ace boosted American morale at home and encouraged many men and women to enlist to fight for America and democracy across the globe. (10) Fighter Aces were among America's most-prized military fighters during wars. When they rotated back to the United States after combat tours, they trained cadets in fighter pilot tactics that they had learned over enemy skies. The teaching of combat dogfighting to young aviators strengthened our fighter pilots to become more successful in the skies. The net effect of this was to shorten wars and save the lives of young Americans. (11) Following military service, many Fighter Aces became test pilots due to their superior flying skills and quick-thinking abilities. (12) Richard Bong was America's top Ace of all wars, scoring a confirmed 40 enemy victories in WWII. He was from Poplar, Wisconsin, and flew the P-38 Lightning in all his combat sorties flying for the 49th Fighter Group. He was killed in 1945 during a P-80 test flight in which the engine flamed out on takeoff. (13) The American Fighter Aces are one of the most decorated military groups in American history. Twenty-two Fighter Aces have achieved the rank of Admiral in the Navy. Seventy-nine Fighter Aces have achieved the rank of General in the Army, Marines, and Air Force. Nineteen Medals of Honor have been awarded to individual Fighter Aces.
(14) The American Fighter Aces Association has existed for over 50 years as the primary organization with which the Aces have preserved their history and told their stories to the American public. The Association established and maintains the Outstanding Cadet in Airmanship Award presented annually at the United States Air Force Academy; established and maintains an awards program for outstanding fighter pilot ``lead-in'' trainee graduates from the Air Force, Navy, and Marine Corps; and sponsors a scholarship program for descendants of American Fighter Aces. SEC. 3. CONGRESSIONAL GOLD MEDAL. (a) Presentation Authorized.--The Speaker of the House of Representatives and the President pro tempore of the Senate shall make appropriate arrangements for the presentation, on behalf of the Congress, of a single gold medal of appropriate design in honor of the American Fighter Aces, collectively, in recognition of their heroic military service and defense of our country's freedom, which has spanned the history of aviation warfare. (b) Design and Striking.--For the purposes of the award referred to in subsection (a), the Secretary of the Treasury shall strike the gold medal with suitable emblems, devices, and inscriptions, to be determined by the Secretary. (c) Smithsonian Institution.-- (1) In general.--Following the award of the gold medal in honor of the American Fighter Aces, the gold medal shall be given to the Smithsonian Institution, where it will be available for display as appropriate and available for research. (2) Sense of the congress.--It is the sense of the Congress that the Smithsonian Institution should make the gold medal awarded pursuant to this Act available for display elsewhere, particularly at appropriate locations associated with the American Fighter Aces, and that preference should be given to locations affiliated with the Smithsonian Institution. SEC. 4. DUPLICATE MEDALS. 
The Secretary may strike and sell duplicates in bronze of the gold medal struck pursuant to section 3 under such regulations as the Secretary may prescribe, at a price sufficient to cover the cost thereof, including labor, materials, dies, use of machinery, and overhead expenses, and the cost of the gold medal. SEC. 5. NATIONAL MEDALS. The medal struck pursuant to this Act is a national medal for purposes of chapter 51 of title 31, United States Code. Speaker of the House of Representatives. Vice President of the United States and President of the Senate.
American Fighter Aces Congressional Gold Medal Act - Directs the Speaker of the House of Representatives and the President pro tempore of the Senate to arrange for the presentation of a single congressional gold medal in honor of the American Fighter Aces, collectively, in recognition of their heroic military service and defense of the nation's freedom. Requires the medal to be given to the Smithsonian Institution for display and research purposes. Expresses the sense of Congress that the medal should be made available for display elsewhere, particularly at locations associated with the American Fighter Aces. Authorizes the Secretary of the Treasury to strike and sell bronze duplicates of the gold medal at a price sufficient to cover the costs of the medals.
endoscopic evaluation of the gastrointestinal tract offers both diagnostic and therapeutic options and has become the preferred procedure for the evaluation of gastrointestinal disorders . presented here are an illustrative example and a review of the world literature , with a focus on diagnostic and management options . a healthy , 75-year - old woman underwent screening colonoscopy at an outside facility and developed left upper quadrant abdominal pain over the ensuing days . imaging demonstrated a splenic cyst , and observational management was elected . over the next several months , the patient underwent serial ct scan examinations of her abdomen , which demonstrated a slowly but progressively enlarging splenic cyst . approximately 4 months after the inciting colonoscopy , the patient was referred to our facility for management . a 10x7-cm thin - walled splenic fluid collection was seen on ct ( figure 1 ) , abutting the abdominal wall and displacing the spleen medially . a fluid / fluid level could be seen within the collection , consistent with hematocrit effect ( blood cells separating from plasma and settling over time ) . the collection was percutaneously drained with an 8 french multi - holed catheter for 400cc of brown fluid , consistent with old blood . postprocedure ct confirmed complete collapse of the collection ( figure 2 ) and the catheter was removed . six days after the collection was drained , the patient presented with a return of her abdominal pain and was found on follow - up ct to have a recurrent 6x4-cm subcapsular splenic collection with a small hematocrit level , presumably from ongoing small hemorrhages . given the rapid reformation of the collection , the patient was immunized with pneumococcal and hib vaccines and taken for elective splenectomy 2 days later . though laparoscopic splenectomy was considered , open resection was chosen given the patient 's age and likelihood of significant inflammatory changes .
at laparotomy , she was noted to have a very mobile splenic flexure with an underdeveloped splenocolic ligament . inflammatory adhesions were found in the area of the spleen , and an 8x5-cm white cyst , slightly larger than the spleen itself , was found posterolateral to the spleen ( figures 3 and 4 ) . postoperatively , the patient did remarkably well and was discharged home 3 days later . presenting computed tomographic scan of patient with large undrained hemorrhagic splenic cyst abutting the left lateral abdominal wall . though barium enema has long been the gold standard for identification of colonic lesions , colonoscopy affords the ability to not only identify a lesion , but to provide tissue for pathologic examination and is often the definitive procedure for polyps . hence , colonoscopy is considered the accepted screening examination for all people over 50 years of age , and younger if a familial history of colon cancer is present . though overall very safe , the 2 most common complications are hemorrhage following polypectomy ( range , 1.8% to 2.5% ) and perforation ( range , 0.34% to 2.14% ) . the injury was first reported in 1974 , with 2 patients sustaining hemorrhage , one of which resulted in splenectomy . since the first report , an additional 36 cases have been reported in the world literature , including the current case . the average age of the patients is 64.9 years ( range , 33 to 90 ) , reflecting the age for which colonoscopy is routinely indicated , with a preponderance of females ( 66.7% ) . difficult intubation of the colon may impart direct injury to the spleen during passage through the splenic flexure . dense adhesions between the colon and spleen from previous surgery or disease may result in tearing of the splenic capsule as the colonoscope is passed through the colon .
telmos and others later added technical maneuvers to the list of risk factors , including slide - by , the alpha maneuver , straightening of the sigmoid loop , and externally applied abdominal pressure . in the available literature , only 6 patients were reported to have had some difficulty with intubation of the colon , while 21 specifically mentioned a lack of difficulty . reports from the remaining 10 cases made no mention of ease or difficulty of the procedure . twelve of the 37 reported patients had undergone previous surgery or had a disease process that may have enabled adhesion formation ( crohn 's disease and pancreatitis ) . eighteen had no predisposition to adhesion formation , and 8 reports lacked any information regarding risk factors . interestingly , of the 12 patients with possible adhesions , 10 had no intubation difficulties . most patients with colonoscopic splenic injury present relatively soon after injury with signs or symptoms suggestive of a problem . the range in the reported literature extends from within 2 hours to as long as 10 days . fourteen of the 32 patients with available information presented with symptoms within 12 hours of colonoscopy . the remaining 18 patients presented over the following days . the vast majority of patients with information available presented with symptoms of pain ( 28/32 ) . approximately half of these same patients also presented with evidence of hemorrhage or shock , or both ( 18/32 ) . with the exception of 1 patient successfully managed nonoperatively , the only reports that did not include pain as a presenting symptom were also those in which the patients died , suggesting the patients were too moribund to complain of pain . the diagnosis and management of these colonoscopy - related splenic injuries has to some degree reflected the available technology and is similar to the management of traumatic splenic injuries . 
although most injuries were diagnosed , or at least confirmed , at laparotomy in the era before 1987 , 21 of the 24 cases since 1989 have been diagnosed with noninvasive methods , such as computed tomography or ultrasonography . before the report of federle in 1983 , diagnosis of splenic injury sustained in trauma was indirect and was prompted by a positive diagnostic peritoneal lavage ( dpl ) leading to laparotomy . with the proven utility of ct in blunt abdominal trauma , only hemodynamically unstable patients now undergo dpl , and even that has been largely replaced by the focused abdominal sonography for trauma ( fast ) examination . the recognition over 100 years ago of the spleen 's role in immunoprotection against encapsulated organisms , such as streptococcus pneumoniae , haemophilus influenzae , and neisseria meningitidis was largely ignored until the 1950s when a swing in the management of splenic injuries began the era of splenic salvage . besides splenorrhaphy , splenic salvage maneuvers now include nonoperative management of splenic injuries and splenic artery embolization of pseudoaneurysms . in the trauma literature , nonoperative management of lower grade injuries ( grades i through iii ) is frequently successful . though none of the injuries induced at colonoscopy were graded in the available reports , these are presumably not the pulverized high - grade injuries seen in trauma , and probably fall into the grade i thru iii category . however , overall only 27.8% of patients with a splenic injury by colonoscopy have retained their spleens . considering only cases reported since 1988 , the splenectomy rate drops to 61.5% , still higher than that predicted from the trauma literature . finally , our patient presented with a condition not previously reported in the colonoscopy literature . the formation of a secondary cyst is rare and is characterized by the lack of a cellular lining as seen in a primary cyst .
successful percutaneous drainage under ultrasound guidance has been reported for splenic cysts secondary to blunt abdominal trauma , but not for those related to colonoscopy . our attempt at percutaneous drainage was similarly intended to prevent splenectomy , but unfortunately was unsuccessful . though experience with it is limited , this technique should remain in the armamentarium for the treatment of these injuries . in the history of splenic injury by colonoscopy , the experience of the first decade was laparotomy and splenectomy in all patients . though ct scanning has proven successful in diagnosing the injury , relatively few patients have escaped the experience with an intact spleen . nonoperative management and splenic artery embolization have been used with significant success in the trauma setting , but used only sparingly with colonoscopic injuries . though our attempt at percutaneous control of the secondary cyst formed as a result of a colonoscopic splenic injury was unsuccessful , we believe this could nevertheless represent an alternative to splenectomy in future patients .
injury to the spleen during routine colonoscopy is an extremely rare injury . diagnosis and management of the injury has evolved with technological advances and experience gained in the management of splenic injuries sustained in trauma . of the 37 reported cases of colonoscopic splenic injury , 12 had a history of prior surgery or a disease process suggesting the presence of adhesions . only 6 had noted difficulty during the procedure , and 31 patients experienced pain , shock , or hemoglobin drop as the indication of splenic injury . since 1989 , 21/24 ( 87.5% ) patients have been diagnosed initially using computed tomography or ultrasonography . overall , only 27.8% have retained their spleens . none have experienced as long a delay as our patient , nor have any had an attempt at percutaneous control of the injury . this report presents an unusual case of a rare complication of colonoscopy and the unsuccessful use of one nonoperative technique , and reviews the experience reported in the world literature , including current day management options .
acute compartment syndrome , if unrecognized and left untreated , can lead to ischemia , necrosis and potential need for amputation . to our knowledge , we present the first reported case of hypothyroid - induced compartment syndrome in all four extremities . a 49-year - old female presented with the sudden onset of bilateral lower and upper extremity swelling and pain . her symptoms persisted , necessitating the use of a wheelchair for mobility , prompting her to return to the emergency department . physical examination demonstrated bilateral pretibial myxedema with similar skin changes to the extensor surface of the forearms . laboratory values were as follows : thyroid - stimulating hormone ( tsh ) 164.73 uiu / ml ( 0.45 uiu / ml ) , creatine kinase ( ck ) 13 977 iu / l and myoglobin 602 ng / ml ( 0 - 115 ng / ml ) . bilateral lower extremity fasciotomies were performed . this revealed nonviable muscle in the right anterior and lateral compartments and viable muscle in the left . bilateral forearm fasciotomies were subsequently performed . this revealed viable muscle in the dorsal compartments on the right and ischemic changes to the left forearm . bilateral compartment syndrome secondary to hypothyroidism is exceedingly rare ; hypothyroid - associated compartment syndrome is typically found unilaterally . in our pubmed review of the literature , one report described a 40-year - old hypothyroid male who developed bilateral anterior tibial compartment syndrome ( atcs ) . he underwent fasciotomy to decompress the anterior tibial compartments . in the report by hsu et al . , a 33-year - old female patient with undiagnosed hypothyroidism developed unilateral left lower extremity compartment syndrome , and underwent four - compartment fasciotomy . hariri et al . reported a 60-year - old male patient who was noncompliant with levothyroxine and developed bilateral lower extremity anterior compartment syndrome , relieved by four - compartment fasciotomy . a further case by ramadhan et al . reported on a patient who developed rhabdomyolysis and common peroneal nerve compression after thyroid hormone withdrawal in preparation for thyroid ablation .
through our review , this is the only case of a hypothyroid patient developing compartment syndrome necessitating surgical intervention in all four extremities . manifestations of myopathy secondary to hypothyroidism include myalgias , rhabdomyolysis , myxedema , pseudohypertrophy and acs . the causes of compartment syndrome can be classified as causes that either increase compartment contents , such as trauma and edema , or restrict compartment volume , such as ill - fitted orthopedic casts [ 1 , 2 , 5 ] . the most common cause of compartment syndrome is trauma , due to fractures , crush or vascular injuries , or severe burns . spontaneous compartment syndrome can be seen , generally secondary to diabetes mellitus or hypothyroidism among other causes . clinical findings such as pretibial myxedema , the discontinuation of her thyroid medication , the change in the patient 's voice , as well as the laboratory findings all point to hypothyroidism as the cause of the patient 's acs . compartment syndrome develops when the intra - compartmental pressure ( icp ) increases until it impedes tissue perfusion . as pressure increases , ischemia and subsequent necrosis occur [ 1 , 6 ] . in patients with hypothyroidism , the volume needed to observe ischemic changes may be much less . the pathogenesis of hypothyroidism - induced compartment syndrome is unclear , but several theories exist . tsh - induced fibroblast activation results in increased glycosaminoglycan synthesis and deposition in the epidermis , dermis , smooth muscles and skeletal muscle [ 1 , 2 , 4 ] . the primary glycosaminoglycan in hypothyroid myxedematous changes is hyaluronic acid , which binds water to cause edema [ 2 , 4 ] . this can also lead to increased vascular permeability , extravasation of plasma proteins into the interstitial space , and impaired lymphatic drainage [ 2 , 4 , 5 ] . all of these factors can contribute to increased icp and subsequent acs .
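the perfusion - pressure logic described above can be sketched numerically . this is a generic illustration of the commonly cited delta - p criterion ( diastolic blood pressure minus icp , with fasciotomy often considered when delta - p falls to about 30 mmhg or below ) ; the function names , threshold and example values are illustrative assumptions , not measurements from this case .

```python
# illustrative sketch of the perfusion-pressure (delta-p) criterion for
# acute compartment syndrome; values and names are hypothetical, not
# taken from the case report above.

def perfusion_delta(diastolic_mmhg: float, icp_mmhg: float) -> float:
    """compartment perfusion delta (mmHg): diastolic pressure minus
    the measured intra-compartmental pressure (icp)."""
    return diastolic_mmhg - icp_mmhg

def fasciotomy_indicated(diastolic_mmhg: float, icp_mmhg: float,
                         threshold_mmhg: float = 30.0) -> bool:
    """True when delta-p has fallen to or below the threshold commonly
    used to support the decision for fasciotomy."""
    return perfusion_delta(diastolic_mmhg, icp_mmhg) <= threshold_mmhg

# example: diastolic 70 mmHg with a measured icp of 45 mmHg gives
# delta-p = 25 mmHg, below the 30 mmHg threshold
print(fasciotomy_indicated(70, 45))  # -> True
```

the threshold is a decision aid , not a substitute for serial clinical examination .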
muscle hypertrophy is seen in about 1% of myxedema cases [ 2 , 3 , 5 ] . contributing to the patient 's compartment syndrome was also likely rhabdomyolysis secondary to hypothyroidism . hypothyroidism - induced rhabdomyolysis , however , generally occurs alongside precipitating factors like trauma or statin use , which are absent in this patient 's case [ 5 - 8 ] . this is a rare case of compartment syndrome in all four extremities , secondary to myxedematous changes of hypothyroidism . early recognition , whether from trauma , burns , medications or the rare cause of hypothyroidism , is paramount . delay in diagnosis and treatment can lead to lifelong disability from foot drop , paresthesias or limb loss . no sources of support , e.g. grants , equipment or drugs , were required for this study .
acute compartment syndrome ( acs ) is an uncommon complication of uncontrolled hypothyroidism . if unrecognized , this can lead to ischemia , necrosis and potential limb loss . a 49-year - old female presented with the sudden onset of bilateral lower and upper extremity swelling and pain . the lower extremity anterior compartments were painful and tense . the extensor surface of the upper extremities exhibited swelling and pain . motor function was intact , however , limited due to pain . bilateral lower extremity fasciotomies were performed . postoperative day 1 , upper extremity motor function decreased significantly and paresthesias occurred . she therefore underwent bilateral forearm fasciotomies . the pathogenesis of hypothyroidism - induced compartment syndrome is unclear . thyroid - stimulating hormone - induced fibroblast activation results in increased glycosaminoglycan deposition . the primary glycosaminoglycan in hypothyroid myxedematous changes is hyaluronic acid , which binds water causing edema . this increases vascular permeability , extravasation of proteins and impaired lymphatic drainage . these contribute to increased intra - compartmental pressure and subsequent acs .
the mass of the @xmath0 boson , @xmath3 , is one of the fundamental parameters of the standard model . as is well known @xcite , a precise measurement of @xmath3 , along with other precision electroweak measurements , will lead , within the standard model , to a strong indirect constraint on the mass of the higgs boson . once the higgs itself is found , this will provide a consistency test of the standard model and , perhaps , evidence for physics beyond . the precise measurement of @xmath3 is therefore a priority of future colliders . lep2 and run ii at fermilab ( @xmath4 fb@xmath5 ) are aiming for an uncertainty on @xmath3 of about 40 mev @xcite and 35 mev @xcite , respectively . an upgrade of the tevatron @xcite , beyond run ii , might be possible , with a goal of an overall integrated luminosity of @xmath6 and a precision on @xmath3 of about 15 mev . clearly , hadron colliders have had and will continue to have a significant impact on the measurement of @xmath3 . in this short paper @xcite we investigate the potential to measure @xmath3 at the large hadron collider ( lhc ) . the lhc will provide an extremely copious source of @xmath0 bosons , thus allowing in principle for a statistically very precise measurement . in section 2 we consider the detector capabilities , in section 3 the theoretical uncertainties , and in section 4 the experimental uncertainties . we present our conclusions in section 5 . a potential problem is that the general - purpose lhc detectors might not be able to trigger on leptons with sufficiently low transverse momentum ( @xmath7 ) to record the @xmath0 sample needed for a measurement of @xmath3 . while this may be true at the full lhc luminosity ( @xmath8 ) it does not appear to be the case at @xmath2 . based on a full geant simulation of the calorimeter , the cms isolated electron / photon trigger @xcite should provide an acceptable rate ( @xmath9 khz at level 1 ) for a threshold setting of @xmath10 gev / c . 
this trigger will be fully efficient for electrons with @xmath11 gev / c . the cms muon trigger @xcite should also operate acceptably with a threshold of @xmath12 gev / c at @xmath2 . atlas should have similar capabilities . it is likely that the accelerator will operate for at least a year at this ` low ' luminosity to allow for studies which require heavy quark tagging ( e.g. , @xmath13-physics ) . this should provide an integrated luminosity of the order of @xmath14 . the mean number of interactions per crossing , @xmath15 , is about 2 at the low luminosity . this is actually lower than the number of interactions per crossing during the most recent run ( ib ) at the fermilab tevatron . in this relatively quiet environment it should be straightforward to reconstruct electron and muon tracks with good efficiency . furthermore , both the atlas@xcite and cms@xcite detectors offer advances over their counterparts at the tevatron for lepton identification and measurement : they have precision electromagnetic calorimetry ( liquid argon and pbwo@xmath16 crystals , respectively ) and precision muon measurement ( air core toroids and high field solenoid , respectively ) . the missing transverse energy will also be well measured thanks to the small number of interactions per crossing and the large pseudorapidity coverage ( @xmath17 ) of the hadronic calorimeters . the so - far standard transverse - mass technique for determining @xmath3 should thus continue to be applicable . this is to be contrasted with the problem that the increase in @xmath15 will create for run ii ( and beyond ) at the tevatron . in ref . @xcite , it was shown that it will substantially degrade the measurement of the missing transverse energy and therefore the measurement of @xmath3 . large theoretical uncertainties arising from substantial qcd corrections to @xmath0 production at the lhc energy could deteriorate the possible measurement of @xmath18 . in fig . 
[ fig : lhc]a , we present the leading order ( lo ) calculation and next - to - leading order ( nlo ) qcd calculation @xcite of the transverse mass distribution ( @xmath19 ) at the lhc ( 14 tev , @xmath20 collider ) in the region of interest for the extraction of the mass . we used the mrsa @xcite set of parton distribution functions , and imposed a charged lepton ( electron or muon ) rapidity cut of 1.2 , as well as a charged lepton @xmath7 and missing transverse energy cut of 20 gev . we used @xmath3 for the factorization and renormalization scales . no smearing effects due to the detector were included in our calculation . the uncertainty due to the qcd corrections can be gauged by considering the ratio of the nlo calculation over the lo calculation . this ratio is presented in fig . [ fig : lhc]b as a function of @xmath19 . as can be seen , the corrections are not large and vary between 10% and 20% . for the extraction of @xmath3 from the data , the important consideration is the change in the shape of the @xmath19-distribution . as can be seen from fig . [ fig : lhc]b , the corrections to the shape of the @xmath19-distribution are at the 10% level . note that an increase in the charged lepton @xmath7 cut has the effect of increasing the size of the shape change ( it basically increases the slope of the nlo over lo ratio ) , such that for the theoretical uncertainty it is better to keep that cut as low as possible . for comparison , in fig . [ fig : tev ] we present the same distributions as in fig . [ fig : lhc ] for the tevatron energy ( 1.8 tev , @xmath21 collider ) . the same cuts as for the lhc were applied . as can be seen the corrections are of the order of 20% and change the shape very little . currently , the estimated uncertainty on @xmath3 associated with modelling the transverse momentum distribution of the w ( _ i.e. _ due to qcd corrections ) is of the order of 10 mev at the tevatron@xcite .
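for reference , the transverse - mass variable underlying these distributions is built from the charged - lepton transverse momentum , the missing transverse energy and their azimuthal separation . the following is a minimal numerical sketch of the standard formula with illustrative inputs , not the simulation used in this paper :

```python
import math

def transverse_mass(pt_lep: float, pt_miss: float, dphi: float) -> float:
    """transverse mass (gev) from the charged-lepton pT, the missing
    transverse energy (both in gev) and their azimuthal separation
    (radians): m_T = sqrt(2 * pT_l * pT_miss * (1 - cos(dphi)))."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# a back-to-back 40 gev lepton and 40 gev missing-ET configuration sits
# at the jacobian edge: m_T = 80 gev, close to the w mass
print(transverse_mass(40.0, 40.0, math.pi))  # -> 80.0
```

the sharp jacobian edge of this distribution near the w mass is what makes the transverse - mass fit sensitive to @xmath3 .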
on the one hand , the larger qcd corrections at the lhc suggest that the uncertainty will also be larger . on the other hand , the @xmath7 distribution of the w can be constrained by data ( both @xmath0 and @xmath22 ) and the significant increase in statistics available , first at the upgraded tevatron and then at the lhc , should keep the uncertainty under control . note also that even though the shape change due to qcd corrections is undoubtedly larger at the lhc than at the tevatron , in absolute terms it is still small and a next - to - next - to leading order calculation might be able to reduce the theoretical uncertainty to an acceptable level . although such a calculation does not yet exist for the @xmath19-distribution , one may certainly imagine that it will exist before any data become available at the lhc . an alternative would be to use an observable with yet smaller qcd corrections . recently @xcite , it was pointed out that the ratio of @xmath0 to @xmath22 observables ( properly scaled by the respective masses ) is subject to smaller qcd corrections than the observables themselves . this is illustrated in fig . [ fig : ratio ] for the transverse mass . in fig . [ fig : ratio]a the ratio of nlo / lo calculations for the distribution of events as a function of the mass - scaled transverse mass @xmath23 is presented ; @xmath24 for the @xmath0 and @xmath25 for the @xmath22 . the cuts for the @xmath0 case are as described before . the @xmath22 is required to have one lepton with @xmath26 and the @xmath7 cuts are scaled proportionally to the mass compared to the @xmath7 cut in the @xmath0 case ( this avoids biasing the @xmath22 observables close to the cut ) . [ fig : ratio]b shows the factor nlo / lo for the quantity defined as the ratio of the number of @xmath0 events to the number of @xmath22 events at a given @xmath23 . as can be seen the nlo corrections to this quantity are much less dependent on @xmath23 than the distributions themselves .
indeed , the corrections are similar for the @xmath0 and @xmath22 mass - scaled distributions and therefore cancel in the ratio . this ratio could then be used to measure the @xmath0 mass , with small theoretical uncertainty . note that the measured mass and width of the z are already used to calibrate the detectors @xcite in current analyses @xcite . compared to the standard transverse - mass method , the ratio method will have a larger statistical uncertainty because it depends on the @xmath22 statistics , but a smaller systematic uncertainty because of the use of the ratio . this concept has now been verified in an experimental analysis @xcite . overall , this ratio method might therefore be competitive if the systematic uncertainty dominates the overall uncertainty on @xmath3 in the transverse - mass method . it is beyond the scope of this paper to study the systematic uncertainties in detail ; in what follows we shall benchmark these uncertainties using the demonstrated cdf and d0 performance . the ratio method can also be used with other distributions , like the @xmath7-distribution of the charged lepton itself , see @xcite . it is interesting to note that the average bjorken-@xmath27 of the partons producing the @xmath0 at the lhc with the cuts considered in this paper is @xmath28 , compared to @xmath29 at the tevatron . without the rapidity cut , the range of @xmath27 probed at the lhc is much larger , going from below @xmath30 to above @xmath31 . the uncertainty due to the parton distributions will thus be different at the lhc and the tevatron . considering that this uncertainty might dominate in this very high precision measurement , complementary measurements at the tevatron and the lhc would be very valuable . it is not possible to quantify this statement given the present status of pdf uncertainties @xcite . at this time , it is obviously impossible to predict the overall theoretical uncertainty at the lhc .
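the mass - scaled ratio construction can be sketched as a bin - by - bin ratio of w and z histograms in the scaled variable ; the masses below are nominal values used only for scaling , and the toy samples are invented for illustration :

```python
import random

M_W, M_Z = 80.4, 91.19  # GeV, nominal masses, used only to scale the variable

def scaled_hist(mt_values, mass, edges):
    """Histogram of the mass-scaled transverse mass x = m_T / M."""
    counts = [0] * (len(edges) - 1)
    for mt in mt_values:
        x = mt / mass
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return counts

def w_over_z_ratio(w_mt, z_mt, edges):
    """Bin-by-bin ratio N_W(x) / N_Z(x): QCD corrections that are common
    to the two mass-scaled distributions largely cancel in this quantity."""
    nw = scaled_hist(w_mt, M_W, edges)
    nz = scaled_hist(z_mt, M_Z, edges)
    return [w / z if z else None for w, z in zip(nw, nz)]

# toy samples near the respective Jacobian peaks
random.seed(1)
w_sample = [random.uniform(0.6, 1.0) * M_W for _ in range(10000)]
z_sample = [random.uniform(0.6, 1.0) * M_Z for _ in range(10000)]
edges = [0.6, 0.7, 0.8, 0.9, 1.0]
print(w_over_z_ratio(w_sample, z_sample, edges))
```

in an actual analysis the measured z mass fixes the scale , so fitting this ratio as a function of x returns the w mass with many systematic effects cancelled .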
the present uncertainty of @xmath32 mev from the @xmath0 production model @xcite would already limit the precision of the mass measurement attainable in run ii at the tevatron , so there is obviously great motivation to reduce such uncertainties . part of our goal in writing this paper is to emphasize that such motivation also exists for the lhc , by demonstrating its potential for an extremely precise @xmath0 mass measurement . in the rest of this paper we shall assume that the theoretical uncertainty at the lhc will be decreased to a value lower than the experimental uncertainty . the single @xmath0 production cross section at the lhc , for charged lepton @xmath33 gev/@xmath34 and pseudorapidity @xmath35 , and transverse mass @xmath36 , is about @xmath37 times larger than at the tevatron with the same cuts ( requiring the transverse momentum to be less than 15 or 30 gev does not significantly change this result ) . scaling from the latest high - statistics @xmath0 mass measurement at d0 @xcite , where @xmath38 @xmath39 events were taken from an integrated luminosity of 82 pb@xmath5 , we then expect at the lhc @xmath40 reconstructed @xmath41 events in one year at low luminosity ( for @xmath14 ) . figure [ fig : eta ] shows that if the lepton rapidity coverage at the lhc were increased above the @xmath42 assumed here , a large gain in signal statistics would be obtained , since the rapidity distribution is rather broad at the lhc energy . the configuration for which one bjorken-@xmath27 is very large and the other one very small is favored and creates the maxima at @xmath43 . the gain would be of order two if leptons were accepted out to @xmath44 , which is covered by the electromagnetic calorimetry of the atlas @xcite and cms @xcite experiments , and as high as a factor of four for @xmath45 , which may be covered by other experiments @xcite .
as already noted , it is not straightforward to estimate the precision with which @xmath3 can be determined because of the importance of systematic effects ; even a full geant simulation of a detector is unlikely to include all of them . we have therefore based our estimate on a parametrization of the actual cdf and d0 @xmath3 uncertainties developed in ref . @xcite in order to extrapolate to higher luminosity . the parametrization includes the effect of the number of interactions per crossing , @xmath15 ( which degrades the missing @xmath46 resolution ) , and of those systematic effects which can be controlled using other data samples ( such as @xmath22 bosons , @xmath47 mesons , etc . ) and which will therefore scale like @xmath48 . this behavior appears valid for the most important systematic uncertainties in the present measurement , such as the energy scale determination , underlying event effects , and the @xmath7 distribution of the @xmath0 . the use of these parametrizations , of course , explicitly does not take into account any of the detector improvements offered by the lhc detectors over their tevatron counterparts which were described earlier . the parametrized statistical and systematic uncertainties on @xmath3 are given by : @xmath49 where @xmath50 is the total number of events . taken at face value these would suggest that @xmath51 mev could be reached . however , these parametrizations do not account for effects which do not scale as @xmath48 . such systematic effects , which are not yet important in present data , will probably limit the attainable precision at the lhc . there is however an opportunity to measure the @xmath0 mass to a precision of better than @xmath52 mev at the lhc . 
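the parametrized uncertainty formulas themselves sit behind placeholders above , but the @xmath48 scaling argument they encode can be sketched directly ; all numbers below are purely hypothetical and serve only to illustrate how an uncertainty controlled by calibration samples shrinks with statistics :

```python
import math

def scaled_uncertainty(delta_ref, n_ref, n_new):
    """Scale an uncertainty that behaves like 1/sqrt(N) from a reference
    sample of n_ref events to a new sample of n_new events."""
    return delta_ref * math.sqrt(n_ref / n_new)

# hypothetical: an uncertainty of 70 MeV from 3e4 events shrinks by
# sqrt(3e4 / 3e6) = 0.1 when the sample grows a hundredfold
print(scaled_uncertainty(70.0, 3.0e4, 3.0e6))  # ~7.0
```

the caveat in the text is exactly the limit of this sketch : systematic effects that do not scale like @xmath48 are invisible to such an extrapolation and will eventually dominate .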
it is worth noting that , while we have assumed that only one year of operation at low luminosity is required to collect the dataset , considerably longer would undoubtedly be required after the data are collected in order to understand the detector at the level needed to make such a precise measurement . in conclusion , we see no serious problem with making a precise measurement of @xmath3 at the lhc if the accelerator is operated at low luminosity ( @xmath2 ) for at least a year . the cross section is large , triggering is possible , lepton identification and measurement are straightforward , and the missing transverse energy should be well determined . the qcd corrections to the transverse mass distribution , although larger than at the tevatron , still appear reasonable . a precision better than @xmath52 mev could be reached , making this measurement the world's best determination of the @xmath0 mass . we feel that it is well worth investigating this opportunity in more detail .

see for example w. de boer , a. dabelstein , w. hollik , w. mösle and u. schwickerath , ka - tp-18 - 96 , hep - ph/9609209 . a. ballestrero _ et al . _ , in proceedings of the workshop on physics at lep2 , g. altarelli , t. sjostrand and f. zwirner ( eds . ) , cern yellow report cern-96 - 01 ( 1996 ) . ` future electroweak physics at the fermilab tevatron : report of the tev_2000 study group , ' d. amidei and r. brock ( eds . ) , fermilab - pub-96/082 , april 1996 . the tev33 committee report , executive summary , http://www-theory.fnal.gov/tev33.ps . some preliminary work on this topic was presented at the 1996 dpf / dpb summer study on new directions for high - energy physics ( snowmass 96 ) , see the proceedings p. 527 . ` preliminary specification of the baseline calorimeter trigger algorithms , ' j. varela _ et al . _ , cms internal note cms - tn/96 - 010 , unpublished ; + ` the cms electron / photon trigger : simulation study with cmsim data , ' r. nobrega and j.
varela , cms internal note cms - tn/96 - 021 , unpublished . ` cms muon trigger preliminary specifications of the baseline trigger algorithms , ' f. loddo _ et al . _ , cms internal note cms - tn/96 - 060 , unpublished . atlas technical proposal , cern / lhcc/94 - 43 , december 1994 . cms technical proposal , cern / lhcc/94 - 38 , december 1994 . our calculation is based on : w.t . giele , e.w.n . glover , and d.a . kosower , nucl . phys . * b403 * ( 1993 ) 633 . a.d . martin , r.g . roberts and w.j . stirling , phys . rev . * d50 * ( 1994 ) 6734 . b. abbott _ et al . _ , `` a measurement of the @xmath0 boson mass '' , hep - ex/9712028 , to be published in phys . rev . @xmath18 measurement at the tevatron with high luminosity , w.t . giele and s. keller , to appear in the proceedings of the dpf96 conference , minneapolis , mn , august 10 15 , 1996 , fermilab - conf-96/307-t ; + determination of @xmath0 boson properties at hadron colliders , w.t . giele and s. keller ( fermilab ) , fermilab - pub-96 - 332-t , april 1997 , hep - ph/9704419 . r. raja , in proc . of the 7th topical workshop on proton - antiproton collider physics , fermilab , 1988 ; ed . by r. raja , a. tollestrup and j. yoh , pub . by world scientific . `` measurement of @xmath18 using the transverse mass ratio of w and z '' , s. rajagopalan and m. rijssenbeek , for the d0 collaboration , proceedings of the 1996 dpf / dpb summer study on new directions for high - energy physics ( snowmass 96 ) , p. 537 , fermilab - conf-96 - 452-e . there were some discussions on the issue of pdf uncertainties during snowmass 96 . see the structure function subgroup summary , proceedings of the 1996 dpf / dpb summer study on new directions for high - energy physics ( snowmass 96 ) , p. 1079 . k. eggert and c. taylor , cern - ppe/96 - 136 , october 1996 , submitted to nucl .
we explore the ability of the large hadron collider to measure the mass of the @xmath0 boson . we believe that a precision better than @xmath1 mev could be attained , based on a year of operation at low luminosity ( @xmath2 ) . if this is true , this measurement will be the world's best determination of the @xmath0 mass . fermilab - pub-97/317-t * measurement of the @xmath0 boson mass at the lhc *
to obtain the reaction rates relevant for nuclear astrophysics , experimental data should be extrapolated to the very low - energy region ( the gamow window ) . the cross section depends strongly on the energy and is therefore expressed in terms of the astrophysical @xmath13 factor @xmath14 . in the system of units in which @xmath15 , the coulomb ( sommerfeld ) parameter is @xmath16 , where @xmath17 is the charge of the nucleus @xmath18 , @xmath19 and @xmath20 are the relative momentum and the reduced mass of the interacting nuclei , and @xmath21 is their relative kinetic energy in the c.m . frame . the extrapolation of the cross section down to low energies assumes that the coulomb potential of the target nucleus and a projectile results from bare nuclei . in experimental conditions , however , the coulomb potential is screened by the electrons surrounding the target nucleus , which reduces the height of the coulomb barrier and leads to a higher cross section . as the energy approaches zero , the electron screening potential @xmath10 enhances the bare nucleus astrophysical factor @xmath22 . the estimated electron screening potential for the @xmath0li@xmath2he reaction in the adiabatic limit is the difference in atomic binding energies between li and be@xmath23 , that is , 186 ev @xcite . the trojan horse ( th ) experiment has indirectly measured the bare nucleus astrophysical factor @xcite . a comparison of the th results and a direct measurement led to @xmath24 ev @xcite . large @xmath10 values were obtained by engstler et al . @xcite using polynomial fits , as well as by barker @xcite using r - matrix fits . barker also noted @xcite that fixing @xmath10 at 175 ev results in a reasonable fit with only a slightly higher @xmath25 . ruprecht et al . @xcite reported @xmath26 ev and concluded that the lower screening energy is due to the influence of the @xmath5 subthreshold resonance .
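the definitions referred to above are the standard textbook ones , s(e) = e sigma(e) exp(2 pi eta) with eta the sommerfeld parameter , and the low - energy screening enhancement f(e) ~ exp(pi eta u_e / e) ; the sketch below uses these standard formulas , with a rough reduced mass for 6li + d that is our assumption , not a number from the text :

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def sommerfeld(z1, z2, mu_c2_mev, e_mev):
    """eta = Z1*Z2*alpha / (v/c), with v/c = sqrt(2E / mu c^2)
    for non-relativistic relative motion."""
    return z1 * z2 * ALPHA * math.sqrt(mu_c2_mev / (2.0 * e_mev))

def s_factor(sigma_mb, z1, z2, mu_c2_mev, e_mev):
    """Astrophysical S factor: S(E) = E * sigma(E) * exp(2*pi*eta)."""
    eta = sommerfeld(z1, z2, mu_c2_mev, e_mev)
    return e_mev * sigma_mb * math.exp(2.0 * math.pi * eta)

def screening_enhancement(ue_ev, z1, z2, mu_c2_mev, e_mev):
    """Low-energy enhancement of the screened over the bare S factor,
    f(E) ~ exp(pi * eta * Ue / E); Ue in eV, E in MeV."""
    eta = sommerfeld(z1, z2, mu_c2_mev, e_mev)
    return math.exp(math.pi * eta * (ue_ev * 1e-6) / e_mev)

# 6Li + d: Z1*Z2 = 3, reduced mass ~ 1400 MeV/c^2 (rough), E = 50 keV
print(screening_enhancement(150.0, 3, 1, 1400.0, 0.05))
```

the exponential blow - up of f(e) as e goes to zero is why the extracted u_e is so sensitive to the lowest - energy data points and to the assumed bare s factor .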
the current work presents a new r - matrix analysis of the low - energy @xmath0li@xmath2he reaction that considers three subthreshold resonances . the largest contribution to the low - energy @xmath13 factor comes from the resonances and the subthreshold resonances closest to the threshold of the compound nucleus . the three subthreshold resonances closest to the threshold of @xmath27be are the @xmath28 subthreshold state at @xmath29 kev , followed by the @xmath30 subthreshold resonance at @xmath31 mev and a @xmath28 state at @xmath32 mev . the @xmath28 resonance at @xmath33 mev is not included in the calculation . the astrophysical @xmath3 factor is first calculated including only the @xmath5 subthreshold resonance at @xmath29 kev , and then by adding two more subthreshold resonances . the fitting to the direct low - energy experimental data @xcite is done using a nonlinear least - squares procedure . the golovkov et al . @xcite data are outliers and hence are not included in the calculation . the errors of the experimental data of engstler et al . @xcite do not include the reported uncertainties arising from the number of counts , angular distributions , the effective energy and the target stoichiometry , and hence are reduced by @xmath34 . including the latter errors would result in a normalized @xmath35 . here , the normalized @xmath36 is defined as @xmath37 , where @xmath38 is the number of experimental points used in the fit calculation and @xmath39 is the number of parameters . the error bars of the elwyn et al . experimental data @xcite are enlarged to get @xmath40 , as underestimated errors may lead to a bias in the derived slope . the chosen error bars for the elwyn et al . data @xcite are set to @xmath41 of the measured @xmath3-factor value . the r - matrix fits are calculated using the modified r - matrix formulas @xcite , which use the `` observed '' rather than the formal parameters .
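the normalized chi - squared used throughout the analysis is the usual chi^2 / ( n - p ) ; a minimal sketch , with toy numbers of our own :

```python
def chi2(y_obs, y_err, y_model):
    """Weighted chi-squared of a model against data with 1-sigma errors."""
    return sum(((o - m) / e) ** 2 for o, m, e in zip(y_obs, y_model, y_err))

def chi2_normalized(y_obs, y_err, y_model, n_params):
    """Normalized chi-squared, chi^2 / (N - p), as defined in the text:
    N experimental points, p fitted parameters."""
    dof = len(y_obs) - n_params
    return chi2(y_obs, y_err, y_model) / dof

# toy example: 5 points, model off by exactly one error bar each, 1 parameter
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
err = [0.5] * 5
model = [1.5, 2.5, 3.5, 4.5, 5.5]
print(chi2_normalized(obs, err, model, 1))  # 5 / 4 = 1.25
```

this also shows why rescaling the error bars ( as done for the engstler and elwyn data ) directly rescales the normalized chi - squared of any given fit .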
the alternate parametrization allows one to set the resonance energies at the experimental values , instead of conventionally @xcite choosing random formal energies in the vicinity of the resonance and calculating the resulting `` observed '' energies after the r - matrix fit . in the three - level fit , the two @xmath42 resonances interfere , and the two - level r - matrix equations are used . the contribution of the @xmath43 state is added incoherently . considering only two channels for each state , the low - energy astrophysical @xmath13 factor is defined as @xcite @xmath44 here , the statistical spin factor @xmath45 is @xmath46 the collision matrix @xmath47 in terms of the `` observed '' parameters is identical to the one expressed in terms of the formal parameters @xcite @xmath48 and the @xmath49 matrix in terms of the `` observed '' parameters is defined as @xmath50 here , @xmath51 is the shift function @xmath52 , evaluated at energy @xmath53 , @xmath54 is the penetrability , and @xmath55 . the r - matrix formula of lane and thomas @xcite requires the inclusion of all partial waves . for the initial @xmath5 state one would have the @xmath56 , @xmath57 , @xmath58 , @xmath59 partial waves , and the @xmath60 , @xmath59 partial waves for the initial @xmath61 state . the exit channel for the @xmath42 state is @xmath62 , and , for the @xmath61 state , it is @xmath63 . the asymptotic normalization coefficients ( anc s ) for the bound states , as well as the reduced width amplitudes @xmath64 s for the final channels , are unknown and are therefore treated as parameters in the r - matrix calculation . assuming that the @xmath65-wave approximation for the incoming deuteron is reasonable at very low deuteron energies , the number of free parameters is reduced significantly .
unconstrained fits to the direct data do not allow a reliable determination of all parameters ; the present calculations therefore use constrained fits , which are subject to assumptions about the physical parameters . first , the r - matrix fits depend on the restrictions placed on the experimentally unknown @xmath66 channel partial widths . the subthreshold resonances under consideration are broad . the experimental resonance widths of @xmath67 are 0.8 mev and 0.88 mev for the @xmath42 states at 22.2 mev and 20.1 mev , respectively , and 0.72 mev for the @xmath61 state at 20.2 mev @xcite . the total width for each level is the sum of all partial widths . the threshold of the @xmath68li@xmath1he reaction entrance channel corresponds to a high excitation energy in the compound nucleus ; hence , many reaction channels are open . considering only the @xmath68li@xmath2he reaction , the major contribution to the total width at the subthreshold energy comes from the @xmath69 channel , due to the large q value of the reaction @xmath70q@xmath71 mev@xmath72 , and varies slowly with energy . the relative widths @xmath73 of the @xmath42 subthreshold resonance at @xmath8 mev have been determined experimentally to be @xmath74 @xcite . @xmath75 has been reported @xcite for the @xmath4 state . page @xcite performed a many - level , multichannel , simultaneous r - matrix fit to the known @xmath66 elastic scattering data , as well as to @xmath76 and other channel data .
a single - level fit to the data , as well as the fit that includes all three aforementioned subthreshold resonances , fails for the bare @xmath0li@xmath2he astrophysical @xmath3 factor when using the suggested formal @xmath66 channel partial widths of ref . @xcite : @xmath77 mev for the @xmath42 subthreshold state at 22.2 mev ( experimental total width 0.8 mev ) , @xmath78 mev for the @xmath61 subthreshold state at 20.2 mev ( experimental total width 0.72 mev ) , and @xmath79 mev for the @xmath42 subthreshold state at 20.1 mev ( experimental total width 0.88 mev ) . the fit also fails if one includes an arbitrary background . reference @xcite notes that because the resonances are broad , the resonance contribution cannot be separated from the background contribution , and therefore the elastic scattering data do not provide accurate resonance parameters . also , the best fit of ref . @xcite includes an additional unknown level at 580 kev ( excitation energy 22.78 mev ) with the formal @xmath80 mev and @xmath81 mev for the @xmath82-wave deuteron . the relative width @xmath83 for the @xmath5 subthreshold resonance at @xmath8 mev found in @xcite is significantly smaller than that given in the literature @xcite and obtained from the @xmath84li@xmath85 and @xmath86be@xmath85 reaction data . also , the @xmath87 of the @xmath5 subthreshold resonance at @xmath6 mev obtained by ref . @xcite is much smaller than that obtained by other r - matrix or polynomial fits to the data @xcite . hence , the present study considers only those constraints that are imposed by the experimental measurements . for the bound states the anc s are related to the reduced width amplitudes @xcite @xmath88 where @xmath89 is a whittaker function . without experimentally imposed constraints , it is not possible to say which range of the anc s is more appropriate . the anc s are therefore treated as free parameters .
the channel radii associated with the range of the nuclear force are calculated by the conventional formula of lane and thomas @xcite , @xmath90 , where @xmath91 and @xmath92 are the mass numbers and @xmath93 is a numerical value between @xmath94 and @xmath95 fm . in principle , the collision matrix is independent of the choice of the channel radii , provided a large enough number of levels is included in the analysis , and , consequently , so is the astrophysical @xmath3 factor . the partial widths and energies of the resonances resulting from an r - matrix fit calculation should also be channel - radius independent @xcite . the sensitivity of the resonance parameters to the adopted channel radii in the initial and final channels is illustrated in table [ tab1 ] for a single - level calculation that uses the @xmath65-wave approximation for the deuteron . the parameters depend strongly on the @xmath0li+d channel radius , which may indicate a need for the inclusion of additional initial - channel partial waves in the r - matrix fit . also , as the analysis deals with broad resonances , the strong dependence on the channel radii may support the need for an inclusion of additional levels in the r - matrix fit .
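the conventional channel - radius formula is a_c = r0 * ( a1^(1/3) + a2^(1/3) ) ; a minimal sketch , where the value of r0 is our assumption ( the text leaves the allowed range behind placeholders ) :

```python
def channel_radius(a1, a2, r0=1.4):
    """Conventional Lane-Thomas channel radius,
    a_c = r0 * (A1^(1/3) + A2^(1/3)), in fm.
    r0 is an assumed parameter here, typically of order 1-1.5 fm."""
    return r0 * (a1 ** (1.0 / 3.0) + a2 ** (1.0 / 3.0))

# 6Li + d entrance channel and alpha + alpha exit channel (r0 assumed)
print(round(channel_radius(6, 2), 2))
print(round(channel_radius(4, 4), 2))
```

scanning r0 over its plausible range reproduces the kind of channel - radius grid whose effect on the fitted parameters is tabulated in table [ tab1 ] .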
table [ tab1 ] ( sensitivity of the single - level fit parameters to the channel radii ) :

@xmath96   @xmath97   @xmath98   @xmath99 ( mev )   @xmath100   @xmath36   @xmath10 ( ev )
4.5   4.0   22.2610   0.6952   22.5222   0.7678   76.9987
4.5   4.5   22.2608   0.6947   22.5279   0.7681   76.5234
4.5   5.0   22.2606   0.6943   22.5278   0.7682   76.6777
4.5   5.5   22.2605   0.6940   22.5309   0.7684   76.4070
5.0   4.0   22.1911   0.7928   22.4021   0.7494   83.2557
5.0   4.5   22.1907   0.7923   22.4066   0.7496   82.8810
5.0   5.0   22.1904   0.7919   22.4087   0.7497   82.7473
5.0   5.5   22.1902   0.7917   22.4115   0.7498   82.4966
5.5   4.0   22.1368   0.9330   22.2113   0.7303   92.8859
5.5   4.5   22.1362   0.9327   22.2158   0.7305   92.4693
5.5   5.0   22.1357   0.9325   22.2155   0.7306   92.6005
5.5   5.5   22.1354   0.9323   22.2184   0.7307   92.3145

when allowing the channel radius to vary as one of the parameters , the single - level best fit sets the initial channel radius @xmath101 fm . the best fit places the subthreshold resonance at 22.252 mev with @xmath102 mev . the corresponding @xmath103 ev . there is no obvious reason , however , to set the initial channel radius to this value , as one looks for a range of radii in which the conclusions of the calculation are reasonably stable , rather than for a single value which produces the lowest @xmath25 . in this analysis , the chosen values for the initial and final channel radii are 5.0 and 4.5 fm , respectively . the sensitivity to the initial channel radius is reflected in the error bars . as deviations of the conclusions could be attributed to the effects of other levels , both the single - level @xmath104-matrix fit for the @xmath5 subthreshold resonance and the @xmath104-matrix fit that includes the three aforementioned subthreshold states are considered . the calculations use the @xmath65-wave approximation for the deuteron , since with the existing data including more partial waves would only introduce more unknown fitting parameters .
an unrestricted single - level @xmath104-matrix best fit to the low - energy experimental data @xcite , which uses the energy @xmath105 of a @xmath5 subthreshold resonance near the 22.2 mev excitation energy , the electron screening potential @xmath10 , and the reduced width amplitudes @xmath64 and @xmath106 as free parameters , is shown in the top panel of fig . [ figs1 ] . the best fit places the subthreshold resonance at @xmath107 mev with @xmath108 mev . the resulting bare astrophysical @xmath3 factor at zero energy is @xmath109 mev b and the electron screening is @xmath110 ev . the @xmath25 of the fit , which uses four parameters and 82 data points , is @xmath111 . the normalized @xmath36 is then @xmath112 . the subthreshold energy of the @xmath5 state is then fixed at the experimental value of @xmath29 kev ( excitation energy 22.2 mev ) and the partial width of the @xmath69 channel is restricted not to exceed the total width of the state . the best fit that treats @xmath10 , @xmath64 and @xmath106 as parameters provides @xmath113 mev b , @xmath114 mev and the electron screening @xmath115 ev . the @xmath116 is only slightly higher than that resulting from the former fit , in which the subthreshold resonance energy is allowed to vary , resulting in a slightly smaller @xmath117 . the ambiguity of the fit due to the sensitivity to the choice of the channel radii is larger for the restricted fit . the larger errors correspond to the fits for which the initial channel radius was reduced , resulting in larger values of the electron screening , a larger @xmath118 partial width and a smaller @xmath119 . to neglect the electron screening , the single - level r matrix is fit to the data above @xmath120 kev @xcite . the calculation for the bare @xmath3 factor is done treating @xmath64 and @xmath106 as the only parameters . the observed @xmath3 factor is then obtained by varying @xmath10 as a single parameter to fit the data below 1 mev @xcite .
the best fit is shown in the bottom panel of fig . [ figs1 ] . the resulting @xmath121 mev b and @xmath122 mev . the @xmath25 of the best fit to 50 data points is 66.6727 , with @xmath123 . the reduced width amplitudes are @xmath124 mev@xmath125 and @xmath126 mev@xmath125 . the corresponding anc is @xmath127 fm@xmath125 . the resulting @xmath129 ev . the sensitivity for the latter fit tracks the same way as for the previous fit : reducing the channel radius strongly increases the electron screening . also , the single - level @xmath104-matrix best fit at a fixed energy of the @xmath5 subthreshold resonance at -80 kev results in different sets of parameters when the electron screening is fitted simultaneously with the other parameters and when the electron screening is neglected below 60 kev . fitting all data below 1 mev with the screening potential @xmath10 included in the set of fitting parameters results in @xmath130 ev , while fitting the data at energies above 60 kev , where supposedly the electron screening can be neglected , and then using @xmath10 as a single fitting parameter to fit all data below 1 mev , results in a screening potential nearly twice as low , @xmath131 ev . the second closest subthreshold resonance to the threshold of @xmath27be ( excitation energy 22.28 mev ) is the @xmath30 state at -2.08 mev ( excitation energy 20.2 mev ) . the @xmath3 factors for the two states are added incoherently . the @xmath104-matrix best fit that treats the @xmath64 s and the anc s as free parameters results in a much too large width of the @xmath30 state . restricting the partial widths of the @xmath66 channels not to exceed the total widths of each state leads to @xmath132 mev b , @xmath133 mev , @xmath134 mev . the best fit parameters are anc @xmath135 fm@xmath125 and @xmath136 mev@xmath125 for the @xmath5 state , and anc @xmath137 fm@xmath125 and @xmath138 mev@xmath125 for the @xmath4 state .
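the two - step extraction of @xmath10 described above ( fit the bare part at high energy , then fit the screening potential alone below 1 mev ) can be illustrated with a toy model ; the bare s factor , the stand - in for the sommerfeld parameter and all numbers here are invented for the sketch and are not the parameters of the actual r - matrix fit :

```python
import math

ETA_COEFF = 2.0  # toy stand-in: eta(E) = ETA_COEFF / sqrt(E), E in MeV

def s_screened(e, s0, slope, ue_ev):
    """Toy screened S factor: linear bare part times the standard
    low-energy enhancement exp(pi * eta * Ue / E)."""
    eta = ETA_COEFF / math.sqrt(e)
    return (s0 + slope * e) * math.exp(math.pi * eta * (ue_ev * 1e-6) / e)

def fit_ue(data, s0, slope, ue_grid):
    """Grid-search the Ue that minimizes chi^2 with the bare part held
    fixed -- step 2 of the two-step procedure described in the text."""
    def chi2(ue):
        return sum((s - s_screened(e, s0, slope, ue)) ** 2 for e, s in data)
    return min(ue_grid, key=chi2)

# pseudo-data generated with Ue = 150 eV, then recovered by the grid search
data = [(e / 100.0, s_screened(e / 100.0, 17.0, 1.0, 150.0))
        for e in range(2, 60, 2)]
ue_hat = fit_ue(data, 17.0, 1.0, [10.0 * k for k in range(31)])
print(ue_hat)  # 150.0
```

the toy also makes the text's point visible : if the bare part ( s0 , slope ) fed to step 2 is biased , the minimizing u_e shifts to compensate , which is exactly why the two procedures yield different screening potentials .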
the @xmath25 of the fit , which uses four parameters and 50 data points , is 65.8849 , with @xmath139 . the electron screening is then fit to the data below 1 mev as a single parameter , resulting in @xmath140 ev . by restricting the @xmath99 of the @xmath30 subthreshold resonance not to exceed half of the total width , as reported in @xcite , the best fit parameters become anc @xmath141 fm@xmath125 and @xmath142 mev@xmath125 for the @xmath5 state , and anc @xmath143 fm@xmath125 and @xmath144 mev@xmath125 for the @xmath4 state . the @xmath25 of the best fit is 66.6689 and @xmath145 . then @xmath146 mev b , @xmath147 mev , @xmath148 mev , and the electron screening @xmath10 is @xmath149 ev . thus , adding the second subthreshold resonance hardly affects the extracted screening potential from that obtained for a single subthreshold state . it also does not reduce the sensitivity to the channel radii . one notes that whether a single @xmath5 subthreshold state is used in the @xmath104-matrix fit , or the two subthreshold states @xmath5 and @xmath4 are considered , the determined alpha partial width of the @xmath5 state is significantly larger than the one obtained in ref . @xcite . in this section , the @xmath104-matrix fit considers three subthreshold resonances : two @xmath42 and one @xmath43 . the @xmath42 states interfere , and the @xmath43 state is added incoherently . the partial widths of the @xmath66 channels for each state are constrained not to exceed the total experimental widths . in the absence of further constraints , the @xmath104-matrix best fit to the low - energy experimental data @xcite for the bare astrophysical factor between 60 kev and @xmath150 mev results in @xmath151 mev b. the best fit parameters are shown in table [ tabpar ] as set 1 . the partial widths of the @xmath66 channels are @xmath152 mev , @xmath153 mev and @xmath154 mev for the subthreshold levels at 22.2 mev , 20.2 mev and 20.1 mev , respectively .
the @xmath25 of the fit , which uses six parameters and 50 data points , is 65.5116 , with @xmath155 . the electron screening is then fit to all data below 1 mev as a single parameter , resulting in @xmath156 ev . the best fit looks identical to panel ( b ) of fig . [ figs1 ] .

table [ tabpar ] ( best fit parameters , set 1 and set 2 ) :

@xmath157   @xmath158   |   c ( fm@xmath125 )   @xmath159   @xmath10 ( ev )   @xmath160   |   c ( fm@xmath125 )   @xmath159   @xmath10 ( ev )   @xmath160
@xmath161   22.2   |   2.2607   0.2420   -   0.6932   |   2.2378   0.2558   -   0.7746
@xmath162   20.2   |   1.7430   0.2009   -   0.4892   |   1.2790   0.0366   -   0.0163
@xmath161   20.1   |   2.4390   0.1146   71.393   0.1462   |   2.1159   0.0925   149.521   0.0952

the sensitivity of the fit to the choice of channel radii for the three - level fit tracks the same way as the sensitivity for the single - level fit , possibly indicating that this problem is not due to the lack of background levels . the @xmath163 for the three - level fit is only slightly smaller than the @xmath25 of the single - level fit , producing a slightly higher @xmath36 . due to the strong sensitivity to the channel radii , it is difficult to precisely determine the partial widths of the subthreshold resonances ; however , it is clear that the fit prefers a partial width of the @xmath5 state at 20.1 mev significantly lower than the experimental total width of 0.88 mev @xcite , and its effect on the @xmath3 factor is small . the @xmath104-matrix best fit to the low - energy experimental data @xcite that fits the anc s , the reduced width amplitudes @xmath64 s and the electron screening simultaneously looks very similar to panel ( a ) of fig . [ figs1 ] . the s@xmath164 of the best fit is @xmath165 mev b and @xmath166 ev . the best fit parameters are shown in table [ tabpar ] as set 2 . the resulting partial widths of the @xmath66 channels are @xmath167 mev , @xmath168 mev and @xmath169 mev for the subthreshold levels at 22.2 mev , 20.2 mev and 20.1 mev , respectively .
the @xmath25 of the fit , which uses 7 parameters and 82 data points , is 117.5280 , with @xmath170 . the electron screening potential @xmath171 ev , resulting from the best fit that varies all parameters simultaneously , is halved when the r matrix is fit to the experimental data above 60 kev in order to neglect the electron screening . a comparison of the best fit parameters when @xmath10 is fit simultaneously and separately , listed in table [ tabpar ] , possibly suggests that the electron screening is not negligible above 60 kev , contrary to ref . @xcite , which states that for deuteron energies larger than 50 kev the enhancement of the cross section due to the screening effect can be neglected . therefore , the parameters we recommend are those resulting from the fit that varies @xmath10 simultaneously with the other parameters . the bare @xmath3 factor resulting from the different @xmath104-matrix fits is larger than many previously reported calculations @xcite and agrees with s@xmath172 mev b obtained by ruprecht et al . @xcite . while the value of the electron screening potential @xmath10 is sensitive to the choice of channel radii , it is smaller than the adiabatic limit . we did not observe a significantly larger @xmath10 , as was reported in ref . @xcite . the astrophysical @xmath3 factor for the low - energy @xmath0li@xmath1he reaction , dominated by broad subthreshold resonances , has been analyzed using single - , two - and three - level @xmath104-matrix fits . for the low - energy r matrix we use the s - wave approximation for the deuteron . the resulting ambiguity due to the choice of channel radii is large in the single - level fit , table [ tab1 ] , as well as in the fit that considers three subthreshold resonances . our goal is to check the possibility of determining the electron screening potential from the low - energy astrophysical @xmath3 factors and to determine the anc s of the subthreshold states and the @xmath173 partial widths .
we find that the parameters depend on the number of subthreshold states involved in the fitting . we consider the fit with three subthreshold states the most reliable . we find that the extracted screening potential depends on the procedure used . if we first fit the @xmath3 factor varying all the parameters at energies above 60 kev , at which according to ref . @xcite the electron screening potential can be neglected , and then fix all the parameters and vary only @xmath10 to fit the astrophysical factor at energies below 1 mev , we obtain @xmath174 ev . however , if we fit the @xmath3 factor at energies below 1 mev varying all the parameters simultaneously , including @xmath10 , we get @xmath175 ev . thus , the result strongly depends on the fitting procedure . we may conclude that the assumption of ref . @xcite that electron screening effects are negligible at energies above 50 kev is not valid , and our recommended value is @xmath175 ev . note that the other fitted parameters are also sensitive to the fitting procedure . our recommended values are the parameters from set 2 in table [ tabpar ] . our obtained @xmath173 width for the @xmath42 subthreshold resonance at 22.2 mev , @xmath176 mev , is higher than the 0.56 mev value obtained by ref . @xcite and is closer to the 0.76 mev value obtained by @xcite . the partial width is lower than the total experimental value of 0.8 mev @xcite , which was confirmed by corresponding theoretical calculations . the @xmath173 partial width for the @xmath61 subthreshold state at 20.2 mev is small and agrees with the @xmath75 reported in ref . @xcite . the partial width of the @xmath42 state at 20.1 mev is much lower than the experimental value of the total width @xcite . our attempt to fit the @xmath3 factor for the @xmath0li@xmath1he reaction using @xmath177 widths from ref .
@xcite failed , and we believe that the main reason is that our @xmath173 partial width for the dominant @xmath42 state at 22.2 mev is significantly higher than the 0.11 mev obtained in @xcite . we conclude that while it is difficult to pinpoint accurately the electron screening potential by fitting the existing direct measurements , the obtained electron screening potential is definitely smaller than the adiabatic limit @xmath178 ev @xcite . this work has been partially supported by the italian ministry of the university under grant rbfr082838 ( firb2008 ) . a. m. m. acknowledges the support of the us department of energy under grants no . de - fg02 - 93er40773 and no . de - fg52 - 09na29467 , and of the nsf under grant no . phy-1415656 .
h. j. assenbaum , k. langanke and c. rolfs , z. phys . a * 327 * , 461 ( 1987 ) .
t. d. shoppa et al . , phys . rev . c * 48 * , 837 ( 1993 ) .
c. spitaleri et al . , phys . rev . c * 63 * , 055801 ( 2001 ) .
s. engstler et al . , z. phys . a * 342 * , 471 ( 1992 ) .
f. c. barker , nucl . phys . a * 707 * , 277 ( 2002 ) .
g. ruprecht et al . , phys . rev . c * 70 * , 025803 ( 2004 ) .
a. j. elwyn et al . , phys . rev . c * 16 * , 1744 ( 1977 ) .
cai dunjiu , zhou enchen , jiang chenglie 1985 ( unpublished ) .
r. risler et al . , nucl . phys . a * 286 * , 115 ( 1977 ) .
j. m. f. jeronimo et al . , nucl . phys . * 38 * , 11 ( 1962 ) .
m. s. golovkov et al . , sov . j. nucl . phys . * 34 * , 480 ( 1981 ) .
c. r. brune , phys . rev . c * 66 * , 044611 ( 2002 ) .
a. m. lane and r. g. thomas , rev . mod . phys . * 30 * , 257 ( 1958 ) .
d. r. tilley , nucl . phys . a * 745 * , 155 ( 2004 ) .
v. m. pugach et al . , bull . * 56 * , 1768 ( 1992 ) .
j. benn et al . , nucl . phys . a * 106 * , 296 ( 1967 ) .
p. r. page , phys . rev . c * 72 * , 054312 ( 2005 ) .
a. m. mukhamedzhanov et al . , phys . rev . c * 81 * , 054314 ( 2010 ) .
a. m. mukhamedzhanov and r. e. tribble , phys . rev . c * 59 * , 3418 ( 1999 ) .
p. descouvemont and d. baye , rep . prog . phys . * 73 * , 036301 ( 2010 ) .
c. r. gould , j. m. joyce , and j. r. boyce , proceedings of the conference on nuclear cross sections and technology , n. b. s. publication 421 , washington dc , 1975 , p. 697 .
d. r. tilley et al . ( unpublished ) ; this is part of the updated version of f. ajzenberg - selove , nucl . phys . a * 490 * , 1 ( 1988 ) .
background : the information about the @xmath0li@xmath1he reaction rates of astrophysical interest can be obtained by extrapolating direct data to lower energies or by indirect methods . the indirect trojan horse method , as well as various r - matrix and polynomial fits to direct data , estimates electron screening energies much larger than the adiabatic limit . calculations that include the subthreshold resonance estimate smaller screening energies .
purpose : obtain the @xmath0li@xmath2he reaction r - matrix parameters and the bare astrophysical @xmath3 factor for the energies relevant to stellar plasmas by fitting the r - matrix formulas for the subthreshold resonances to the @xmath3-factor data above 60 kev .
methods : the bare @xmath3 factor is calculated using the single- and two - level r - matrix formulas for the @xmath4 and @xmath5 subthreshold states closest to the threshold , at @xmath6 , @xmath7 and @xmath8 mev . the electron screening potential @xmath9 is then obtained by fitting it as a single parameter to the low - energy data . the calculations are also done by fitting @xmath9 simultaneously with the other parameters .
results : the low - energy @xmath3 factor is dominated by the @xmath5 subthreshold resonance at @xmath6 mev . the influence of the other two subthreshold states is small . the resulting electron screening is smaller than the adiabatic value . the fits that neglect the electron screening above 60 kev produce a significantly smaller electron screening potential . the calculations show a large ambiguity associated with the choice of the initial channel radius .
conclusions : the r - matrix fits do not show a significantly larger @xmath10 than predicted by atomic physics models . the r - matrix best fit provides @xmath11 ev and @xmath12 mev b.
Demonstrators stand together as they wait for a Republican response to a new city income tax on the wealthy that was approved earlier by the Seattle City Council Monday, July 10, 2017, in Seattle. (Associated Press) SEATTLE (AP) — Seattle's wealthiest would become the only Washington state residents to pay an income tax under legislation approved by the City Council, a measure designed as much to raise revenue as to open a broader discussion about whether the wealthy pay their fair share. The council voted unanimously Monday to impose a 2.25 percent tax on the city's highest earners. Personal income in excess of $250,000 for individuals and in excess of $500,000 for married couples filing joint returns would be taxed. The measure is certain to face a court challenge from opponents who call the tax proposal illegal, unconstitutional and a waste of taxpayer money. City leaders are likely to keep expanding and increasing the tax over time, they said. The council is "going to unanimously adopt an illegal income tax that has no hope of taking effect and will waste taxpayer resources on litigation the city is sure to lose," said Jason Mercier, who directs the center for government reform with the Washington Policy Center. Supporters of the new tax say the city's economic growth and prosperity have created significant wealth and opportunity, but they have also exacerbated the affordable housing crisis that has put a strain on those in lower income brackets. Washington is one of seven states without a personal income tax, and a state law passed in 1984 prohibits a county, city, or city-county from levying a tax on net income. "We have an increasing affordability gap between the haves and have-nots. The middle class is being squeezed as well. And one of the reasons is our outdated, regressive and unfair tax structure," said Councilmember Lisa Herbold, who co-sponsored the measure.
"This is a big step forward in Seattle but it's also hopefully a big step forward for our state," she said before the 9-0 vote. A Seattle tax would be a step toward building political momentum for the state and its other cities and towns to enact progressive tax systems, the city council said in resolution earlier this year endorsing the idea of an income tax. Those who testified in favor of the bill Monday included tech workers who said they were wealthy and favored being taxed to maintain city services and ensure the city remains a place for all incomes. One person called the tax misguided. At a rally Monday before the vote, Seattle Mayor Ed Murray said the city expects a legal challenge. "We welcome that legal challenge. We welcome that fight," he said. If the city wins that battle, "it won't just be Seattle that's doing a progressive income tax," he added. The city estimates the income tax will raise about $140 million a year. Revenue could be used to lower property taxes, pay for public services such as transit and housing, replace any federal money that is lost and meet carbon reduction goals. Voters in the state have rejected personal income tax-related measures at the statewide ballot several times over the past eight decades. They did approve an income tax in 1932, but the state Supreme Court ruled the measure unconstitutional the following year. Mercier said there is decades of case law saying that a graduated income tax is unconstitutional because income is property and under the constitution, property tax has to be taxed uniformly and no more than 1 percent. ||||| SEATTLE (Reuters) - Seattle’s city council unanimously passed a pioneering income tax on the city’s highest earners on Monday, a measure that has become a clarion call for Democrats there even though it is likely to face a swift legal challenge over violating state law. The Space Needle and Mount Rainier are seen on the skyline of Seattle, Washington, U.S. February 11, 2017. 
The measure created a 2.25 percent tax rate on individuals earning above $250,000 and married couples jointly earning above $500,000. The tax will add roughly $140 million in new annual revenue and affect fewer than 20,000 residents in the city of more than 660,000, supporters say. The proposal has become a rallying cry for Democrats and activists in the liberal-leaning city who used local opposition to Republican President Donald Trump to advance long sought-after local policies. Washington is one of seven U.S. states without a tax on personal income, and no city in the state has an income tax. Supporters of the Seattle proposal, including Mayor Ed Murray, say the current tax code unfairly burdens poor and middle class residents because it relies on “regressive” taxes, such as taxes on property and sales transactions. Proponents say new revenue is needed to offset any potential drop in federal funding under the Trump administration. “Our goal is to replace our regressive tax system with a new formula for fairness, while ensuring Seattle stands up to President Trump’s austere budget that cuts transportation, affordable housing, healthcare, and social services,” Murray said by e-mail after the city council’s 9-0 vote. Earlier on Monday, Murray told a cheering crowd at a rally outside city hall he would sign the measure into law on Friday and welcomed a legal challenge. The campaign for the new tax on Seattle’s richest was launched earlier this year by Trump Proof Seattle, a coalition of community activists and residents. Proceeds from the tax could be used to pay for transit services and affordable housing, Murray said. The city has been grappling with soaring housing prices in recent years, fueled in part by the growth of online retailer Amazon.com, which is headquartered in downtown Seattle. Voters in Olympia, the state capital, rejected a similar tax on its highest earners last year.
In 2010, Washington state voters rejected a state income tax at the ballot box. Jason Mercier, a director at the conservative Washington Policy Center, said Seattle’s tax conflicts with state law and court decisions. He said he expects the city to face a swift legal challenge after Murray signs the measure into law. “There’s something fundamentally wrong with elected officials passing a tax they know is against state law and the constitution with the hope of being sued and having a judge overturn prior decisions,” Mercier said. State law blocks a county or city from levying a tax on “net” income, although net income is not defined in the statute. The Washington state Supreme Court has found that income is treated as property under the constitution and therefore has to be taxed uniformly and at no more than 1 percent of its value, Mercier said. ||||| The measure applies a 2.25 percent tax on total income above $250,000 for individuals and above $500,000 for married couples filing their taxes together. A legal challenge is expected. The Seattle City Council unanimously approved an income tax on wealthy residents Monday, a move widely expected to draw a quick legal challenge. The measure applies a 2.25 percent tax on total income above $250,000 for individuals and above $500,000 for married couples filing their taxes together. “Seattle should serve everyone, not just rich folks,” software developer Carissa Knipe told the council before the 9-0 vote, saying she makes more than $170,000 per year. “I would love to be taxed,” the 24-year-old from Ballard testified, drawing applause from a room packed with supporters of the tax. The city estimates the tax would raise about $140 million a year and cost $10 million to $13 million to set up, plus $5 million to $6 million per year to manage and enforce. The council’s finance committee cleared the tax last week and increased the rate from 2 percent to 2.25 percent. 
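For readers checking the arithmetic: as the reports above describe it, the 2.25 percent rate applies only to income above the threshold, not to total income. A small sketch of that calculation (`seattle_income_tax` is a hypothetical helper for illustration, not an official calculator, and ignores any rules beyond a flat rate on the excess):

```python
def seattle_income_tax(total_income, joint=False):
    """Tax owed as described in the measure: 2.25% of income above
    $250,000 (single) or $500,000 (married filing jointly).
    Income at or below the threshold owes nothing."""
    threshold = 500_000 if joint else 250_000
    return round(0.0225 * max(0.0, total_income - threshold), 2)

# A single filer earning $300,000 pays tax only on the $50,000 excess.
print(seattle_income_tax(300_000))              # 1125.0
print(seattle_income_tax(200_000))              # 0.0
print(seattle_income_tax(600_000, joint=True))  # 2250.0
```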
Opponents have argued the tax would violate state law and the state constitution, while proponents have said it would make Seattle’s tax structure more fair and that they want to test the legality of taxing income. Neither Washington nor any city in the state now collects an income tax. That’s partly why the state’s tax system has been called the most regressive in the country, meaning people with less money pay a much greater percentage of what they have. In a statement, Mayor Ed Murray said Seattle is “challenging this state’s antiquated and unsustainable tax structure by passing a progressive income tax,” calling it a “new formula for fairness.” Washington State Republican Party chair Susan Hutchison urged Seattle residents to “forcefully resist the tax” by refusing to pay it. Under the legislation sponsored by Councilmembers Lisa Herbold and Kshama Sawant, money from the tax could be used by the city to lower property taxes and other regressive taxes; address homelessness; provide affordable housing, education and transit; replace federal funding lost through budget cuts; create green jobs and meet carbon-reduction goals; and administer the tax. Voting 5-3, the council approved an amendment by Councilmember M. Lorena González to possibly reduce Seattle’s business-license tax in some way. The recent push for an income tax began in February, when nonprofits and labor unions calling themselves the Trump Proof Seattle coalition launched a campaign. The coalition said the revenue could offset threatened cuts by President Donald Trump’s administration and held town-hall events in every council district to drum up support. A boost came in April, when Murray, during a mayoral candidate forum, said he would send income-tax legislation to the council. Earlier that week, former Mayor Mike McGinn had backed an income tax. Murray later dropped out of the race. 
Legal arguments A lawsuit will likely emerge in the next week or so, after the mayor signs the tax into law, said Jason Mercier, director of the Center for Government Reform at the business-backed Washington Policy Center, which opposes the tax. There are three key legal barriers, according to Mercier: The state constitution says taxes must be uniform within a class of property; a 1984 state law bars cities from taxing net income; and cities must have state authority to enact taxes. Seattle may assert that taxing total income is different from taxing net income, while also seeking a ruling that income isn’t property. “We are greatly disappointed,” Washington Policy Center’s president, Dann Mead Smith, said in a statement after the vote. “As a lifelong Seattle resident, it is frustrating to see the Seattle City Council choose to waste taxpayer dollars on lawsuits for an income tax that is not needed.” The Freedom Foundation, a conservative think tank based in Olympia, announced in a statement that the organization was prepared to challenge the tax in court — “hopefully with a coalition of other freedom-minded organizations.” “No matter who starts out paying it, everyone will eventually suffer,” foundation CEO Tom McCabe said in the statement, warning that the tax would creep down the income ladder. But Sawant insisted her only desire is to “tax the rich,” and Herbold said the legislation has been designed to give the city its best chance in court. “Time for rich to pay their fair share” Supporters of the tax rallied before Monday’s vote, waving signs and cheering. “When we fight, we win!” they chanted with Sawant, who said more public pressure may be needed. “If we need to pack the courts, will you be there with me?” she asked. 
Karen Taylor, 34, was in the crowd holding a sign with a Seattle Times headline dating to the early 1900s: “Why don’t you come through with a little bit of the wealth Seattle has given you, rich man?” The Judkins Park resident said she’s struggling to stay housed. “Whoever goes against this is openly causing suffering,” she said. Inside City Hall, Taylor wound up sitting next to income-tax opponent John Peeples, who was vastly outnumbered. There were grumbles in the chambers when Peeples reminded the council that Washington voters have rejected income-tax measures on multiple occasions. “Yes means yes and no means no,” said the 45-year-old Green Lake resident. The crowd was more appreciative of homeowner Bobby Righi, 79, who said she’s campaigned and voted for one property-tax levy after another despite modest means. “It’s time for the rich to pay their fair share,” said Righi, of Phinney Ridge. Outside City Hall after the vote, calls of “tax the rich” by proponents of the legislation drowned out Hutchison as she spoke against the council’s action. In a KING 5/KUOW poll last month, 66 percent of 900 Seattle adults who took part expressed support for an income tax on the wealthy, while 23 percent were against it and 12 percent weren’t sure. There were about 11,000 individuals in Seattle with earned annual incomes of at least $250,000 in 2015, according to U.S. Census Bureau data. The Seattle tax would cover both earned and unearned income. “Washington has among the most regressive tax systems in the United States,” the legislation states, citing research by the Institute on Taxation and Economic Policy. In 2015, Washington households with incomes below $21,000 paid 16.8 percent of their income in state and local taxes, on average, while households with income above $500,000 paid only 2.4 percent, according to the organization.
– Supporters waved "Tax the Rich" signs Monday as Seattle's city council voted unanimously to do exactly that. By a vote of 9-0, the council approved an income tax that only applies to wealthy residents, with the 2.25% tax starting at income above $250,000 for individuals and above $500,000 for married couples filing joint returns, the Seattle Times reports. The city estimates that the tax will raise around $140 million a year from Seattle's 20,000 or so wealthiest residents. Washington state doesn't have a personal income tax and a 1984 law bans cities and counties in the state from taxing net income, meaning Seattle's move is certain to face legal challenges, the AP reports. Opponents vowed to fight the measure, with Washington State Republican Party Chair Susan Hutchison urging citizens to "forcefully resist" the tax and not pay a penny. Seattle Mayor Ed Murray says he'll sign the measure into law Friday—and he will welcome legal challenges. The goal is to replace the current "regressive" tax system with a fairer one, "while ensuring Seattle stands up to President Trump's austere budget that cuts transportation, affordable housing, health care, and social services," the mayor tells Reuters. (Illinois has ended its long budget standoff with a 32% hike in the income tax rate.)
laser - excited rydberg atoms @xcite stored in large - spacing optical lattices @xcite or magnetic trap arrays @xcite offer unique possibilities for implementing scalable quantum information processors . in such a setup single atoms can be loaded and kept effectively frozen at each lattice site , with long - lived atomic ground states representing qubits or effective spin degrees of freedom . lattice spacings of the order of a few @xmath0 m allow _ single site addressing _ with laser light , and thus individual manipulation and readout of atomic spins . exciting atoms with lasers to high - lying rydberg states and exploiting the strong and long - range dipole - dipole or van der waals interactions between rydberg states provides fast and _ addressable 2-qubit entangling operations _ or effective spin - spin interactions ; recent theoretical proposals have extended rydberg - based protocols towards a single - step , high - fidelity entanglement of a mesoscopic number of atoms @xcite . remarkably , the basic building blocks behind such a setup have been demonstrated recently in the laboratory by several groups @xcite . and @xmath1 give rise to an effective spin degree of freedom . these states are coupled to a rydberg state @xmath2 in two - photon resonance , establishing an eit condition . on the other hand , the control atom has two internal states @xmath3 and @xmath4 . the state @xmath4 can be coherently excited to a rydberg state @xmath5 with rabi frequency @xmath6 , and can be optically pumped into the state @xmath3 for initializing the control qubit . b ) for the toric code , the system atoms are located on the links of a 2d square lattice , with the control qubits in the centre of each plaquette for the interaction @xmath7 and on the sites of the lattice for the interaction @xmath8 . setup required for the implementation of the color code ( c ) , and the @xmath9 lattice gauge theory ( d ) .
] motivated by and building on these new experimental possibilities , we discuss below a rydberg qs for many - body spin models . as a key ingredient of our setup ( see fig . 1 ) we introduce additional auxiliary qubit atoms in the lattice , which play a two - fold role : first , they control and _ mediate _ _ effective @xmath10-body spin interactions _ among a subset of @xmath10 system spins residing in their neighborhood in the lattice . in our scheme this is achieved efficiently by making use of single - site addressability and a parallelized multi - qubit gate , which is based on a combination of strong and long - range rydberg interactions and electromagnetically induced transparency ( eit ) , as suggested recently in ref . second , the auxiliary atoms can be optically pumped , thereby providing a dissipative element , which in combination with rydberg interactions results in _ effective collective dissipative dynamics _ of a set of spins located in the vicinity of the auxiliary particle , which itself eventually factors out from the system spin dynamics . the resulting coherent and dissipative dynamics on the lattice can be represented by , and thus simulates , a master equation $ \dot{\rho } = - i \left [ h , \rho \right ] + { \cal l}\rho$ @xcite , where the hamiltonian @xmath12 is the sum of @xmath10-body interaction terms , involving a quasi - local collection of spins in the lattice . the liouvillian term @xmath13 with @xmath14 in lindblad form governs the dissipative time evolution , where the many - particle quantum jump operators @xmath15 involve products of spin operators in a given neighborhood . the actual dynamics of our system is performed as a stroboscopic sequence of coherent and dissipative operations involving the auxiliary rydberg atoms over time steps @xmath16 , with the master equation emerging as a coarse - grained description of the time evolution .
for purely coherent dynamics governed by the hamiltonian , this is the familiar `` digital qs '' @xcite , where for each time step the evolution operator is implemented via a trotter expansion @xmath17 , with a certain error associated with the non - commutativity of the quasi - local interactions @xmath18 . the concept of stroboscopic time evolution is readily adapted to the dissipative case by interspersing coherent propagation and dissipative time steps @xmath19 , providing an overall simulation of the master equation by sweeping over the whole lattice with our coherent and dissipative operations . many of these steps can in principle be done in a highly parallel way , rendering the time for a simulation step independent of the system size . in our scheme the characteristic energy scale of the many - body interaction terms is essentially the same for two - body , four - body or higher - order interaction terms , and is mainly limited by the fast time scale needed to perform the parallel mesoscopic rydberg gate operations . we note that this is in contrast to the familiar analog simulation of hubbard and spin dynamics of atoms in optical lattices @xcite , where collisional interactions between atoms naturally provide two - body interactions , while higher - order , small effective interactions and constraints are typically derived with perturbative arguments @xcite . before proceeding with the concrete physical implementation of our rydberg qs , we find it convenient to discuss special spin models and master equations of interest , starting with an explicit example : kitaev s toric code . we will discuss the realization of a more complex setup of a three - dimensional @xmath9 lattice gauge theory giving rise to a spin liquid phase in the last section . kitaev s _ toric code _ is a paradigmatic , exactly solvable model , out of a large class of spin models which have recently attracted a lot of interest in the context of studies on topological order and quantum computation .
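The Trotter error mentioned above can be made concrete on a toy example. The sketch below (a generic two-spin illustration, not the lattice model discussed here; the terms `H1`, `H2` and the time `t = 1` are arbitrary choices) shows the characteristic shrinkage of the first-order Trotter error, governed by the commutator of the non-commuting terms, as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting quasi-local terms on two spins: H = ZZ + X1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H1 = np.kron(sz, sz)   # a stand-in for a stabilizer-type term
H2 = np.kron(sx, I2)   # a single-spin term that does not commute with H1
H = H1 + H2

t = 1.0
exact = expm(-1j * H * t)

def trotter(n):
    """n first-order Trotter steps: (e^{-i H1 t/n} e^{-i H2 t/n})^n."""
    step = expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n)
    return np.linalg.matrix_power(step, n)

# The error shrinks roughly as 1/n, reflecting the [H1, H2] commutator.
err = lambda n: np.linalg.norm(trotter(n) - exact, 2)
print(err(10), err(100))
```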
it considers a two - dimensional setup , where the spins are located on the edges of a square lattice @xcite . the hamiltonian @xmath20 is a sum of mutually commuting stabilizer operators @xmath21 and @xmath22 , which describe _ four - body interactions _ between spins located around plaquettes ( @xmath7 ) and vertices ( @xmath8 ) of the square lattice ( see fig . [ fig1]b ) . the ground state ( manifold ) of the hamiltonian consists of all states which are simultaneous eigenstates of all stabilizer operators @xmath7 and @xmath8 with eigenvalues @xmath23 , while the degeneracy of the ground state depends on the boundary conditions and topology of the setup . the excitations are characterized by a violation ( eigenstates with eigenvalue @xmath24 ) of the low - energy constraint for each term @xmath7 ( `` magnetic charge '' ) and @xmath8 ( `` electric charge '' ) in the hamiltonian , and exhibit mutual anyonic statistics . a dissipative dynamics which `` cools '' into the ground state manifold is provided by a liouvillian with quantum jump operators $ c_{p}=\frac{1}{2}\sigma_{j}^{z}\left[1-a_{p}\right ] , \quad c_{s}=\frac{1}{2}\sigma_{j}^{x}\left[1-b_{s}\right ] $ ( eq . [ eq : kitaev_jump_operators ] ) , with @xmath26 and @xmath27 , which act on four spins located around plaquettes @xmath28 and vertices @xmath29 , respectively . the jump operators are readily understood as operators which `` pump '' from @xmath24 into @xmath23 eigenstates of the stabilizer operators : e.g. , due to the projector @xmath30 the jump operator @xmath31 will not act on states in the @xmath23 eigenspace of @xmath32 , while for @xmath24 an arbitrary one of the four spins is flipped by @xmath33 . similar arguments apply to @xmath34 . efficient cooling is achieved by alternating the index @xmath35 of the spin which is flipped . the jump operators then give rise to a random walk of anyonic excitations on the lattice , and whenever two anyons of the same type meet they are annihilated , resulting in a cooling process , see fig .
[ fig3 ] . note that by flipping , for example , the signs @xmath36 on selected plaquettes this dissipative dynamics will prepare excited states . our choice of the jump operators follows the idea of reservoir engineering of interacting many - body systems as discussed in ref . in contrast to alternative schemes for measurement - based state preparation @xcite , here the cooling is part of the time evolution of the system . these ideas can be readily generalized to more complex stabilizer states and to setups in higher dimensions , as in , e.g. , the color code ( see fig . [ fig1]c ) @xcite . lattice sites ( periodic boundary conditions ) . single trajectories for the anyon density @xmath10 over time are shown as solid lines . filled circles represent averages over 1000 trajectories . the initial state for the simulations is the fully polarized , experimentally easily accessible state of all spins down . for perfect gates the energy of the system reaches the ground state energy in the long time limit , while for imperfect gates heating events can occur ( blue solid line ) and a finite density of anyons @xmath10 remains present ( blue circles ) . this finite anyon density corresponds to an effective temperature @xmath37 . the parameter quantifying the gate error was set to @xmath38 ( see app . [ app : errors ] for details ) . ] we now turn to the physical implementation of the digital quantum simulation . the system and auxiliary atoms are stored in a deep optical lattice or magnetic trap arrays with one atom per lattice site , where the motion of the atoms is frozen and the remaining degrees of freedom of the system and auxiliary atoms are effective spin-@xmath39 systems described by the two long - lived ground states @xmath40 and @xmath1 , and @xmath41 and @xmath42 , respectively ( see fig . [ fig1]a ) . we first discuss the elements of the digital qs for a small local setup , and present the extension to the macroscopic lattice system below .
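As a toy check of the stabilizer algebra described above, the following sketch uses one four-spin plaquette plus one overlapping vertex operator (a hypothetical minimal layout, not the full toric-code lattice). It verifies that the two stabilizer types commute, and that a jump operator of the form (1/2) sigma_j^z [1 - A_p] annihilates the +1 eigenspace of the plaquette stabilizer while pumping -1 eigenstates into it:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(paulis):
    """Tensor product of single-spin operators over 4 link spins."""
    return reduce(np.kron, paulis)

# Plaquette stabilizer (product of sigma^x around a plaquette) and a
# vertex stabilizer (product of sigma^z) sharing two of the four links.
A_p = op([sx, sx, sx, sx])
B_s = op([sz, sz, I2, I2])

# Stabilizers commute: the two operators share an even number of links,
# and sigma^x, sigma^z anticommute spin-by-spin.
comm = A_p @ B_s - B_s @ A_p
print(np.allclose(comm, 0))        # True

# Jump operator c = 1/2 * sigma_1^z * (1 - A_p): it annihilates the +1
# eigenspace of A_p and flips the eigenvalue (-1 -> +1) otherwise.
c = 0.5 * op([sz, I2, I2, I2]) @ (np.eye(16) - A_p)
vals, vecs = np.linalg.eigh(A_p)
plus = vecs[:, vals > 0]           # +1 eigenvectors of A_p
print(np.allclose(c @ plus, 0))    # True
```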
to be specific , we will focus on a single plaquette in the example of kitaev s toric code outlined above . the implementation of the four - body spin interaction @xmath43 and the jump operator @xmath44 uses an auxiliary qubit located at the centre of the plaquette ( see fig . [ fig1]b ) . the general approach then consists of three steps ( see fig . [ fig2]b ) : ( i ) we first perform a gate sequence @xmath45 which encodes the information whether the four spins are in a @xmath23 or @xmath24 eigenstate of @xmath7 in the two internal states of the auxiliary atom . ( ii ) in a second step , we apply gate operations which depend on the internal state of the control qubit . due to the previous mapping , these manipulations of the control qubit are equivalent to manipulations on the subspaces with fixed eigenvalues of @xmath7 . ( iii ) finally , the mapping @xmath45 is reversed , and the control qubit is re - initialized incoherently in its internal state @xmath46 by optical pumping . coherently maps the information whether the system spins reside in an eigenstate @xmath47 ( @xmath48 ) corresponding to the eigenvalue @xmath24 ( @xmath23 ) of the many - body interaction @xmath7 onto the internal state @xmath4 ( @xmath3 ) of the control qubit . b ) after the mapping @xmath45 , we apply gate operations which depend on the internal state of the control qubit . finally , the mapping @xmath45 is reversed and the control qubit is incoherently reinitialized in state @xmath3 by optical pumping . at the end of the complete sequence the dynamics of the control qubit factors out . [ fig2 ] ] the mapping @xmath45 is a sequence of three gate operations @xmath49 , where @xmath50 is a standard @xmath51 single - qubit rotation of the control qubit and the parallelized many - body rydberg gate @xcite takes the form ( see fig . [ fig1]a for the required electronic level scheme and app .
[ app : gate ] for a brief summary ) @xmath52 for the control qubit initially prepared in @xmath3 , the gate @xmath45 coherently transfers the control qubit into the state @xmath4 ( @xmath3 ) for any system state @xmath47 ( @xmath48 ) , with @xmath53 denoting the eigenstates of @xmath7 , i.e. , @xmath54 ; see fig . [ fig2 ] . for the _ coherent time evolution _ , the application of a phase shift @xmath55 on the control qubit and the subsequent reversal of the gate , @xmath56 , implements the time evolution according to the many - body interaction @xmath7 , i.e. , @xmath57 . the control qubit returns to its initial state @xmath3 after the complete sequence and therefore effectively factors out from the dynamics of the system spins . for small phase imprints @xmath58 , the mapping reduces to the standard equation for coherent time evolution , @xmath59 , up to corrections of order $ o(\phi^{2})$ . the energy scale for the four - body interaction @xmath7 becomes @xmath60 , with @xmath16 the time required for the implementation of a single time step . for the _ dissipative dynamics _ , on the other hand , we are interested in implementing the jump operator @xmath31 ( see eq . [ eq : kitaev_jump_operators ] ) . for this purpose , after the mapping @xmath45 , we apply a controlled spin flip onto one of the four system spins , @xmath61 , with @xmath62 . as desired , the sequence @xmath63 leaves the low - energy sector @xmath48 invariant , while , with a certain probability , it performs a controlled spin flip on the high - energy sector @xmath47 . however , once a spin is flipped , the auxiliary qubit remains in the state @xmath4 , and optical pumping from @xmath4 to @xmath3 is required to re - initialize the system , guaranteeing that the control qubit again factors out from the system dynamics . the optical pumping constitutes the dissipative element in the system and allows one to remove entropy in order to cool the system .
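The pumping effect of one such dissipative element can be illustrated numerically. The sketch below is a toy model, not the gate-level implementation: it Euler-integrates the Lindblad equation for a single four-spin plaquette with the jump operator (1/2) sigma_1^z [1 - A_p] at unit rate (the rate and step size are arbitrary assumptions), starting from a -1 eigenstate of the plaquette stabilizer, i.e. "one anyon", and cools it into the +1 eigenspace:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

A_p = kron([sx, sx, sx, sx])                           # plaquette stabilizer
c = 0.5 * kron([sz, I2, I2, I2]) @ (np.eye(16) - A_p)  # cooling jump operator

# A -1 eigenstate of A_p: A_p|0000> = |1111>, so (|0000> - |1111>)/sqrt(2)
# has eigenvalue -1 (an "anyon" on this plaquette).
psi = np.zeros(16, dtype=complex)
psi[0], psi[15] = 1.0, -1.0
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Euler integration of the single-jump Lindblad equation (kappa = 1).
dt, steps = 0.01, 1000
cdc = c.conj().T @ c
for _ in range(steps):
    rho = rho + dt * (c @ rho @ c.conj().T - 0.5 * (cdc @ rho + rho @ cdc))

expect_Ap = np.real(np.trace(A_p @ rho))
print(f"<A_p> after cooling: {expect_Ap:.4f}")   # close to +1
```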
the two - qubit gate @xmath64 is implemented in close analogy to the many - body rydberg gate @xmath65 discussed previously . in one dissipative time step the density matrix @xmath66 of the spin system is changed according to @xmath67 with @xmath68\\ b & = & \frac{1}{4}\left[(a_{p}-1)\sigma(a_{p}+1)+(a_{p}+1)\sigma(a_{p}-1)\right].\nonumber \end{aligned}\ ] ] for small phases @xmath69 , the change of the density matrix can be expressed by its time derivative , and the mapping ( [ mapping ] ) reduces to the lindblad form @xmath70+o(\theta^{3})\ ] ] with the jump operators @xmath31 given in eq . ( [ eq : kitaev_jump_operators ] ) and the cooling rate @xmath71 . here , @xmath16 again denotes the time scale for the implementation of a single time step . the above scheme for the implementation of the many - body interaction @xmath7 and the dissipative cooling with @xmath31 can be naturally extended to arbitrary many - body interactions between the system spins surrounding the control atom , e.g. , the @xmath72 interaction terms in the above toric code . gate operations on single system spins allow one to transform @xmath73 into @xmath74 and @xmath33 , while selecting only certain spins to participate in the many - body gate via local addressability gives rise to the identity operator for the non - participating spins . consequently , we immediately obtain the implementation of the general many - body interaction and jump operators @xmath75 \label{toolbox}\ ] ] with @xmath76 . here , @xmath77 and @xmath78 stand for a collection of indices characterizing the position of the local interaction and the interaction type . note that @xmath79 also includes single - particle terms as well as two - body interactions , while the dissipative term with @xmath80 describes simple dephasing instead of cooling . extending the analysis to a _ large lattice system _ with different , possibly non - commuting interaction terms in the hamiltonian , i.e.
, @xmath81 and dissipative dynamics described by a set of jump operators @xmath15 with damping rates @xmath82 , provides a complete toolbox for the quantum simulation of many - body systems . each term is characterized by a phase @xmath83 ( @xmath84 ) written during a single time step , determining its coupling energy @xmath85 and damping rate @xmath86 . for small phases @xmath87 and @xmath88 , the sequential application of the gate operations for all interaction and damping terms reduces to the master equation of lindblad form , @xmath89+\sum_{\beta}\kappa_{\beta}\left[c_{\beta}\rho c_{\beta}^{\dag}-\frac{1}{2}\left(c_{\beta}^{\dag}c_{\beta}\rho+\rho c_{\beta}^{\dag}c_{\beta}\right ) \right].\ ] ] the choice of the different phases during each time step allows for the control of the relative interaction strengths of the different terms , as well as the simulation of inhomogeneous and time - dependent systems . the characteristic energy scales for the interactions @xmath90 and the damping rates @xmath82 are determined by the ratio between the time scale @xmath16 required to perform a single time step and the phases @xmath83 and @xmath84 written during these time steps . it is important to stress that within our setup the interactions are quasi - local and only influence the spins surrounding the control qubit . consequently , the lattice system can be divided into a set of sublattices on which all gate operations needed for a single time step @xmath16 can be carried out in parallel . then , the time scale for a single step @xmath16 becomes independent of the system size and is determined by the product of the number @xmath91 of such sublattices and the duration @xmath92 of all gate operations on one sublattice . in our setup , @xmath92 is mainly limited by the duration of the many - body rydberg gate @xmath93 , which is on the order of a few hundred ns ( see app . [ app : gate ] for details ) .
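for the coherent part , the reduction of the sequential small - phase gate operations to the generator given by the sum of all terms is a first - order trotter decomposition ; a minimal numerical sketch with two random non - commuting hermitian terms ( illustrative , not tied to the specific spin model ) :

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

H1, H2 = random_hermitian(4), random_hermitian(4)
t, n = 1.0, 2000
phi = t / n                      # small phase written during each time step
step = expm(-1j * phi * H1) @ expm(-1j * phi * H2)
U_digital = np.linalg.matrix_power(step, n)
U_exact = expm(-1j * t * (H1 + H2))
error = np.linalg.norm(U_digital - U_exact)
print(error)  # first-order trotter error, of order t^2 / n
```

halving the phase per step ( doubling n at fixed t ) halves the error , which is the sense in which the sequential gates reproduce the full generator .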
for the toric code discussed above , we have to apply the many - body gate twice for every interaction term ( see fig . [ fig2 ] ) , and with @xmath94 we obtain @xmath95 a few @xmath0s , resulting in characteristic energy scales and cooling rates of the order of several hundred khz . it is a crucial aspect of this quantum simulation with rydberg atoms that it can be performed fast and is compatible with current experimental time scales of cold atomic gases @xcite . finally , we would like to point out that imperfect gate operations provide , in leading order , small perturbations to the hamiltonian dynamics and weak dissipative terms ; see app . [ app : errors ] for a detailed discussion and fig . [ fig3 ] for a numerical analysis of the induced errors . however , the thermodynamic properties and dynamical behaviour of a strongly interacting many - body system are in general robust to small perturbations in the hamiltonian and weak coupling to noise fluctuations . consequently , small imperfections in the implementation of the gate operations are tolerable and still allow for efficient quantum simulation . an important aspect of the characterization of the final state is the measurement of correlation functions @xmath96 , where @xmath97 denote local , mutually commuting many - body observables . in our scheme , the observables @xmath97 can be measured via the mapping @xmath45 of the system information onto auxiliary qubits and their subsequent state - selective detection . in analogy to noise correlation measurements in cold atomic gases @xcite , the repeated measurement via such a detection scheme provides the full distribution function for the observables and therefore allows one to determine the correlation function @xmath98 in the system . consequently , in the above discussion of kitaev 's toric code , the necessary string operators characterizing topological order can be detected .
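this detection scheme can be mimicked numerically : the mapping writes the @xmath23 / @xmath24 eigenvalue of a plaquette observable onto the auxiliary qubit , and state - selective detection samples this eigenvalue ; repeating the measurement estimates the expectation value . a small illustrative python sketch , in which born - rule sampling stands in for the repeated state - selective detection :

```python
import numpy as np

rng = np.random.default_rng(3)
Z = np.diag([1.0, -1.0])
A = np.kron(np.kron(Z, Z), np.kron(Z, Z))  # four-body plaquette observable

psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
exact = float(np.real(psi.conj() @ A @ psi))

# probability of detecting the auxiliary qubit flipped = weight in +1 sector
p_plus = float(np.linalg.norm((np.eye(16) + A) / 2 @ psi) ** 2)

# each detection yields +1 or -1; the sample mean estimates <A>
samples = rng.choice([1.0, -1.0], size=200_000, p=[p_plus, 1 - p_plus])
print(abs(samples.mean() - exact))  # shrinks as 1/sqrt(number of repetitions)
```

products of commuting plaquette observables , such as the string operators , can be sampled in the same way from their joint distribution function .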
in the first example , we discussed the implementation of the quantum simulator for the toric code , where the ground state is given by a stabilizer state ; the extension to more complex stabilizer states is straightforward . in the following , we will show that our approach can also be extended to systems with non - commuting terms in the hamiltonian . as an example , we focus on a three - dimensional @xmath9 lattice gauge theory @xcite , and show that dissipative ground state cooling can also be achieved in such complex models . such models have attracted a lot of recent interest in the search for ` exotic ' phases and spin liquids @xcite . the three - dimensional setup consists of spins located on the links of a cubic lattice ( see fig . [ fig1]d ) . the lattice structure for the spins can be viewed as a corner - sharing lattice of octahedra with one site of the cubic lattice in the center of each octahedron . the hamiltonian for the @xmath9 lattice gauge theory takes the form @xmath99 where the first term in the hamiltonian defines a low energy sector consisting of allowed spin configurations with an equal number of up and down spins on each octahedron , i.e. , spin configurations with vanishing total spin @xmath100 on each octahedron . the second term denotes a ring exchange interaction on each plaquette with @xmath101 ; here @xmath102 and the numbering is clockwise around the plaquette . this term flips a state with alternating up and down spins on a plaquette , i.e. , @xmath103 . the last term denotes the so - called rokhsar - kivelson term , which counts the total number of flipable plaquettes @xmath104 ; for a non - flipable plaquette @xmath105 vanishes . while the ring exchange interaction commutes with the spin constraint , ring exchange terms on neighboring plaquettes are non - commuting .
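the action of the ring exchange term on a single plaquette can be made concrete in a four - spin sketch ( python ; the raising and lowering operators are the standard 2x2 matrices , and the operator below is an illustrative transcription of the description in the text , since the exact expression is hidden behind placeholders above ) :

```python
import numpy as np

Sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising: |down> -> |up>, |up> = (1,0)
Sm = Sp.T                                 # lowering operator

def chain(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

# ring exchange on one plaquette, sites numbered clockwise
term = chain([Sp, Sm, Sp, Sm])
B_p = term + term.T

def basis(bits):  # bits: 0 = up, 1 = down, site 0 is the leading tensor factor
    v = np.zeros(16)
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

flip1 = basis([0, 1, 0, 1])    # alternating: up, down, up, down
flip2 = basis([1, 0, 1, 0])    # the other alternating configuration
frozen = basis([0, 0, 1, 1])   # not alternating: not flipable

print(np.allclose(B_p @ flip1, flip2))  # flips one alternating state to the other
print(np.allclose(B_p @ frozen, 0))     # annihilates a non-flipable plaquette
```

this also exhibits the counting property of the rokhsar - kivelson term : on the alternating states the square of the ring exchange acts as the identity , while it vanishes on non - flipable configurations .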
at the rokhsar - kivelson point with @xmath106 , the system becomes exactly solvable @xcite , and it has been proposed that in the regime @xmath107 the ground state is determined by a spin liquid smoothly connected to the rokhsar - kivelson point @xcite : the properties of this spin liquid are given by an artificial ` photon ' mode , gapped excitations carrying an ` electric ' charge ( a violation of the constraint on an octahedron ) , which interact with a 1/r coulomb potential mediated by the artificial photons , and gapped magnetic monopoles . [ caption of fig . [ fig4 ] : a ) the ring exchange interaction flips a flipable plaquette into a different dimer covering . b ) numerical simulation of the cooling into the ground state at the rokhsar - kivelson point with @xmath108 for a system with 4 unit cells ( 12 spins ) . the cooling into the constraint on the octahedra follows in analogy to the cooling of the toric code via the diffusion and annihilation of electric charges on the octahedra . the inset shows the cooling into the equal superposition of all dimer coverings starting from an initial state satisfying the constraint on all octahedra . c ) coherent time evolution from the rokhsar - kivelson point with a linear ramp of the rokhsar - kivelson term @xmath109 : the solid line denotes the exact ground state energy , while the dots represent the digital time evolution during an adiabatic ramp for different phases @xmath110 written during each time step . the difference accounts for errors induced by the trotter expansion due to the non - commuting terms in the hamiltonian . ] in the following , we present the implementation of this hamiltonian within our scheme for digital quantum simulation and demonstrate that dissipative ground state cooling can be achieved at the rokhsar - kivelson point , from where the entire phase diagram is accessible .
the control qubits reside in the center of each octahedron ( on the lattice sites of the 3d cubic lattice ) , controlling the interaction on each octahedron , and in the center of each plaquette for the ring exchange interaction @xmath72 ( see fig . [ fig1]d ) . then , the coherent time evolution of the hamiltonian ( [ ringexchange ] ) can be implemented in analogy to the above discussion by noting that the ring exchange interaction @xmath72 and @xmath111 can be written as a sum of four - body interactions of the form ( [ toolbox ] ) , while the constraint on the octahedra is an ising interaction , see app . [ app : lg ] . next , we discuss the jump operators for the dissipative ground state preparation . the cooling into the subspace with an equal number of up and down spins on each octahedron is obtained by the jump operator @xmath112\sigma_{i}^{x}\left[1-\prod_{j}e^{i\frac{\pi}{6}\sigma_{j}^{z}}\right ] , \label{eq : lattice_gauge_jump_operators}\ ] ] where the product is carried out over the six spins located on the corners of the octahedron ( see fig . [ fig1]d ) . the `` interrogation '' part @xmath113 of the jump operator vanishes if applied to any state with three up and three down spins , while in all other cases a spin is flipped . the cooling then follows in analogy to the cooling in the toric code via the diffusion of the ` electric ' charges . identifying each up spin with a ` dimer ' on the link , all states satisfying the constraints on the octahedra can be viewed as a dimer covering with three dimers meeting at each site of the cubic lattice , see fig . [ fig4]a . within this description , the ground state at the rokhsar - kivelson point is given by the condensation of the dimer coverings @xcite , i.e. , the equal weight superposition of all dimer coverings .
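the statement about the `` interrogation '' part can be verified by enumeration : the product of the six single - spin phase factors is diagonal and depends only on the total magnetization of the octahedron , so it equals one exactly on states with three up and three down spins . a short python check :

```python
import numpy as np
from itertools import product

# the interrogation operator prod_j exp(i*pi/6*sz_j) is diagonal: a basis
# state of the six octahedron spins with magnetization s = n_up - n_down
# acquires the phase exp(i*pi*s/6), so the factor 1 - phase vanishes iff s = 0
amps = {}
for bits in product([1, -1], repeat=6):
    s = sum(bits)
    amps.setdefault(s, abs(1 - np.exp(1j * np.pi * s / 6)))

print(amps[0])                                     # constraint satisfied: 0
print(min(a for s, a in amps.items() if s != 0))   # every violation is flipped
```

the smallest nonzero amplitude occurs for a single excess up or down pair , so no constraint - violating configuration is accidentally dark .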
the condensation of the dimer coverings is then achieved by the jump operator @xmath114b_{p}.\ ] ] the action of this jump operator can be understood by starting with a dimer covering : for a non - flipable plaquette @xmath28 ( a @xmath115 eigenstate of @xmath116 ) , the dimer covering is a dark state of the jump operator @xmath31 , while for a flipable plaquette @xmath28 ( a @xmath117 eigenstate of @xmath116 ) the state is cooled into the equal weight superposition of the original dimer covering and the dimer covering obtained by flipping the plaquette ( i.e. , the @xmath23 eigenstate ) . consequently , the system is cooled into the dark state given by the equal superposition of all dimer coverings that can be reached by flipping different plaquettes . the cooling generated by these jump operators is demonstrated via a numerical simulation for a small system of 4 unit cells , see fig . [ fig4]b . the implementation of the digital quantum simulation provides full control over the spatial and temporal interaction strengths . therefore , there are two possibilities to analyze the phase diagram for arbitrary interaction strengths : ( i ) the possibility to vary the different coupling strengths in time allows one to adiabatically vary the different interaction terms starting from the dissipatively cooled ground state at the rokhsar - kivelson point ; the adiabatic preparation using the trotter expansion is shown in fig . [ fig4]c . ( ii ) on the other hand , the spatial control of the coupling parameters allows us to divide the lattice into a system and a bath . the ground state of the bath is given by the rokhsar - kivelson state , which can be continuously cooled via the dissipative terms , while the system part is sympathetically cooled due to its contact with the bath , in analogy to cooling schemes well known in condensed matter systems .
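on a single flipable plaquette , the two dimer coverings span an effective two - level system on which @xmath116 acts as a spin flip . the exact prefactor multiplying @xmath116 in the jump operator above is hidden behind a placeholder , so the sketch below uses a hypothetical operator with the stated dark - state structure , a diagonal spin flip times ( 1 - @xmath116 ) / 2 :

```python
import numpy as np

# effective two-level description of one flipable plaquette: the basis states
# are the two dimer coverings connected by the flip, and b_p swaps them
b_p = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

# hypothetical jump operator with the dark-state structure described above
c = sz @ (np.eye(2) - b_p) / 2

plus = np.array([1.0, 1.0]) / np.sqrt(2)    # equal-weight superposition
minus = np.array([1.0, -1.0]) / np.sqrt(2)

print(np.allclose(c @ plus, 0))      # the condensate of coverings is dark
print(np.allclose(c @ minus, plus))  # the orthogonal state is pumped into it
```

repeating this on every plaquette drives the system into the state that is dark for all jump operators , the equal superposition of all reachable dimer coverings .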
support by the deutsche forschungsgemeinschaft ( dfg ) within sfb / trr 21 , the institute for quantum optics and quantum information , the austrian science foundation ( fwf ) through sfb foqus , and the eu projects scala and namequam is acknowledged . in the following , we discuss the influence of a gate error on the dynamics of the system . for simplicity , we illustrate the general behaviour for an error in the many - body gate @xmath65 during the coherent time evolution of the many - body interaction @xmath7 . the imperfect many - body gate operation can be written as @xmath118 where the perfect gate @xmath65 is recovered for @xmath119 and the operator @xmath120 acts on the system spins surrounding the control atom . this form of the error is motivated by the specific implementation of the gate @xcite ; however , different errors in the many - body and single - particle gates lead to similar phenomena . note that we have introduced @xmath110 into the definition of the error in order to perform a consistent expansion . for the _ coherent _ time evolution , the imperfect gate gives rise to a finite amplitude for the control qubit to end up in the state @xmath4 . consequently , optical pumping of the control qubit is required to reinitialize the system . the gate operations on a single plaquette then give rise to the mapping of the density matrix onto @xmath121 with ( @xmath122 ) @xmath123\\ & \approx & \exp\left[i\phi\left(a_{p}+q\right)\right]-\frac{\phi^{2}}{2}q^{2}\\ d & = & \frac{1}{2}\left[\left(\theta^{2}-{\bf 1}\right)\cos\phi+\left(\theta a_{p}-a_{p}\theta\right)i\sin\phi\right]\\ & \approx & -i\phi q.\end{aligned}\ ] ] the last equations hold to third order in the small parameter @xmath110 . consequently , the optical pumping has no influence in leading order , and the system is well described by a hamiltonian evolution with the modified hamiltonian @xmath124 $ ] .
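the expansion above implies that the mapping of the density matrix is trace preserving up to third order in @xmath110 : the two kraus - like operators satisfy the completeness relation with a defect scaling as the third power of the phase . a numerical sketch ( random hermitian stand - in for the gate - error operator , diagonal stand - in with eigenvalues plus and minus one for the plaquette operator ; illustrative only ) :

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = np.diag(rng.choice([1.0, -1.0], size=8))   # stand-in for the plaquette operator
Q = rng.normal(size=(8, 8))
Q = (Q + Q.T) / 2                              # hermitian gate-error operator

defects = []
for phi in (0.1, 0.05, 0.025):
    E = expm(1j * phi * (A + Q)) - phi**2 / 2 * Q @ Q
    D = -1j * phi * Q
    defects.append(np.linalg.norm(E.conj().T @ E + D.conj().T @ D - np.eye(8)))
print(defects)  # shrinks roughly as phi^3 when phi is halved
```

the quadratic contributions of the two operators cancel , which is why the dissipative dephasing appears only in second order while trace preservation is violated only in third .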
the characteristic energy scale of the correction is again given by @xmath125 , and consequently describes a small perturbation if @xmath126 . in a second - order expansion in the small parameter @xmath110 , the mapping of the density matrix reduces to @xmath127-\frac{\phi^{2}}{2}\left[h,\left[h,\rho\right]\right]+\frac{\phi^{2}}{2}\left(2q\rho q-\left\ { q^{2},\rho\right\ } \right),\ ] ] with @xmath128 $ ] . the first two terms on the right - hand side describe the coherent evolution of the system with the evolution operator @xmath129 consistently expanded up to second order , while the last term takes the standard lindblad form for a dissipative coupling with the jump operator @xmath130 describing dephasing with the rate @xmath131 . the dissipative terms appear only in second order and are therefore strongly suppressed for nearly perfect gates with @xmath126 . in the following , we briefly summarise the main properties and requirements of the many - body rydberg gate @xmath65 introduced in ref . @xcite . the internal level structure of the control atom and the surrounding ensemble atoms is depicted in fig . [ fig1]a . the underlying physical mechanism of the gate operation ( [ eq : gate ] ) is a conditional raman transfer of all ensemble atoms between their logical internal states @xmath132 and @xmath133 , which , depending on the internal state @xmath46 or @xmath134 of the control qubit , is either inhibited or enabled . the gate is realised by the following three laser pulses : ( i ) a first state - selective @xmath135-pulse acting on the control atom changes the ground state @xmath134 into the rydberg state @xmath136 . ( ii ) during the whole gate operation , a strong coupling laser of rabi frequency @xmath137 constantly acts on all ensemble atoms and off - resonantly couples the rydberg level @xmath138 to the intermediate level @xmath139 with a detuning @xmath140 .
its frequency is chosen such that it is in two - photon resonance with the two raman laser beams of rabi frequency @xmath141 ( see fig . [ fig1 ] ) , thereby establishing a condition known as electromagnetically induced transparency ( eit ) @xcite . in consequence , when the raman laser pulses are applied , and provided the control atom resides in state @xmath46 , this eit condition effectively inhibits the coupling of the raman lasers to the intermediate state @xmath139 , and the raman transfer is blocked . if the control atom was excited to the rydberg state @xmath136 in step ( i ) , the large rydberg - rydberg interaction energy shift ( dipole blockade ) lifts the eit condition for the ensemble atoms and the raman transfer takes place . ( iii ) finally , the control atom is transferred from state @xmath136 back to @xmath134 via a second @xmath135-pulse . the total time @xmath142 required for the gate is mainly limited by the duration of the raman pulse , resulting in @xmath143 . for @xmath144 , experimentally realistic parameters are a principal quantum number of @xmath145 for the @xmath146 state , @xmath147 , @xmath148 , and @xmath149 , resulting in a gate speed of @xmath150 . for these parameters the required lattice spacing of the optical lattice , which is essentially given by the distance at which the dimensionless interaction @xmath151 equals one , is @xmath152 . this lattice spacing can be achieved using standard techniques such as tuning the angle between the lattice laser beams , and it is also larger than the le roy radius , i.e. , the interaction between the rydberg atoms is well described by a van der waals interaction . the hamiltonian giving rise to the constraint for the spins on the octahedra can be expressed as a sum of ising interactions , @xmath153 which allows for an efficient implementation using the general toolbox for quantum simulation .
the implementation of the jump operators for the constraint is obtained in analogy to the general jump operators , with the many - body gate @xmath65 replaced by the gate @xmath154 . on the other hand , the ring exchange interaction can be written as a sum of commuting four - body interactions @xmath155 likewise , the rokhsar - kivelson term can be decomposed into @xmath156 where @xmath157 is the identity matrix . consequently , the coherent time evolution follows again from the general toolbox , while the jump operators for cooling into the ground state at the rokhsar - kivelson point effectively cool into the zero - eigenvalue eigenstate of the operators @xmath158b_{p } = \frac{1}{16}\sum\limits_{j=1}^{16}c_p^{(j ) } = \frac{1}{16}\left[\sum\limits_{j=1}^8b_p^{(j)}-\sum\limits_{j=1}^8n_p^{(j)}\right].\ ] ] this can be achieved by replacing the gate @xmath93 with @xmath159\nonumber\\ & = & \prod_{j=1}^{16 } u_{c}(\pi/2)^{-1}u_ju_{c}(\pi/2)\exp\left(i\pi/32\sigma_{c}^{z}\right)\nonumber\\&&\phantom{\prod}u_{c}(\pi/2)^{-1 } u_ju_{c}(\pi/2),\end{aligned}\ ] ] with @xmath160 . this gate operation leaves states with eigenvalue @xmath161 of @xmath116 invariant , while states with eigenvalue @xmath24 pick up a phase of @xmath135 . it can be implemented as a product of many - body gates which derive directly from the standard gate @xmath65 in combination with single - spin rotations .
following feynman , and as elaborated on by lloyd , a universal quantum simulator ( qs ) is a controlled quantum device which reproduces the dynamics of any other many - particle quantum system with short - range interactions . this dynamics can refer to both coherent hamiltonian and dissipative open - system evolution . we investigate how laser - excited rydberg atoms in large - spacing optical or magnetic lattices can provide an efficient implementation of a universal qs for spin models involving ( high - order ) n - body interactions . this includes the simulation of hamiltonians of exotic spin models involving n - particle constraints , such as the kitaev toric code , color code , and lattice gauge theories with spin liquid phases . in addition , it provides the ingredients for the dissipative preparation of entangled states based on engineering n - particle reservoir couplings . the key basic building blocks of our architecture are efficient and high - fidelity n - qubit entangling gates via auxiliary rydberg atoms , including a possible dissipative time step via optical pumping . this allows one to mimic the time evolution of the system by a sequence of fast , parallel , and high - fidelity n - particle coherent and dissipative rydberg gates .
consider a data set where each data point is a vector of @xmath1 quantitative variables and one categorical variable with @xmath2 levels . ideally , several of the quantitative variables are real - valued . according to the categorical variable , we will view the data set as @xmath2 not necessarily distinct collections of points in @xmath3 , referred to as point clouds . additionally , assuming the data were randomly sampled , we will view each of these @xmath2 point clouds as a representative subset of its respective space . of interest is whether or not the spaces corresponding to these @xmath2 point clouds have measurably different shapes . but what does shape even mean if @xmath1 is large ? topology , in particular algebraic topology , is an area of mathematics that can be used to qualitatively measure the shape of a point cloud . for a given point cloud , we construct an infinite family of simplicial complexes that vary according to a real - valued distance parameter . each complex in the family is an object that inherits a shape from the point cloud , and the topological tool known as homology can be used to detect this shape . since any single complex within the infinite family corresponds to a choice of parameter value , we might ask which parameter value , if any , `` best '' captures the shape of the point cloud . persistence homology is a study of the homological features that persist over long intervals of the distance parameter , thus sidestepping the search for a best - choice parameter value . hence , persistence homology can be used to determine if the @xmath2 point clouds in the data set have different shapes . while persistence homology allows comparisons of shape across the @xmath2 sampled point clouds , can any resulting sample differences then be generalized to the corresponding spaces at large ?
the answer is yes , but as random sampling unavoidably introduces variability , a method is needed which can distinguish `` true '' differences in shape between the spaces from `` artificial '' differences between the sampled point clouds . statistical hypothesis testing is an inferential method often implemented to assess whether or not randomly sampled data provide sufficient evidence of a difference , with respect to some characteristic , between two or more populations , which we have been and will continue to loosely refer to as spaces . in the 2013 results of k. turner and a. robinson , such an assessment is conducted on @xmath4 spaces using a specific type of hypothesis testing procedure known as a permutation test , where the characteristic of interest is shape @xcite . in this procedure , the randomly sampled data are numerous point clouds from both spaces , and the shape of a point cloud is measured via persistence homology . in this paper we extend this procedure to three or more spaces , @xmath5 . the remainder of the paper is organized as follows . in section [ hom ] we provide definitions and examples of the vietoris - rips complex of a point cloud , homology groups , persistence homology , and persistence diagrams . in section [ htandtda2spaces ] we describe the permutation test of robinson and turner . in section [ htandtda3spaces ] we propose an extension of the permutation test to three or more groups . in section [ htandtda3spacessimstudy ] we present the results of a large - scale simulation study , incorporating various measurement errors and sample sizes , that validates our proposed extension . finally , in section [ cardioapp ] we apply our extension to a cardiotocography data set and find significant evidence of differences in shape , as measured by persistence homology , between the spaces corresponding to the normal , suspect , and pathologic health groups .
before defining the persistence homology of a point cloud , we associate to the point cloud a nested family of abstract simplicial complexes . a thorough explanation of simplicial complexes and abstract simplicial complexes is available in many sources @xcite . here we motivate the definition of an abstract simplicial complex with a brief geometric introduction to simplicial complexes , followed by the definition of the vietoris - rips complex , which is the abstract simplicial complex used herein . geometrically , a 0-simplex is a point , a 1-simplex is a line segment , a 2-simplex is a triangular subset of a plane , a 3-simplex is a solid tetrahedron , and an @xmath6-simplex is the @xmath6-dimensional analogue of these convex sets . observe that the boundary of an @xmath6-simplex , @xmath7 , is a collection of @xmath8-simplices ; these boundary simplices are called faces of @xmath7 . a _ simplicial complex _ is a collection of simplices in @xmath9 that satisfy certain subset and intersection properties specifying how simplices can be put together to create a larger structure . more precisely , a simplicial complex is a finite collection of simplices , @xmath10 , such that ( 1 ) if @xmath11 and @xmath12 is a face of @xmath7 then @xmath13 , and ( 2 ) given any two simplices @xmath14 , then @xmath15 is either the empty set or a face of both @xmath16 and @xmath17 . more generally , and without relying on geometry , an _ abstract simplicial complex _ is a finite collection of sets , @xmath18 , such that if @xmath19 and @xmath20 , then @xmath21 . it is well known that a finite abstract simplicial complex can be geometrically realized as a simplicial complex in @xmath22 for @xmath23 sufficiently large . the _ vietoris - rips complex _ , denoted @xmath24 , is an abstract simplicial complex associated to a point cloud @xmath25 for a fixed radius value @xmath26 . the elements of @xmath25 form the 0-simplices or vertex set of @xmath24 .
a simplex of @xmath24 is a finite subset @xmath27 of @xmath25 such that the diameter of @xmath27 is less than @xmath28 . a simplex @xmath29 with @xmath30 elements is called a @xmath31-simplex of @xmath25 . thus , a 1-simplex corresponds to a two - element set ( viewed geometrically as the endpoints of a line segment ) , a 2-simplex corresponds to a three - element set ( viewed as the vertices of a triangle ) , and so on . observe that if @xmath27 is a @xmath30-simplex , then every subset of @xmath27 is a simplex of @xmath25 , as the diameter of a subset of @xmath27 can be no larger than the diameter of @xmath27 . hence the vietoris - rips complex satisfies the definition of an abstract simplicial complex . as an example , consider the set , @xmath25 , of five points in the plane as pictured in figures [ data5 ] and [ vrexamples ] . each point in @xmath25 is a 0-simplex , each line segment drawn between points is a 1-simplex , and each shaded triangle a 2-simplex . as the parameter @xmath28 increases beyond @xmath32 , the vietoris - rips complex will contain additional 2-simplices , a 3-simplex at @xmath33 , and eventually a 4-simplex when @xmath34 is equal to the diameter of @xmath25 . note that the abstract simplicial complex @xmath35 in figure [ vrexamples ] can not be geometrically realized in @xmath36 since it contains pairs of 2-simplices whose intersection is not a face of either simplex . [ caption of figure [ vrexamples ] : the complexes @xmath35 and @xmath37 for a five - point data set , @xmath25 . ] we note that vietoris - rips complexes for increasing radius values always form a nested family of simplicial complexes associated to @xmath25 , that is , the complexes satisfy @xmath38 this nested feature of the complexes , along with the functorial nature of homology , is what gives rise to the concept of persistence to be defined below .
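the construction translates directly into code : a subset of the point cloud is a simplex precisely when its diameter , i.e. its largest pairwise distance , is less than @xmath28 . the python sketch below uses made - up coordinates for a five - point cloud , since the figure 's coordinates are not given :

```python
import numpy as np
from itertools import combinations

def vietoris_rips(points, epsilon, max_dim=2):
    """all simplices (as index tuples) whose diameter is less than epsilon."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]
    for k in range(1, max_dim + 1):
        for s in combinations(range(n), k + 1):
            if max(dist[i, j] for i, j in combinations(s, 2)) < epsilon:
                simplices.append(s)
    return simplices

# hypothetical five-point cloud in the plane
cloud = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9), (2.0, 0.5), (1.6, 1.3)]
complex_ = vietoris_rips(cloud, epsilon=1.2)
edges = [s for s in complex_ if len(s) == 2]
triangles = [s for s in complex_ if len(s) == 3]
print(len(edges), triangles)  # six 1-simplices and one 2-simplex at this radius
```

because the membership test only compares the diameter to @xmath28 , shrinking the radius can only remove simplices , which is exactly the nestedness used to define persistence .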
although the vietoris - rips complex is relatively straightforward to define and calculate , it can be computationally expensive when used with large point clouds . there are economical alternatives to the vietoris - rips complex , such as the lazy witness complex introduced in @xcite . persistence homology can be applied using any nested family of complexes indexed by some parameter . the homology of a simplicial complex @xmath10 is an algebraic measurement of how the @xmath6-simplices are attached to the @xmath8-simplices within @xmath10 . below we define some technical machinery ( chains , boundary maps , and cycles ) used to define homology groups , followed by some example calculations . the _ @xmath0-chains _ of a simplicial complex @xmath10 , denoted @xmath39 , form the group of formal linear combinations of the @xmath0-simplices of @xmath10 with coefficients from @xmath40 . ( more general definitions of homology with ring coefficients can be found in the standard algebraic topology texts @xcite . ) since @xmath40 is a field , the @xmath0-chains of @xmath10 form a @xmath40-vector space with basis the @xmath0-simplices of @xmath10 . for example , the chains of @xmath41 are the vector spaces @xmath42 , @xmath43 , and @xmath44 . the _ boundary map _ , denoted @xmath45 , identifies each @xmath0-chain with its boundary , a @xmath46-chain . each boundary map , @xmath47 , is a homomorphism , and in the case of @xmath40 coefficients , as considered here , these maps are linear transformations . notice that @xmath48 is the zero map , as the boundary of a boundary is empty . this fundamental property of chain complexes ensures that the image of @xmath49 is a normal subgroup of the kernel of @xmath50 . the collective sequence of boundary maps and chains , as shown below , is called a _ chain complex _ .
@xmath51 [ table : buw-2vs3 ] the two trends that were apparent in the corresponding omnibus permutation tests for this simulation scenario are also readily apparent in all three of these post - hoc tests . specifically , as sample size increases for a fixed measurement error , the percentage of significant post - hoc tests tends to increase . similarly , as measurement error increases for a fixed sample size , the percentage of significant post - hoc tests tends to decrease . a cell - by - cell comparison of the percentages among the three post - hoc tests , however , reveals an additional interesting trend . the percentages for the post - hoc test between the circle and the three - wedge are almost uniformly larger than or equal to the corresponding percentages between the circle and the two - wedge , which are in turn almost uniformly larger than or equal to the corresponding percentages between the two - wedge and the three - wedge . this too is mostly intuitive and desirable since , among the three spaces , the unit circle and the three - wedge are the most different with respect to shape . we are uncertain why the post - hoc test appears more adept at recognizing measurable differences in shape between the circle and the two - wedge than between the two - wedge and the three - wedge . regardless , all three of these trends , when coupled with the volume of entries in all three tables which are at or near 100% , indicate that the proposed post - hoc tests successfully `` identified '' measurable differences in shape between each of the three possible pairings of these three spaces . such findings additionally corroborate the legitimacy of the two - space permutation test . in summary , the major findings of the simulation study are three - fold .
first and foremost , these simulations demonstrate that the proposed omnibus permutation testing procedure " successfully " identified measurable differences in shape between at least two of the three spaces . second , these simulations confirm that the post - hoc testing component " successfully " identified measurable differences in shape between any two spaces ; such findings corroborate the legitimacy of the two - space permutation testing procedure . third and finally , these simulations reveal that these hypothesis testing procedures , for any number of spaces , require balanced sample sizes . we apply our methods to the cardiotocography ( ctg ) data set that is freely available from the university of california at irvine machine learning repository . the ctg data set includes 23 variables for each of 2126 subjects . we apply our methods on a focused subset of four quantitative variables , including fetal heart rate baseline in beats per minute , number of accelerations per second , number of uterine contractions per second , and number of light decelerations per second . these four quantitative variables are chosen because they are seemingly independent , and we want to consider no more than four such variables . the categorical variable of interest is health status , which has three levels : normal , suspect , and pathologic . the question of interest is whether or not the four - dimensional space created by the quantitative variables has a measurably different shape across the three health status groups . to answer this question , we use the omnibus permutation testing procedure developed in section [ htandtda3spacestestingprocedure ] , measuring shape via one dimensional persistence homology . before this procedure can be performed , however , balanced samples from the three health status groups must be obtained . of the 2126 sampled subjects , 1655 are of normal health status , 294 of suspect health status , and 176 of pathologic health status .
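the omnibus procedure for three or more groups can be sketched as follows . this is a hypothetical stand - in , not the paper's code : it uses a user - supplied distance between per - cloud scalar summaries in place of a true bottleneck or wasserstein distance between persistence diagrams , and the function names ( `joint_loss` , `omnibus_p` ) are our own . the statistic is a robinson - turner - style joint loss , small when the diagrams assigned to a common group are mutually similar :

```python
import random

def joint_loss(groups, dist):
    """Sum of within-group pairwise distances: small when items assigned
    to the same group are similar to one another."""
    return sum(dist(g[i], g[j]) for g in groups
               for i in range(len(g)) for j in range(i + 1, len(g)))

def omnibus_p(groups, dist, n_perm=10000, seed=0):
    """Approximate omnibus permutation p-value for 3+ equal-sized groups:
    reassign the pooled items to groups at random and count how often the
    permuted joint loss is at least as small as the observed one."""
    rng = random.Random(seed)
    pooled = [x for g in groups for x in g]
    k, m = len(groups), len(groups[0])
    obs = joint_loss(groups, dist)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm = [pooled[i * m:(i + 1) * m] for i in range(k)]
        hits += joint_loss(perm, dist) <= obs
    return hits / n_perm

# toy stand-ins for diagram summaries: three well-separated groups of four
g1, g2, g3 = [0.1, 0.12, 0.11, 0.13], [0.5, 0.52, 0.49, 0.51], [0.9, 0.88, 0.91, 0.92]
p = omnibus_p([g1, g2, g3], lambda a, b: abs(a - b))
print(p)  # small: the observed grouping is far tighter than random reassignments
```

a small p - value leads to rejecting the null of no shape differences , after which pairwise post - hoc tests locate the differing groups .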
to obtain balanced samples across the three health status groups , we select a random sample of size 176 from both the normal and suspect health status groups . however , since the normal health status group is so large , we first test the representativeness of our sample of 176 subjects using an omnibus permutation test . to that end , the 1655 normal health status subjects were randomly partitioned into nine " spaces " of 176 , leaving 71 discarded subjects . the 176 subjects in each " space " were then randomly partitioned into four four - dimensional point clouds of 44 subjects each . the omnibus permutation test was then performed using these 36 point clouds . the corresponding null hypothesis asserted that there were no measurable differences in shape between the nine " spaces " . the resulting approximate permutation test p - value was based on 100,000 random assignments of the 36 point clouds to the nine " spaces " . this entire process was then repeated 149 more times , where each such trial was based on a different initial random partition of the 1655 normal health status subjects into the nine " spaces " . of the 150 trials , 24 produced approximate permutation test p - values under 0.1 , which suggests that in those trials there is evidence of measurable differences in shape between the nine " spaces " . in other words , in those 24 trials , there is evidence that the nine " spaces " may not be equally representative of the normal health status group , with respect to shape . hence , we select our random sample of 176 normal health status subjects by randomly selecting one of the nine " spaces " from the " more representative " 126 trials . with balanced samples of 176 subjects from each of the three health status groups , we can now utilize the omnibus permutation testing procedure to address the original question of interest .
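the balanced partitioning step above ( 1655 subjects into nine " spaces " of 176 , each split into four point clouds of 44 , discarding the remainder ) can be sketched as below ; the function and variable names are our own illustrative choices :

```python
import random

def partition_spaces(indices, n_spaces=9, space_size=176, cloud_size=44, seed=0):
    """Randomly split subject indices into `n_spaces` "spaces", then split
    each space into point clouds of `cloud_size`; leftover subjects are
    discarded so that all spaces and clouds are balanced."""
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    spaces = [shuffled[i * space_size:(i + 1) * space_size] for i in range(n_spaces)]
    clouds = [[s[j * cloud_size:(j + 1) * cloud_size]
               for j in range(space_size // cloud_size)] for s in spaces]
    discarded = len(indices) - n_spaces * space_size
    return clouds, discarded

clouds, discarded = partition_spaces(list(range(1655)))
print(len(clouds), len(clouds[0]), len(clouds[0][0]), discarded)  # 9 4 44 71
```

the counts match the text : nine " spaces " of four clouds of 44 subjects , with 71 subjects discarded .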
to that end , within each of the three health status groups , the 176 subjects were randomly partitioned into four point clouds of 44 subjects each . the omnibus permutation test was then performed using the persistence diagrams corresponding to these 12 point clouds . the corresponding null hypothesis asserted that there were no measurable differences in shape between the three spaces , i.e. health status groups . the resulting permutation test p - value of approximately 0.003 was based on all 34650 possible assignments of the 12 persistence diagrams to the three spaces . given that the p - value is so small , we reject the null hypothesis and conclude that there are measurable differences in shape between at least two of the three spaces . to determine the source(s ) of the difference , we ultimately performed three post - hoc tests , one for each possible pairing of the three health status groups . for each such test , the null hypothesis asserted that there were no measurable differences in shape between the two spaces of the respective health status groups . all three resulting permutation test p - values were based on all 70 possible assignments of the 8 corresponding persistence diagrams to the two spaces . for the normal and suspect health status groups , the permutation test p - value was approximately 0.029 ; for the normal and pathologic health status groups , the permutation test p - value was also approximately 0.029 ; for the suspect and pathologic health status groups , the permutation test p - value was approximately 0.257 . hence , there is significant evidence of measurable differences in shape between the normal and suspect health status groups , and between the normal and pathologic health status groups , but insignificant evidence of such differences between the suspect and pathologic health status groups . 
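the assignment counts quoted above ( 34650 three - group assignments of 12 diagrams into groups of four , and 70 two - group assignments of 8 diagrams ) follow from multinomial coefficients , and the exact post - hoc p - value can be obtained by full enumeration . the sketch below is our own illustration : it uses a scalar summary per diagram and a difference - of - means statistic as stand - ins for the diagram - based statistic used in the paper :

```python
from itertools import combinations
from math import comb, factorial

# the counts reported in the text
assert factorial(12) // (factorial(4) ** 3) == 34650  # 12 diagrams, three groups of 4
assert comb(8, 4) == 70                               # 8 diagrams, two groups of 4

def exact_two_group_p(values, observed_stat, stat):
    """Exact two-group permutation p-value: enumerate every equal split of
    the items and count statistics at least as large as the observed one."""
    n = len(values)
    hits = total = 0
    for left in combinations(range(n), n // 2):
        right = [i for i in range(n) if i not in left]
        s = stat([values[i] for i in left], [values[i] for i in right])
        hits += s >= observed_stat
        total += 1
    return hits / total

# toy per-diagram summaries, first four from one group, last four from another
vals = [0.1, 0.2, 0.15, 0.12, 0.9, 0.85, 0.95, 0.8]
stat = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
obs = stat(vals[:4], vals[4:])
p = exact_two_group_p(vals, obs, stat)
print(p)  # 2/70 ≈ 0.0286 — the granularity behind reported values like 0.029
```

with only 70 possible splits , the smallest attainable two - sided p - value is 2/70 , which is why the reported post - hoc p - values of approximately 0.029 sit exactly at that resolution .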
for point clouds sampled from three or more spaces , we propose using an omnibus permutation test on the corresponding persistence diagrams to determine whether statistically significant evidence exists of measurable differences in shape between any of the respective spaces . if such differences do exist , we then propose using a number of post - hoc ( i.e. two - space ) permutation tests to identify the specific pairwise differences . to validate this proposed procedure , we conducted a large - scale simulation study using samples of point clouds from three distinct groups . various combinations of spaces , sample sizes and measurement errors were considered in the simulation study , and for each combination the percentage of @xmath0-values below an alpha - level of 0.05 was provided . the results of the simulation study clearly suggest that the procedure works , but additionally reveal that the method is neither scale invariant nor insensitive to imbalanced sample sizes across point clouds . finally , accounting for sample size and scale , we applied our omnibus testing procedure to a cardiotocography data set and found statistically significant evidence of measurable differences in shape between the normal , suspect and pathologic health status groups . while the proposed omnibus testing procedure is applicable in any homological dimension , the simulation study and ctg application presented in this paper focus exclusively on homological dimension one . hence , to validate the effectiveness of the method in other homological dimensions , and to assess the consistency of the method across various dimensions , additional simulation studies can be performed .
we extend the work of robinson and turner to use hypothesis testing with persistence homology to test for measurable differences in shape between point clouds from three or more groups . using samples of point clouds from three distinct groups , we conduct a large - scale simulation study to validate our proposed extension . we consider various combinations of groups , sample sizes and measurement errors in the simulation study , providing for each combination the percentage of @xmath0-values below an alpha - level of 0.05 . additionally , we apply our method to a cardiotocography data set and find statistically significant evidence of measurable differences in shape between normal , suspect and pathologic health status groups .
however , its administration is associated with problems , such as the adjustments needed to maintain an effective blood concentration . in addition , dietary restrictions are necessary , because vitamin k levels influence warfarin s efficacy . in recent years , novel oral anticoagulants ( noacs ) have demonstrated encouraging outcomes , with effects comparable to those of warfarin . noacs are increasingly being used for primary and secondary prevention with respect to cardiogenic embolisms . with the aging of society , elderly patients suffering from atrial fibrillation make up a growing population . as a result , the prevention of cardiogenic embolisms is becoming a major issue for social and health economics . since warfarin has a long half - life , controlling intracranial bleeding is very difficult . there are some reports that the incidence of intracranial bleeding after noac administration is not as frequent as in patients treated with warfarin . other reports indicate that the clinical course following intracranial hemorrhage is better in noac patients than in warfarin patients . we investigated the progress and prognosis of cases at our institute in which conservative therapy was selected , with intracranial bleeding as a complication during noac administration . noacs were administered to 313 patients ( dabigatran : 173 , rivaroxaban : 140 ) at our institute between 2011 and july 2014 . all patients were diagnosed with non - valvular atrial fibrillation and noac medication was started for the prevention of cardiogenic embolization . patients were allocated to dabigatran or rivaroxaban on the basis of their application time , medication compliance , and other factors . random assignment was not performed . among these patients , an 85-year - old man with a history of high - blood pressure , diabetes , and episodic atrial fibrillation ( cardiac pacemaker ) was staying at a health and welfare institution for the elderly because of dementia .
he was taking a hypoglycemic agent , -inhibitor , anti - dementia drug , aspirin ( 100 mg per day ) , and dabigatran ( 220 mg per day ) . he was transported to our institute after the occurrence of left hemiparesis . on initial examination , his level of consciousness was 10 points on the japan coma scale ( jcs 10 ) and pupil diameters were equal at 2.5 mm bilaterally ; however , right conjugate deviation was observed . paralysis was observed in the left upper and lower limbs , and he scored 3/5 on a manual muscle test . his renal function indicated a creatinine clearance ( ccr ) of 49.6 ml / min . an evaluation of the risk factors for cerebral stroke gave 3 points on the congestive heart failure , hypertension , age , diabetes mellitus , and stroke / tia ( chads2 ) scale , and 4 points on the congestive heart failure , hypertension , age , diabetes mellitus , stroke / tia , vascular disease , age and sex category ( cha2ds2-vasc ) scale , making him eligible for anticoagulant therapy . at the same time , his risk factors for bleeding during anticoagulant therapy produced a hypertension , abnormal renal / liver function , stroke , bleeding history or predisposition , labile inr , elderly , drug / alcohol use ( has - bled ) score of 2 points . mild midline shift was observed on a head computed tomography ( ct ) scan as a complication of subcortical bleeding in the right frontal lobe ( figure 1a ) . anticoagulant therapy ( dabigatran ) and antiplatelet therapy ( aspirin ) were discontinued and conservative therapy was applied . the level of consciousness shifted to jcs 3 without aggravation of the cerebral hemorrhage ( figure 1b ) . the patient was transferred to a nursing home 10 months after the onset of cerebral hemorrhage and the discontinuation of anticoagulant therapy . case 1 : the patient s head computed tomography on admission ( a ) and 7 days later ( b ) .
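the two risk scores quoted in the case descriptions follow the standard chads2 and cha2ds2-vasc definitions , which can be sketched as below ( our own illustrative code , not part of the report ) ; case 2 refers to the 81 - year - old patient described next :

```python
def chads2(chf, htn, age, dm, stroke_tia):
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, diabetes;
    2 points for prior stroke/TIA."""
    return chf + htn + (age >= 75) + dm + 2 * stroke_tia

def cha2ds2_vasc(chf, htn, age, dm, stroke_tia, vascular, female):
    """CHA2DS2-VASc: as CHADS2 but age >= 75 scores 2 and age 65-74 scores 1,
    plus 1 point each for vascular disease and female sex."""
    age_pts = 2 if age >= 75 else (1 if age >= 65 else 0)
    return chf + htn + age_pts + dm + 2 * stroke_tia + vascular + female

# case 1: 85-year-old man with hypertension and diabetes, no prior stroke
print(chads2(0, 1, 85, 1, 0))              # 3, as reported
print(cha2ds2_vasc(0, 1, 85, 1, 0, 0, 0))  # 4, as reported
# case 2: 81-year-old man with diabetes and prior cerebral infarction
print(chads2(0, 0, 81, 1, 1))              # 4, as reported
print(cha2ds2_vasc(0, 0, 81, 1, 1, 0, 0))  # 5, as reported
```

both patients score 2 or more on each scale , which is the threshold behind " making him eligible for anticoagulant therapy " .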
an 81-year - old man with a history of cerebral infarction , diabetes , and atrial fibrillation was treated with insulin and rivaroxaban ( 10 mg per day ) . a residual disability caused by cerebral infarction left him requiring total assistance in his day - to - day living . he was admitted because of torso tilt and vomiting . on initial examination , his consciousness level was jcs 3 with no apparent immobility of the pupils , deviation , or paralysis of the four limbs . regarding risk factors for cerebral stroke , his chads2 score was 4 points and cha2ds2-vasc was 5 points . at the same time , the has - bled score was 2 points , with the risk of bleeding during anticoagulant therapy determined as moderate . a head ct scan indicated a left cerebellar hemorrhage ( figure 2a ) . anticoagulant therapy ( rivaroxaban ) was discontinued and conservative therapy was applied . the patient was transferred to a geriatric health services facility 3 months after the onset of cerebellar hemorrhage while still receiving no anticoagulant therapy . case 2 : the patient s head computed tomography on admission ( a ) and 5 days later ( b ) . among the 313 patients treated with noac at our institute between 2011 and july 2014 ( dabigatran : 173 , rivaroxaban : 140 ) , the average age was 74.6 years , with 96 patients aged up to 70 years and 217 aged 71 years or older . regarding chads2 , 54 patients scored 0 - 1 point , while 259 scored 2 points or more . the cha2ds2-vasc score was 0 - 1 point in 24 cases and 2 points or more in 289 cases . serious complications requiring discontinuation of anticoagulant therapy occurred in 9 cases , of which intracranial bleeding was observed in 2 . in addition , recurrence of cerebral infarction was observed in 8 cases . the intracranial bleeding involved hemorrhage in the cerebral parenchyma ( 1 case of cerebral hemorrhage in the cerebrum and 1 case of cerebellar hemorrhage ) , with no cases of subdural bleeding . conservative therapy was selected in both intracranial hemorrhage cases ; no aggravation of hematoma was observed as a result of the antihypertensive therapy , the administration of a hemostatic drug , and the discontinuation of anticoagulants . we have described above two cases in which the administration of anticoagulants was indicated ( chads2 and cha2ds2-vasc scores of 2 points or more ) and intracranial hemorrhage was observed as a complication during noac administration . the bleeding risk in both cases was moderate ( has - bled score 2 points ) .
however , no increase in the quantity of hematoma or aggravation of the abnormal neurological findings was observed as a result of conservative therapy . whereas the half - life of warfarin in serum is considered to be 20 to 60 hours , the half - lives of 12 to 17 hours for dabigatran and 5 to 9 hours for rivaroxaban mean that the anticoagulant effect of noac is likely to disappear within a very short period of time following withdrawal . in addition , in noac the mechanism of coagulation inhibition and hemostasis differs from that of warfarin , being limited to a single clotting factor ( figure 3 : mechanism of hemostasis and targets of warfarin and novel oral anticoagulants ) . furthermore , the brain system is rich in tissue factor , which contributes to the activation of the extrinsic clotting system due to factor vii . unlike warfarin , the microenvironment of the hemorrhaged brain system contains abundant active factor vii in patients taking noac . therefore , the rapid functional restoration of the coagulation and hemostasis mechanisms occurs in parallel with the decline in blood concentration once the noac intake is stopped . thus , the increase in hematoma size can be controlled by rapid withdrawal alone , even after the onset of intracranial bleeding . a comparative outcome analysis has indicated that noac have a lower incidence of complications such as massive bleeding or intracranial bleeding . substantial anticoagulant effects in preventing cerebral infarction and systemic embolisms have been reported . although intracranial bleeding during anticoagulant therapy is likely to become serious , no sudden increase in the hematoma is observed , because of the short half - lives of noac in blood compared to that of warfarin .
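the claim that the anticoagulant effect fades quickly after withdrawal follows from first - order elimination . a rough back - of - the - envelope sketch ( our own illustration , using the slow end of each half - life range quoted above ; real pharmacokinetics also depend on renal function and distribution ) :

```python
from math import exp, log

def fraction_remaining(hours, half_life_h):
    """Fraction of drug remaining `hours` after the last dose, assuming
    simple first-order (exponential) elimination."""
    return exp(-log(2) * hours / half_life_h)

# 24 hours after withdrawal, using the slow end of each quoted half-life range:
print(round(fraction_remaining(24, 17), 3))   # dabigatran (12-17 h): ~0.376
print(round(fraction_remaining(24, 9), 3))    # rivaroxaban (5-9 h):  ~0.157
print(round(fraction_remaining(24, 60), 3))   # warfarin (20-60 h):   ~0.758
```

a day after withdrawal , roughly three quarters of a slow - clearing warfarin dose can remain , versus well under half for dabigatran and under a fifth for rivaroxaban , which is the arithmetic behind the text's argument that hematoma growth can be controlled by rapid noac withdrawal alone .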
in cases with a high anticoagulant therapy recommendation score ( chads2 and cha2ds2-vasc ) , anticoagulant therapy should be considered . currently , in japan , the suitability of drugs is limited to inhibiting the occurrence of ischemic cerebral stroke and systemic embolisms in patients with non - valvular atrial fibrillation . based on our results and a previous report , noac are more likely than warfarin to inhibit an increase in severity at the onset of hemorrhagic complications . the has - bled score is usually used for the prediction of the risk of intracranial hemorrhage during anticoagulation therapy . however , a moderate has - bled score has not been established as a prognostic factor . the relationship between has - bled score and hematoma size or prognosis needs to be investigated by accumulating more patient cases and performing a statistical analysis . with the aging of society , the number of elderly patients developing atrial fibrillation as an underlying disease is expected to increase . accordingly , for the purposes of primary and secondary prevention , the expansion of the range of diseases suitable for anticoagulant therapy cannot be avoided . against such a social background , it has been predicted that cases of intracranial bleeding during anticoagulant therapy will continue to increase in the future . although the use of noac may be limited by social and health economic backgrounds , along with the limitation of pharmaceutical registration , the types of disease suitable for anticoagulant treatment are likely to increase . since noac have the advantage of not worsening the severity of serious complications , they may be useful in the management of patients with atrial fibrillation .
objective : oral anticoagulants are widely administered to patients with atrial fibrillation in order to prevent the onset of cardiogenic embolisms . however , intracranial bleeding during anticoagulant therapy often leads to fatal outcomes . accordingly , the use of novel oral anticoagulants ( noacs ) , which less frequently have intracranial bleeding as a complication , is expanding . a nationwide survey of intracranial bleeding and its prognosis in japan reported that intracranial bleeding of advanced severity was not common after noac administration . in this report , two cases from our institute are presented . patients : case 1 was an 85-year - old man with a right frontal lobe hemorrhage while under dabigatran therapy . case 2 was an 81-year - old man who had a cerebellar hemorrhage while under rivaroxaban therapy . result : in both patients , the clinical course progressed without aggravation of bleeding or neurological abnormalities once anticoagulant therapy was discontinued . conclusion : these observations suggest that intracranial hemorrhage during noac therapy is easily controlled by discontinuation of the drug . noac administration may therefore be appropriate despite the risk of such severe complications . further case studies that include a subgroup analysis with respect to each noac or patient background will be required to establish appropriate guidelines for the prevention of cardiogenic embolisms in patients with atrial fibrillation .
cisplatin ( cis - diamminedichloroplatinum ii , cp ) , a potent antitumor drug , is commonly used for a wide variety of tumors , including head and neck , lung , testis , ovary , and breast tumors . however , it has many side effects , like ototoxicity , gastrotoxicity , myelosuppression , and allergic reactions . cp injection leads to accumulation of platinum within kidney tissue and influences renal tubular function . the renal dysfunction following exposure to cp involves tubular epithelial cell toxicity , apoptosis , vasoconstriction in the renal microvasculature , proinflammatory effects , and activation of mitogen - activated protein kinases [ 4 , 5 ] . these events lead to wasting of sodium , potassium , and magnesium , elevation in serum levels of creatinine ( cr ) and blood urea nitrogen ( bun ) , reduction in serum albumin , and decrease in the glomerular filtration rate [ 2 , 3 , 5 ] . many agents such as vitamins c and e , losartan , and l - arginine have been proposed to protect the kidney against nephrotoxicity of platinum drugs [ 6 - 8 ] . erythropoietin ( epo ) is one of these agents , used for treatment of anemia and acute renal failure induced by cp [ 9 , 10 ] . epo has antiapoptotic , antioxidant , and anti - inflammatory effects , and it has been used as a nephroprotectant against various kidney injuries such as kidney damage induced by ischemia - reperfusion [ 11 , 12 ] , cp - induced nephrotoxicity [ 9 , 13 - 16 ] , and gentamycin - induced kidney toxicity . epo is a glycoprotein hormone , primarily produced by renal cortical and outer medullary fibroblasts in response to hypoxia . epo receptors ( epor ) have been identified in a large range of cell types , including proximal tubular epithelial cells , mesangial cells , renal cell carcinomas , prostatic cells , breast cancer cells , chorioallantoic membrane , uterine adenocarcinomas , and ovarian carcinomas .
epor activation leads to activation of some signaling pathways that enhance cell proliferation and mediate renoprotection [ 9 , 11 ] . as mentioned before , epo improves cp - induced acute renal failure and leads to recovery after tubular damage . experimental evidence suggests that estradiol inhibits production of epo in female rats exposed to various intensities of hypoxia , which is confirmed by the production of lower amounts of epo in normal females than in normal males . accordingly , ovariectomized rats show a response equal to that of males , and estradiol decreases epo gene expression during hypoxia . hence , the protective role of epo against oxidative stress may change when epo is accompanied by estradiol . therefore , this study was designed to find the protective role of epo in cp - induced nephrotoxicity when it is accompanied by estrogen . the investigation was performed on 27 adult female wistar rats ( 152.02 ± 2.847 g ) ( animal centre , isfahan university of medical sciences , isfahan , iran ) . they had free access to water and rat chow , and they were acclimatized to this diet for at least one week prior to the experiment . the experimental procedures were approved in advance by the isfahan university of medical sciences ethics committee . the animals were anesthetized by ketamine ( 75 mg / kg , i.p . ) . an incision was made in the abdominal middle line to expose and remove the ovaries from the retroperitoneal space . one week after the operation and recovery , the rats were allowed to acclimatize to the same diet for at least one week . group 1 received estradiol valerate ( 500 μg / kg / week ) in sesame oil intramuscularly for four weeks . at the end of week 3 , group 1 ( n = 6 ) received a single dose of cp ( 7 mg / kg ) and was then treated with epo ( 100 iu / kg , i.p . ) every day during week 4 . group 2 ( n = 5 ) received the same regimen as group 1 , except for vehicle instead of epo .
group 3 was treated similarly to group 1 , except for sesame oil alone instead of estradiol valerate in sesame oil for four weeks . then , they received a single dose of cp and were treated with epo every day during week 4 . group 4 ( n = 5 ) was treated with the same regimen as group 3 , except for vehicle instead of epo . the negative control group ( group 5 ) received vehicle alone during the study . in summary , the animals in group 1 were ovariectomized and were treated by estradiol valerate in sesame oil , cp , and epo ( named ove + cp + epo ) ; group 2 received vehicle instead of epo ( named ove + cp ) . the animals in group 3 were ovariectomized and received sesame oil , cp , and epo ( named ov + cp + epo ) , while animals in group 4 were treated by vehicle instead of epo ( named ov + cp ) . cp ( cis - diammineplatinum ( ii ) dichloride , code p4394 ) , epo , and estradiol valerate were purchased from sigma ( st . louis , mo , usa ) , janssen - cilag ( czech republic ) , and aburaihan ( tehran , iran ) , respectively . the animals ' body weight was recorded daily . at the end of the experiment , the levels of serum cr and bun were determined using quantitative diagnostic kits ( pars azmoon , iran ) . the serum level of nitrite ( stable no metabolite ) was measured using a colorimetric assay kit ( promega corporation , usa ) that involves the griess reaction . the serum level of malondialdehyde ( mda ) was determined by thiobarbituric acid ( tba ) 0.67% and trichloroacetic acid ( tca ) 10% . the removed kidneys were fixed in 10% neutral formalin solution and were embedded in paraffin for histopathological staining . hematoxylin and eosin stain was applied to examine the tubular atrophy , cast , debris , and necrotic materials in the tubular lumen . tubular lesions were scored from 1 to 4 based on the damage intensity , where score zero was assigned to normal tubules without damage .
one way anova was applied to compare the weight loss , kidney weight , uterus weight , and serum levels of bun , cr , mda , and nitrite between the groups . the pathological damage score of the groups was compared by the mann - whitney and kruskal - wallis tests . to determine the correlation between kidney weight and pathological damage score , the nonparametric spearman correlation test was used . cp - induced nephrotoxicity was confirmed by the increase of bun and cr serum levels in the positive control groups ( groups 2 and 4 ) when compared with the negative control group ( group 5 ) ( p < 0.05 ) . such findings were not obtained in the epo - treated groups ( groups 1 and 3 ) . in other words , epo reduced the serum levels of bun and cr in ovariectomized animals treated with cp , whether estradiol was present or not ( figure 1 ) . as expected , the serum level of nitrite significantly increased in the estrogen - treated groups and was statistically different when compared with the non - estradiol - treated groups ( p < 0.05 ) ( figure 1 ) . at the end of the experiment , the uterus weight in the estradiol - treated groups ( groups 1 and 2 ) was significantly greater than in the other groups , so administration of estradiol was effective for the animals ( figure 1 ) . however , only in group 4 was the serum level of mda significantly different from the negative control group ( p < 0.05 ) . the rats treated with epo had a lower serum level of mda compared with the positive control groups ( p < 0.05 ) . weight loss during the last week of the experiment was compared between the cp injection day as the first day and seven days after cp injection as the last day of the week . the data indicate a significant weight loss in the cp - treated groups 1 ( ove + cp + epo ) , 2 ( ove + cp ) , and 4 ( ov + cp ) when compared with the negative control group ( p < 0.05 ) . in contrast , such weight loss was not observed in group 3 .
the weight change in group 3 in comparison with the negative control group was not significant , while group 3 was significantly different from the other groups in this respect ( p < 0.05 ) ( figure 2 ) . the kidney tissue damage score ( ktds ) and kidney weight ( kw ) of groups 1 ( ove + cp + epo ) , 2 ( ove + cp ) , and 4 ( ov + cp ) increased significantly when compared with the negative control group ( p < 0.05 ) . however , no significant differences in the mentioned parameters were detected between group 3 ( ov + cp + epo ) and the negative control group ( figure 2 ) . this indicates that epo could protect the kidney from toxicity induced by cp , but this protection was abolished when epo was accompanied by estradiol . the ktds and kw in group 3 were statistically less than in the corresponding positive control group . a significant correlation was detected between kw and ktds ( r = 0.6304 , p < 0.01 ) . the main objective of this study was to determine the protective role of epo against cp - induced nephrotoxicity in the presence of estradiol in an ovariectomized rat model . our findings indicate that epo reduces the serum levels of bun , cr , and mda that were increased by cp . moreover , when estradiol was not present , the higher ktds , kw , and weight loss induced by administration of cp were reduced significantly by epo when compared to the positive control group . this finding is in agreement with the results of other studies [ 7 , 21 ] . epo may protect the kidney from acute renal damage , as its receptors are expressed in the kidney . it was reported that epo ameliorates cp - induced nephrotoxicity [ 9 , 10 , 15 , 23 - 26 ] , while the female sex hormone , estrogen , inhibits epo production in female rats and decreases epo gene expression during hypoxia . some evidence has also indicated a sex difference in the response to cp - induced nephrotoxicity and renal function [ 27 , 28 ] .
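the mann - whitney comparison of ordinal damage scores mentioned in the methods can be sketched as an exact permutation version ; the code and the damage scores below are our own illustration , not the study's data or software :

```python
from itertools import combinations

def mann_whitney_u(x, y):
    """Mann-Whitney U for x versus y: cross-pairs won by x, counting ties as 1/2."""
    return sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)

def exact_p_two_sided(x, y):
    """Exact two-sided p-value: enumerate every regrouping of the pooled
    scores and count U statistics at least as far from the null mean as
    the observed one."""
    pooled = x + y
    n = len(x)
    expected = len(x) * len(y) / 2          # mean of U under the null
    obs_dev = abs(mann_whitney_u(x, y) - expected)
    hits = total = 0
    for idx in combinations(range(len(pooled)), n):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        hits += abs(mann_whitney_u(a, b) - expected) >= obs_dev
        total += 1
    return hits / total

# made-up tubular damage scores (0-4) for a cisplatin group versus a control group
cp_scores = [3, 4, 2, 3, 4]
control_scores = [0, 1, 0, 1, 0]
p = exact_p_two_sided(cp_scores, control_scores)
print(p)  # 2/252 ≈ 0.0079: every cp score exceeds every control score
```

with such complete separation of the two groups , only the observed split and its mirror reach the extreme U values , giving the smallest attainable two - sided p - value of 2/252 .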
it seems that although estrogen acts as a cardiovascular protectant in women before menopause , its protective role fails in cp - induced nephrotoxicity . estrogen enhances oxidative stress in the kidney and promotes kidney toxicity in the tubules [ 30 , 31 ] . estrogen also enhances the serum level of no , as seen in our study [ 32 , 33 ] , and , on the other hand , no is involved in cp - induced nephrotoxicity [ 34 , 35 ] . therefore , enhancement of the serum levels of both estradiol and no may potentially promote the intensity of nephrotoxicity . estrogen decreases the hypoxic induction of plasma epo and of renal epo gene expression , mediated by increased no production [ 20 , 36 ] , and no can reduce epo gene expression in the kidneys . epo , as an antioxidant and antiapoptotic agent , has a protective effect against cp - induced nephrotoxicity [ 14 , 23 ] . recombinant human epo reduces the cp - induced changes in the serum levels of mda and glutathione . therefore , it seems that the protective role of epo in cp - induced nephrotoxicity is not related to its hematopoietic effect . we hypothesize that the protective role of epo may not only be attenuated by estrogen , owing to the effect of this sex hormone on epo [ 40 , 41 ] , but also that estrogen itself may promote the intensity of toxicity via no production [ 32 - 35 ] or other pathways . this study suggests , for the first time , a need to pay special attention to cp therapy in women who are under estrogen replacement therapy .
introduction . nephrotoxicity is one of the side effects of cisplatin therapy , and erythropoietin has been proposed as a nephroprotective agent . however , its nephroprotective effect when accompanied by estrogen has not been studied in females . methods . 27 ovariectomized female wistar rats were divided into five groups . groups 1 and 2 received estradiol valerate ( 0.5 mg / kg / week ) for four weeks , and a single dose of cisplatin ( 7 mg / kg , ip ) was administered at the end of week 3 . then group 1 was treated with erythropoietin ( 100 u / kg / day ) , and group 2 received vehicle during week 4 . groups 3 and 4 were treated similarly to groups 1 and 2 , except that they received placebo instead of estradiol valerate . group 5 ( negative control ) received placebo during the study . animals were killed at the end of week 4 . results . in non - erythropoietin - treated rats , cisplatin significantly increased the serum levels of blood urea nitrogen and creatinine ( p < 0.05 ) . however , these biomarkers were significantly decreased by erythropoietin ( p < 0.05 ) . the weight loss , kidney weight , and kidney tissue damage score in rats treated with cisplatin but without estradiol were significantly less than the values in the corresponding group when estradiol was present ( p < 0.05 ) . conclusion . it seems that erythropoietin could protect the kidney against cisplatin - induced nephrotoxicity . this protective effect was not observed when estrogen was present .
the renormalization group properties of quantum chromodynamics were the reason for the acceptance of this theory as the theory of strong interactions . the central role played by the qcd @xmath0-function , calculated at the one- @xcite , two- @xcite , three- @xcite and finally at the four - loop @xcite level , can not be overestimated in this respect . the calculation of the last known , four - loop , term in the expansion of the @xmath0-function was performed by only one group @xcite . the authors evaluated `` ... of the order of 50.000 4-loop diagrams '' . these two facts lead to the conclusion , stressed many times ( _ e.g. _ in @xcite ) , that it is not only justified , but also necessary to independently evaluate this quantity . the present work is intended to fulfill this need . apart from the @xmath0-function itself , we present two new results . the first one is the complete set of @xmath1 anomalous dimensions at the four - loop level in the linear gauge and with color structures of a general compact simple lie group . these give , in a compact form , the complete set of four - loop qcd renormalization constants . we notice that the case of the su(n ) group has been solved in @xcite , with the assumption , however , that the four - loop @xmath0-function is correct . our second new result is of a more technical nature , but should simplify future renormalization group calculations at this level of the perturbative expansion . to this end , we derived the necessary set of divergent parts of the four - loop fully massive tadpole master integrals . a purely numeric result was available from @xcite , but could not be used for our purpose . this paper is organized as follows . the next section presents the methods used in the calculation , as well as some efficiency considerations . subsequently , our results for the anomalous dimensions are listed . conclusions and remarks close the main part of the paper .
the expansions of the tadpole master integrals are contained in the appendix . the calculation of renormalization group parameters , _ i.e. _ anomalous dimensions and beta functions , allows for vast simplifications in comparison to the actual evaluation of green functions with kinematic invariants in the physical region . dimensional regularization and the @xmath1 scheme are particularly well suited for this kind of problem , since they make it possible to manipulate the dimensionful parameters of the theory . in fact , ever since the introduction of the infrared rearrangement @xcite , it has been known how to set most of them to zero and avoid spurious infrared poles . with the advent of the @xmath2 operation @xcite , infrared divergences are even allowed in individual diagrams and are only compensated by counterterms afterward . at the four - loop level , two techniques seem most promising . one is a global @xmath2 operation @xcite , where one would set all of the external momenta to zero and also almost all of the masses , keeping just one massive line . the spurious infrared divergences are then compensated by adding a global counterterm . the advantage of this approach is that the whole problem can be reduced to the calculation of three - loop massless propagators , for which there exists a well tested and efficient form @xcite package , mincer @xcite . the disadvantage is that the construction of the global counterterm is not trivial . in fact , up to now it has not been possible for the gluon propagator . a second technique consists in setting all of the external momenta to zero , but keeping a common non - zero mass on all the internal lines @xcite . the advantage of this approach is that one never encounters any infrared divergences . one minor part of the price for this convenience is the necessity of a gluon mass counterterm . the more problematic part is , of course , the calculation of the divergent parts of the four - loop tadpole diagrams that occur .
one way to do this is to generalize the algorithms of @xcite to the four - loop level . we , however , decided to use integration - by - parts identities to reduce all of the integrals to a set of master integrals depicted in fig . [ prototypes ] . the divergent parts of the latter were then calculated as described in appendix [ expansions ] . instead of developing dedicated software for the reduction of tadpole integrals , we used our own implementation of the laporta algorithm @xcite in the form of the c++ library diagen / idsolver @xcite . we found it also a good opportunity to study the efficiency of this approach on a large scale problem . since the calculation was performed in the linear gauge , the gluon propagator had the form @xmath3 . it is clear that this implies that every power of the gauge parameter will lead to more powers of the denominators and irreducible numerators in the integrals . a minimal way to test gauge invariance at the end is to keep at most a single power of @xmath4 , which corresponds to a first order expansion of the result around the feynman gauge . under this restriction , the calculation involved a little less than 210.000 independent integrals . the distribution of them among the four - loop tadpole prototypes in the hardest case , that of the gluon propagator , is given in tab . [ distribution ] . the remaining cases of the quark - gluon vertex , and the quark and ghost propagators , involved about one - third of these integrals and one power fewer of denominators and irreducible numerators for each of the prototypes . [ distribution ] distribution among the tadpole prototypes of the 203580 different integrals occurring in the calculation of the gluon propagator in the linear gauge , limited to at most one power of @xmath4 . `` denominator powers '' denote in fact the total number of dots on the lines .
it turned out that to reduce all of the integrals to masters , it was sufficient to generate integration - by - parts identities with up to six additional powers of the denominators and four of the numerators for all of the integrals with up to seven lines , and with five additional powers of the denominators and four of the numerators for the eight- and nine - liners . the total number of solved integrals was then about 2.000.000 , meaning a 10% efficiency . about 37% of the integrals turned out to be finite . these could have been eliminated from the very beginning by a careful study of divergences . we convinced ourselves , however , that this would not allow for lower powers of denominators and numerators in the reduction process , unless some very involved procedure were used . since the @xmath0-function is , up to normalization , the anomalous dimension of the coupling constant , it is necessary to perform the complete renormalization of some vertex . to this end we chose the quark - gluon interaction , mostly because the calculation of @xcite involved the ghost and gluon instead . the renormalization constant of the quark - gluon vertex will subsequently be denoted by @xmath5 , whereas the renormalization constants of the quark , gluon and ghost fields by @xmath6 , @xmath7 and @xmath8 respectively . even though @xmath8 is not necessary for the present calculation , we derived it in order to have the complete set of renormalization constants at the four - loop level . we will not give the renormalization constants explicitly , but instead we will limit ourselves to the anomalous dimensions , which are defined by @xmath9 where @xmath10 is the 't hooft unit of mass , introduced to keep the renormalized coupling constant dimensionless . since we use dimensional regularization and the @xmath1 scheme , the renormalization constants can be expanded as @xmath11 where @xmath12 is connected to the qcd coupling constant @xmath13 by @xmath14 .
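the laporta - style reduction used above can be illustrated with a toy solver : order the integrals by complexity , solve each integration - by - parts identity for its most complex member , and substitute the resulting rule into the remaining identities . the labels , identities , and ordering below are invented purely for illustration and have nothing to do with the actual four - loop system .

```python
from fractions import Fraction

def solve_ibp(identities, order):
    """Toy Laporta-style reduction.

    Each identity is a dict {label: coeff} meaning sum(coeff * I_label) == 0.
    `order` lists integral labels from most to least complex; each identity
    is solved for its most complex surviving label, yielding a rewrite rule.
    """
    rank = {name: i for i, name in enumerate(order)}
    rules = {}  # label -> {label: Fraction} replacement
    for eq in identities:
        eq = {k: Fraction(v) for k, v in eq.items()}
        # substitute previously derived rules until none apply
        changed = True
        while changed:
            changed = False
            for name in list(eq):
                if name in rules and eq[name] != 0:
                    coeff = eq.pop(name)
                    for m, cm in rules[name].items():
                        eq[m] = eq.get(m, Fraction(0)) + coeff * cm
                    changed = True
        eq = {k: v for k, v in eq.items() if v != 0}
        if not eq:
            continue  # identity was redundant
        lead = min(eq, key=lambda n: rank[n])
        c = eq.pop(lead)
        rules[lead] = {m: -cm / c for m, cm in eq.items()}
    return rules

def expand(expr, rules):
    """Recursively rewrite an expression down to master integrals."""
    out = {}
    for name, c in expr.items():
        if name in rules:
            for m, cm in expand(rules[name], rules).items():
                out[m] = out.get(m, Fraction(0)) + c * cm
        else:
            out[name] = out.get(name, Fraction(0)) + c
    return {k: v for k, v in out.items() if v != 0}

# two invented identities: I1 - I2 - 2*I3 = 0 and 2*I2 - I3 = 0,
# with I1 the most complex integral and I3 the master
rules = solve_ibp(
    [{"I1": 1, "I2": -1, "I3": -2}, {"I2": 2, "I3": -1}],
    order=["I1", "I2", "I3"],
)
print(expand({"I1": Fraction(1)}, rules))  # I1 expressed through the master I3
```

exact rational arithmetic matters here : real reduction systems work with rational functions of the dimension and the gauge parameter , and floating point would destroy the cancellations .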
using the fact that the dependence of @xmath15 on @xmath10 enters only through @xmath12 and @xmath4 , and that the renormalization constants of the gluon field and of the gauge parameter are equal , one obtains @xmath16 where the @xmath0-function is simply equal to the anomalous dimension of @xmath17 , _ i.e. _ @xmath18 . this implies that @xmath19 where now @xmath20 . eq . [ reconstruction ] can be used in turn to reconstruct the original renormalization constant from a given anomalous dimension . since the quark - gluon vertex , as well as the quark and gluon anomalous dimensions , have already been given up to the three - loop level in the linear gauge in @xcite , we will not reproduce them here but only give our result for the four - loop anomalous dimensions with color structures of a general compact simple lie group . at this point we stress once more that the results have been obtained in a first order expansion around the feynman gauge @xmath21 @xmath22 @xmath23 the first three terms of the expansion of the anomalous dimension of the ghost field in the linear gauge can be found in @xcite . even though the results there are restricted to the su(n ) group , the values for a general compact simple lie group can be derived on the basis of the fact that only quadratic casimir operators occur . as before , our four - loop result is obtained in a first order expansion around the feynman gauge @xmath24 the notation is similar to the one used in @xcite . besides riemann @xmath25 functions , the result contains the quadratic casimir operators of the fundamental and adjoint representations , @xmath26 and @xmath27 , as well as the normalization of the trace of the fundamental representation , @xmath28 , where @xmath29 are the representation generators , and the number of fermion families @xmath30 .
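a well - known consequence of the @xmath1 structure invoked above is worth spelling out : the whole anomalous dimension is fixed by the simple - pole part of the renormalization constant , while the higher poles are determined recursively . the formulas below are a convention - dependent sketch quoted from standard renormalization group lore , not the authors ' exact notation ( the explicit expressions are hidden in the @xmath placeholders ) , and the gauge - parameter dependence is suppressed for brevity :

```latex
% MS-bar structure: only poles in eps appear in Z
Z(a,\varepsilon) \;=\; 1 + \sum_{n\ge 1} \frac{z_n(a)}{\varepsilon^{n}},
\qquad
\frac{\mathrm{d}a}{\mathrm{d}\ln\mu^{2}} \;=\; -\varepsilon\, a + \beta(a).

% finiteness of gamma = d ln Z / d ln mu^2 as eps -> 0 fixes gamma from z_1:
\gamma(a) \;=\; -\,a\,\frac{\partial z_{1}(a)}{\partial a},
\qquad
\gamma(a)\, z_{n}(a) \;=\; \beta(a)\,\frac{\partial z_{n}}{\partial a}
  \;-\; a\,\frac{\partial z_{n+1}}{\partial a} \quad (n \ge 1).
```

the recursion for @xmath12-dependent higher poles is what makes the reconstruction of a full renormalization constant from its anomalous dimension possible , as mentioned in the text .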
the higher order invariants are constructed from the symmetric tensors @xmath31 ( and similarly for the adjoint representation ) and from the dimensions of the fundamental and adjoint representations , @xmath32 and @xmath33 respectively . using the specific values for the su(n ) group , @xmath34 @xmath35 , we checked that eqs . [ g1]-[g3c ] are in perfect agreement with @xcite . we can now combine the anomalous dimensions to reach the goal of our calculation , _ i.e. _ the four - loop @xmath0-function . since @xmath36 , we have @xmath37 , and @xmath38 . this result , manifestly gauge invariant , confirms @xcite . for completeness , we reproduce the lower order values , which we have also calculated : @xmath39 . we have completed the four - loop @xmath1 renormalization program of an unbroken gauge theory with fermions and a general compact simple lie group in the linear gauge . the fact that we performed an expansion in the gauge parameter still allows for gauge invariance tests in practical calculations , although special gauges such as the landau gauge can not be chosen . should such a need occur , one may use the su(n ) results from @xcite . the correctness of the latter relies , however , on the four - loop @xmath0-function . therefore , the present work was indispensable to confirm not only the value of the four - loop @xmath0-function itself , but also the correctness of @xcite . this goal has been reached . it should be stressed that with the accumulated solved integrals , one could easily obtain one more term in the @xmath4 expansion of the anomalous dimensions . this is due to the fact that the complexity of the integrals occurring in the gluon propagator with at most a single power of @xmath4 is comparable to the complexity of the integrals for the other functions with at most two powers of @xmath4 . having solved the latter , the gauge invariant @xmath0-function then makes it possible to recover the third term in the @xmath4 expansion of the gluon propagator as well .
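for orientation , the one- and two - loop coefficients referred to above are standard results that can be written with the same casimir invariants . they are quoted here in one common normalization as a sketch from standard references , not from the @xmath expressions of this text , whose conventions may differ :

```latex
% beta(a) = -a^2 ( beta_0 + beta_1 a + ... ),  with a = alpha_s / (4 pi)
\beta_0 \;=\; \frac{11}{3}\,C_A \;-\; \frac{4}{3}\,T_F\, n_f,
\qquad
\beta_1 \;=\; \frac{34}{3}\,C_A^{2} \;-\; 4\,C_F\,T_F\, n_f \;-\; \frac{20}{3}\,C_A\,T_F\, n_f .
```

for su(3 ) with @xmath26 in the fundamental representation these reduce to the familiar @xmath0-function coefficients responsible for asymptotic freedom .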
we did not perform such a calculation , since from the purely pragmatic point of view only the complete @xmath4 dependence would be an improvement . a second comment concerns the independence of our calculation . the reader should notice that besides basic software such as form @xcite , fermat @xcite _ etc . _ and our own programs @xcite , the only external package that we used was the color package @xcite of the form distribution . on a diagram per diagram basis , we made sure that the color factors produced by this package agree with the su(n ) values obtained with the algorithm of @xcite . our confidence was strengthened by the fact that we agreed with @xcite on the final results . the author would like to thank k. chetyrkin , m. misiak and y. schröder for stimulating discussions , and the institute of theoretical physics and astrophysics of the university of würzburg for warm hospitality during the period when part of this work was completed . this work was supported in part by tmr , european community human potential programme under contract hprn - ct-2002 - 00311 ( euridice ) , by deutsche forschungsgemeinschaft under contract sfb / tr 903 , and by the polish state committee for scientific research ( kbn ) under contract no . here we give the @xmath40 expansions of the master integrals depicted in fig . [ prototypes ] up to the necessary order . the notation is similar to that used in matad @xcite , _ i.e. _ we use the integration measure @xmath41 with @xmath42 , and the constants which occur in the divergent parts of the integrals are @xmath43 , where @xmath44 is the clausen function . notice , however , that we use the minkowski space metric and `` minkowski type '' propagators , _ i.e. _ @xmath45 . the expansions of the irreducible four - loop integrals have been obtained either from the finiteness of the same integral with higher powers of denominators or with the help of the method described in @xcite .
after taking into account the different normalization and translating master integrals with dots to integrals with suitable irreducible numerators , the above results for the divergent parts are in agreement with the numerical values obtained in @xcite . the integrals pr1-pr4 , pr4d and pr7 are needed up to their constant parts . it is , however , sufficient to express the finite part of pr7 through the finite parts of pr4 and pr4d and keep them as symbols , since they always cancel from the divergent part of any four - loop vacuum integral . the reader should notice that we also needed @xmath40 expansions of two- and three - loop tadpoles . these have been obtained in @xcite . the values can be read off from our results on the reducible four - loop integrals above . d. j. broadhurst , z. phys . c * 54 * ( 1992 ) 599 ; d. j. broadhurst , eur . phys . j. c * 8 * ( 1999 ) 311 ; j. fleischer and m. y. kalmykov , phys . lett . b * 470 * ( 1999 ) 168 ; k. g. chetyrkin and m. steinhauser , nucl . phys . b * 573 * ( 2000 ) 617 .
the four - loop @xmath0-function of quantum chromodynamics is calculated and agreement is found with the previous result . the anomalous dimensions of the quark - gluon vertex , and quark , gluon and ghost fields are given for a general compact simple lie group .
field epidemiology involves the application of epidemiological methods to often unexpected public health events where rapid on - site investigations and timely interventions are necessary . with global transformations in politics , economics and culture , the health of populations is increasingly vulnerable to the threat of epidemics . from emerging and re - emerging diseases to the spread of antimicrobial drug resistance , governments are faced with the challenge of improving surveillance and response capacities . with the support of the world health organization , 194 state parties to the international health regulations ( 2005 ) have been implementing plans of action to enhance health security . but until the first international sanitary conference in 1851 and its serial meetings , there were few mechanisms that facilitated international cooperation among countries . the long time span between the inaugural conference and the subsequent institutionalisation of a common framework suggests the difficulties and sensitivities involved in facilitating international cooperation on epidemic disease control . yet the fact that countries remain concerned with , and are willing to cooperate on , the cross - border transmission of disease indicates a rationale far surpassing historical continuity or obligatory duty . a seemingly archaic and oft - overlooked reason is that disease epidemics have vast implications for national survival , and that the health of the domestic population provides countries with the assurance and freedom to pursue their vision and goals . located at the southern tip of the malay peninsula , singapore is a relatively young country , having achieved its independence in 1965 . for most of its economic history , singapore served as a major trading hub , providing a strategic and convenient port of call for sea and air cargo . singapore 's changi airport is linked to approximately 200 cities in 60 countries , with about 5400 weekly flights .
in 2011 , the airport handled a record 46.5 million passengers , a 10.7% increase over 2010 's 42 million . adding to its global connectivity , singapore has a heterogeneous , mobile population living in close proximity , in an equatorial climate of high humidity and heavy rainfall , particularly during the months of november to january . in the course of 47 years of nation building , its total population has grown from a mere two million in 1970 to five million in 2010 . in this article , we share an epidemiological perspective on singapore 's experience in safeguarding its population against the onslaught of novel diseases . we suggest that the circumstances that condition its set of disease control measures may not be entirely unique , namely , the features of global connectivity and cosmopolitanism witnessed in many cities today . we also describe our local characteristics so that the reader may more aptly draw conclusions from where we differ . singapore 's international connectivity places it at an increased risk of disease outbreaks , with global air travel playing a pivotal role in the dissemination of emerging infections . in february 2003 , an outbreak of severe acute respiratory syndrome ( sars ) was introduced into singapore with the return of three unsuspecting young travellers from hong kong . sars transmission from symptomatic patients to other passengers and crew was subsequently documented in at least three flights flying outbound from hong kong . in order to stem the spread of sars , a contact tracing centre was established at the ministry of health singapore . the centre catered for 200 officers who sought to identify all contacts of sars cases and of observation cases in whom sars could not be ruled out . legal provisions were strengthened to endow the director of medical services with the legal authority to order the quarantine of persons , and to make it an offence for persons to disobey .
when the disease threatened to establish itself within hospitals , mandatory protective gear and infection control procedures were set in place , along with close monitoring of healthcare workers for sars symptoms as well as movement restrictions on all staff , patients , and visitors in hospitals . mounting the large - scale quarantine operations alone cost the government approximately us$5.2 million . compared with the economic recessions of 1997 - 1998 and 2001 , the sars crisis had the deepest , albeit short - lived , impact on visitor arrivals , with many airlines drastically reducing flight numbers to singapore . by april 2003 , visitor arrivals had dropped 67% , causing a ripple effect on business activities as companies delayed or cancelled trade and investment missions and travel . similarly , the widespread dissemination of the novel influenza a ( h1n1 ) in 2009 was thought to be related to the high number of flights out of the early major centres of the epidemic . investigations into a cluster of six cases confirmed transmission of h1n1 on board a commercial aircraft . these events illustrate the capability of diseases to transmit rapidly across borders , facilitated by convenient air travel . amid the fear and uncertainty generated by an unknown disease , our national strategy is premised on a well established surveillance and response system that forewarns , detects , and contains the importation of a novel agent , and on mitigation measures when community spread is sustained ( i.e. , showing no epidemiological link to imported source cases ) . a national pandemic readiness and response plan was developed with the disease outbreak response system condition framework as its risk management centrepiece . this framework helps calibrate outbreak response according to the nature and transmissibility of the agent . our pandemic experiences have shown that disease control can not be the sole purview of the health authority .
in order to facilitate a strong command and control centre , where knowledge is effectively cascaded to stakeholders and efforts are coordinated across various government bodies and agencies , singapore adopts a " whole - of - government " approach through its homefront crisis management system . our modus operandi gathers the relevant ministries and inter - agency groups that either lead or support a sector ( e.g. , health , foreign affairs , trade and industry ) to mitigate the consequences of an outbreak . since singapore is highly dependent on international trade and on food supplies from overseas , total border closure is not feasible . the aim , therefore , is for the country to maintain continuity of essential services and supplies . in the meanwhile , morbidity and mortality can be reduced through early isolation and treatment of cases , quarantine of close contacts , mass vaccination once a pandemic vaccine becomes available , and the stepping up of infection control in different settings . clear communication at the national level is also needed at all stages of an outbreak . this helps ensure public confidence and strengthen social morale , which are otherwise likely to run into deficit . as illustrated during sars , the timely provision of information , advocacy for social responsibility , and promotion of good hygiene practices helped build trust between the people and the government . singapore 's population density has more than doubled , from 3.5 thousand persons per square km in 1970 to 7.1 thousand persons per square km in 2010 . this is notwithstanding its position as one of the most cosmopolitan societies in southeast asia - for every three singaporean residents , there is now one non - singaporean resident . in ecological terms , singapore 's rapid urbanization has resulted in an increasingly built environment with new dynamic interactions between niches that are natural ( biosphere ) and man - made ( technosphere ) .
this in turn leads to emerging health concerns peculiar to an urbanized built environment . as for elaborations on our flora and fauna , we shall state what is most relevant to infectious disease : the presence of vectors . singapore 's tropical climate gave rise to the prevalence of mosquito vectors such as the aedes spp . , whose populations were ruthlessly abated through a range of stringent control measures and the removal , though not entire , of marshlands and forested areas as the country rapidly urbanized . it was with some degree of concern , then , that epidemic chikungunya , a mosquito - borne viral disease , surfaced in 2008 . genetic analysis showed that the first three local episodes were most likely the result of independent importations of the virus from neighbouring asian countries , while the locally acquired cases that occurred around july of the same year were largely due to a single strain closely related to the strain detected in cases imported from malaysia . in a similar vein , although singapore was certified malaria - free by the world health organization in 1982 and the anopheles spp . vector population was reduced to low levels , the country remains vulnerable to outbreaks involving foreign workers with relapsing malaria who , for socio - behavioural and economic reasons , did not seek early medical treatment . the first locally acquired human infection with plasmodium knowlesi , an emerging malaria parasite , was also reported in 2007 , with four additional human cases detected within the same year , and one in 2008 . all cases involved military personnel who had undergone training in restricted - access forested areas in singapore . the quintessential vector - borne disease , dengue , continues to be endemic in singapore , with cyclical outbreaks observed . in 2005 , a switch in dengue virus ( den ) serotype predominance from den-2 to den-1 unleashed an epidemic unprecedented in both size and geographical distribution of cases .
low herd immunity against the den-1 serotype , due to the introduction of immunologically naive non - residents from non - dengue - endemic countries , as well as the cunning of aedes aegypti in exploiting difficult - to - reach habitats , were factors that contributed to the outbreak . singapore has been exemplary in its effective implementation of environmental health programmes , and the connection between the environment and the health of its people was recognized early in its development , leading to major clearance works in putrid , polluted areas of living and the establishment of a systematic drainage and sewerage system to ensure good standards of public sanitation and hygiene . nevertheless , the battle against emerging and re - emerging diseases is one that requires continued vigilance . a new variable that may amplify disease transmission is climate change , as evidenced by extreme weather events , particularly the unprecedented flash flooding witnessed in parts of the country from 2010 to 2011 . our national environment agency has acknowledged the difficulties in rainfall prediction , which may augur unfavourably for mid- to long - term infrastructural planning . extreme weather events are indicative of an upset ecological system that remains poorly understood and addressed , even as the country grapples with the new realities of climate change . moving forward , our disease control programmes require periodic review as the epidemiological triad of host , environment , and agent rebalances dynamically . singapore , by and large , has seen extraordinary growth in its average national income since its humble beginnings as a colonial outpost . in attaining its vision to become a " distinctive global city " , its relatively affluent , well - educated , and upwardly mobile population has increased access to environments , goods , and services that were not previously experienced and whose risks to individual health are uncertain .
besides unusual outbreaks and stress - related disorders , other curious aetiologies have occurred from time to time . in addition , changes in lifestyle by an ageing population - about 400 000 baby boomers will turn 65 years old between now and 2020 - are a major force re - shaping our society . the role of lifestyle was evident in an outbreak of fusarium keratitis associated with contact lens wear ( renu with moistureloc , manufactured by bausch and lomb ) which we investigated in 2006 . of the 66 patients diagnosed , close to 82% reported poor contact lens hygiene practices . this illustrated a lack of patient knowledge of the potential harm of novel products if not used properly . more recently , during the escherichia coli food poisoning outbreak in germany and the fukushima nuclear incident in japan , the agri - food and veterinary authority had to increase its surveillance of food imports to ensure consumption safety . our high dependency on food imports , a lack of good local substitutes , and the proclivity for international food items made available through a global food production and supply - chain network place singapore at increased risk of food - borne incidents . this is compounded by the flagrant use of antibiotics and pesticides , the mass production of processed food items , high ambient temperatures , and an extensive farm - to - fork process , providing many opportunities for contamination of food items . with such risk factors , the national pastime of exotic dining outside the home needs only brief mention . for outbreak management , investment needs to be wisely directed towards capability enhancement , and singapore has learnt that dealing with unknowns requires sufficient bandwidth in both infrastructure ( i.e. , hardware ) and expertise ( i.e. , software ) . recognising the importance of epidemic intelligence , a public health intelligence unit was set up in 2011 to monitor and analyse changes in local and overseas disease landscapes .
intelligence acquired from this process is used to track potential threats , trigger public health response , and facilitate risk communication to relevant stakeholders as necessary . in our global village , where many potential threats loom ominously over the horizon , we need public health leadership . singapore is continuously on the look - out to improve its capabilities and capacities in the detection of , and response to , health threats , whether known or unknown . to achieve this resilience , a field epidemiology training programme , administered by the communicable diseases division of the ministry of health and modelled after the us centers for disease control and prevention 's epidemic intelligence service , conducts courses biannually . in addition to didactics and rigorous fieldwork , novel training methods such as multimedia gaming are being introduced . this programme aims to build a cadre of field specialists who can lead and support the public health mission . for its professional contributions , it has successfully gained admission into the global training programs in epidemiology and public health interventions network and is a founding member of the regional asean+3 field epidemiology training network . to cultivate public health leadership on a broader front , the saw swee hock school of public health at the national university of singapore was elevated to a full faculty in 2011 . the school aims to produce future public health leaders and to fill a unique niche in utilising new technologies to provide local solutions to some of today 's most pressing public health challenges , including infectious disease control . it has recently signed a memorandum of understanding with the london school of hygiene and tropical medicine to advance research and education in areas of infectious disease control , health systems , and chronic diseases with an asian focus .
In addition to public health manpower, the country's Communicable Disease Centre, which has steadfastly served the nation in the clinical management of outbreaks for the past hundred years, will soon be integrated into a new purpose-built, state-of-the-art facility for the isolation and management of patients with infectious diseases. Further, the number of infectious disease specialists has increased from 16 in 2003 to 39 in 2010. The experience of Singapore offers a case study for field epidemiology and disease control in a globally connected city. Its territorial compactness, population heterogeneity, and relative affluence mirror the characteristics of many cities today. The presence of a stable government and an efficient civil service is a strong contributing factor in its ability to implement policies and regulate human behaviour. That the city-state can be an anomaly, beating the odds of its natural landscape of tropical diseases, shows that good public health can be sustainably practised with the right policies that evolve with the dynamics of modern living and its impact on disease transmission.
Field epidemiology involves the implementation of quick and targeted public health interventions with the aid of epidemiological methods. In this article, we share our practical experiences in outbreak management and in safeguarding the population against novel diseases. Given that cities represent the financial nexuses of the global economy, global health security necessitates the safeguarding of cities against epidemic diseases. Singapore's public health landscape has undergone a systemic and irreversible shift with global connectivity, rapid urbanization, ecological change, increased affluence, and shifting demographic patterns over the past two decades. Concomitantly, threats ranging from severe acute respiratory syndrome and influenza A (H1N1) to the resurgence of vector-borne diseases and the rise of modern lifestyle-related outbreaks have made safeguarding public health more difficult amidst much elusiveness and unpredictability. One critical factor that has helped the country overcome these innate and man-made public health vulnerabilities is the development of a resilient field epidemiology service, which includes the enhancement of surveillance and response capacities for outbreak management and investment in public health leadership. We offer herein the Singapore story as a case study in meeting the challenges of disease control in our modern built environment.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Indian Country Educational Empowerment Zone Act''. SEC. 2. FINDINGS. Congress makes the following findings: (1) A unique legal and political relationship exists between the United States and Indian tribes that is reflected in article I, section 8, clause 3 of the Constitution, various treaties, Federal statutes, Supreme Court decisions, executive agreements, and course of dealing. (2) Native Americans continue to rank at the bottom of nearly every indicator of social and economic well-being in America: (A) Unemployment rates average near 50 percent in Indian country and hover well over 90 percent on many reservations. (B) While the national poverty rate is only 11 percent, over 26 percent of all Native Americans live in poverty. (C) In addition, Native Americans have some of the lowest levels of educational attainment in the United States. (3) Numerous external efforts at economic development in Indian Country have proven unsuccessful. The most successful efforts have been initiated from within the Native communities themselves. Efforts that empower the communities and give them the tools to make their own decisions should be encouraged and pursued. (4) Educational achievement continues to be a cyclical obstacle to economic development in Indian Country. Businesses are often unwilling to locate to Indian Country because of the lack of an educated workforce. Over a quarter of all Americans have a bachelor's degree or higher. However, only 12 percent of all Native Americans nationwide have such a degree, and only 6 percent of those who actually live in Indian Country have a bachelor's degree or higher. Once Natives are finally able to obtain higher education, many are not able to return to their communities because there are no jobs. There needs to be an intervening factor to help break this damaging cycle. SEC. 3. LOAN FORGIVENESS FOR EMPLOYMENT IN INDIAN COUNTRY.
Part B of title IV of the Higher Education Act of 1965 is amended by inserting after section 428K (20 U.S.C. 1078-11) the following: ``SEC. 428L. LOAN FORGIVENESS FOR EMPLOYMENT IN INDIAN COUNTRY. ``(a) Purpose.--It is the purpose of this section-- ``(1) to dramatically increase the number of individuals with higher education degrees working within and for Indian country; ``(2) to facilitate economic growth and development in Indian country, and promote Tribal sovereignty; ``(3) to encourage members of Indian tribes with higher education degrees to return to Indian country; ``(4) to encourage the long-term retention of educated individuals in Indian country; and ``(5) to encourage public service in Indian country, and to encourage investment in Indian country through an increase in the education level of the available workforce. ``(b) Program Authorized.-- ``(1) In general.--From the funds appropriated under subsection (g), the Secretary shall carry out a program of assuming the obligation to repay, pursuant to subsection (c), a loan made, insured, or guaranteed under this part or part D (excluding loans made under sections 428B and 428C, or comparable loans made under part D) for any borrower, who-- ``(A) obtains or has obtained a bachelor's or graduate degree from an institution of higher education; and ``(B) obtains employment in Indian country. ``(2) Award basis; priority.-- ``(A) Award basis.--Subject to subparagraph (B), loan repayment under this section shall be on a first-come, first-served basis, and subject to the availability of appropriations. ``(B) Priorities.--The Secretary shall, by regulation, establish a system for giving priority in providing loan repayment under this section to individuals based on the following factors: ``(i) The level of poverty in the locality within Indian country where the individual is employed. ``(ii) Whether the individual is an enrolled member of an Indian tribe.
``(iii) Whether such enrolled member is performing employment in the Indian country of the Indian tribe in which they are enrolled. ``(iv) The ratio of the individual's student loan debt to the individual's annual income. ``(v) Whether the individual's employer will provide an additional amount or a matching percentage for student loan repayment for the individual. ``(3) Outreach.--The Secretary shall post a notice on a Department Internet web site regarding the availability of loan repayment under this section, and shall notify institutions of higher education (including Tribal Colleges and Universities) and the Bureau of Indian Affairs regarding the availability of loan repayment under this section. ``(c) Qualified Loan Amounts.-- ``(1) Percentages.--Subject to paragraph (2), the Secretary shall assume or cancel the obligation to repay under this section-- ``(A) 15 percent of the amount of all loans made, insured, or guaranteed after the date of enactment of the Indian Country Educational Empowerment Zone Act to a student under part B or D, for each of the first and second years of employment in Indian country; ``(B) 20 percent of such total amount, for each of the third and fourth years of such employment; and ``(C) 30 percent of such total amount, for the fifth year of such employment. ``(2) Maximum.--The Secretary shall not repay or cancel under this section more than-- ``(A) for any student with a bachelor's degree, but without a graduate degree, $20,000 in the aggregate of loans made, insured, or guaranteed under parts B and D; and ``(B) for any student with a graduate degree, $20,000 of such loans for each year of employment. 
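The five-year schedule in subsection (c) can be made concrete with a short calculation. The sketch below is illustrative only and not part of the bill; the function name and the simplifying assumptions (a single fixed loan total, forgiven at the subsection (c)(1) rates and capped per subsection (c)(2)) are our own.

```python
# Illustrative sketch of the section 428L(c) forgiveness schedule.
# Hypothetical helper; the statute itself, not this code, governs.

YEARLY_RATES = [0.15, 0.15, 0.20, 0.20, 0.30]  # years 1-5 of employment

def forgiveness_schedule(loan_total, graduate=False):
    """Return the amount forgiven in each of five years of employment.

    loan_total: aggregate part B/D loans made after enactment.
    graduate:   True applies the per-year $20,000 cap of (c)(2)(B);
                False applies the $20,000 aggregate cap of (c)(2)(A).
    """
    forgiven = []
    aggregate = 0.0
    for rate in YEARLY_RATES:
        amount = rate * loan_total
        if graduate:
            amount = min(amount, 20_000)              # per-year cap
        else:
            amount = max(min(amount, 20_000 - aggregate), 0.0)  # aggregate cap
        aggregate += amount
        forgiven.append(amount)
    return forgiven
```

For example, a borrower with a bachelor's degree and $50,000 of qualifying loans would, under these assumptions, hit the $20,000 aggregate cap during the third year of employment, while the percentages themselves sum to 100 percent of the loan total over five years.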
``(3) Treatment of consolidation loans.--A loan amount for a loan made under section 428C may be a qualified loan amount for the purposes of this subsection only to the extent that such loan amount was used to repay a loan made, insured, or guaranteed under part B or D for a borrower who meets the requirements of subsection (b)(1), as determined in accordance with regulations prescribed by the Secretary. ``(d) Additional Requirements.-- ``(1) No refunding of previous payments.--Nothing in this section shall be construed to authorize the refunding of any repayment of a loan made under this part or part D. ``(2) Interest.--If a portion of a loan is repaid by the Secretary under this section for any year, the proportionate amount of interest on such loan which accrues for such year shall be repaid by the Secretary. ``(3) Double benefits prohibited.-- ``(A) Ineligibility of national service award recipients.--No student borrower may, for the same service, receive a benefit under both this section and subtitle D of title I of the National and Community Service Act of 1990 (42 U.S.C. 12601 et seq.). ``(B) Double forgiveness.--No student borrower may, for the same service, receive a benefit under both this section and section 428J, 428K, or 460 of this Act or section 108 of the Indian Health Care Improvement Act (25 U.S.C. 1616a). ``(4) Repayment to eligible lenders.--The Secretary shall pay to each eligible lender or holder for each fiscal year an amount equal to the aggregate amount of loans which are subject to repayment pursuant to this section for such year. ``(e) Application for Repayment.-- ``(1) In general.--Each eligible individual desiring loan repayment under this section shall submit a complete and accurate application to the Secretary at such time, in such manner, and containing such information as the Secretary may require. Such application shall contain verification from the employer of the employment in Indian country. 
``(2) Conditions.--An eligible individual may apply for loan repayment under this section after completing each year of employment in Indian country. The borrower shall receive forbearance while engaged in such employment unless the borrower is in deferment while so engaged. ``(f) Regulations.--The Secretary is authorized to issue such regulations as may be necessary to carry out the provisions of this section. ``(g) Authorization of Appropriations.--There are authorized to be appropriated to carry out this section $20,000,000 for fiscal year 2005, and such sums as may be necessary for each of the 4 succeeding fiscal years. ``(h) Definition of Indian Tribe.--In this section, the term `Indian tribe' means any Indian tribe, band, nation, or other organized group or community, including any Alaska Native village, which is recognized as eligible for the special programs and services provided by the United States to Indians because of their status as Indians.''.
Indian Country Educational Empowerment Zone Act - Amends the Higher Education Act of 1965 to authorize the Secretary of Education to carry out a program of repaying the student loans of any borrower who obtains employment in Indian country.
in this paper , we first study natural filtrations on the full theta lifts for any real reductive dual pairs . we will use these filtrations to calculate the associated cycles and therefore the associated varieties of harish - chandra modules of the indefinite orthogonal groups which are theta lifts of unitary lowest weight modules of the metaplectic double covers of the real symplectic groups . we will show that some of these representations are special unipotent and satisfy a @xmath0-type formula in a conjecture of vogan in @xcite . [ [ s11 ] ] let @xmath1 be a real symplectic space and @xmath2 be the metaplectic double cover of the symplectic group @xmath3 . for every subgroup @xmath4 of @xmath3 , we let @xmath5 denote its inverse image in @xmath2 . let @xmath6 be a real reductive dual pair in @xmath3 . we fix a maximal cartan subgroup @xmath7 of @xmath3 such that @xmath8 and @xmath9 are maximal compact subgroups of @xmath10 and @xmath11 respectively . let @xmath12 and @xmath13 denote the complexified cartan decompositions of the complex lie algebras of @xmath10 and @xmath11 respectively . let @xmath14 be an irreducible @xmath15-module . we will recall the definition of its full theta lift @xmath16 in . we suppose that @xmath16 is nonzero . it is a @xmath17-module of finite length . let @xmath18 be the set of nilpotent @xmath19-orbits in @xmath20 . similarly let @xmath21 be the set of nilpotent @xmath22-orbits in @xmath23 . we will review the definition of associated cycle @xmath24 of the harish - chandra module @xmath14 in section [ sec : def ] . this is a formal nonnegative integral sum of nilpotent @xmath22-orbits in @xmath23 . we will also recall the definition of theta lift of nilpotent orbits @xmath25 in section [ s14 ] . this in turn defines @xmath26 . this paper is motivated by @xcite , @xcite and @xcite . 
our hope is to prove that for the dual pair @xmath6 we have the identity @xmath27 for any irreducible harish-chandra module @xmath14 of @xmath11 such that @xmath16 is nonzero . this identity is certainly false in general . in a recent paper @xcite , the first two authors prove that holds for a type i reductive dual pair @xmath6 in the stable range where @xmath11 is the smaller member . the proof uses a natural filtration on @xmath16 and assumes some of its properties . the first objective of this paper is to provide the proofs of these properties . see section [ sec : fil ] . the second objective of this paper is to provide evidence that the identity extends beyond the stable range . we will work with the dual pair @xmath28 . using a more detailed analysis of the geometry of the null cones and moment maps , we are able to prove in this paper that for a certain range outside the stable range , continues to hold if @xmath14 is a unitary lowest weight module . see theorem [ thm : ll ] . this extends and gives a shorter proof of a main result of @xcite . we can also compute the associated cycles of certain @xmath16 when fails . see theorem [ tc ] . although we only work with the orthogonal-symplectic dual pairs , most of our results extend to the dual pairs @xmath29 and @xmath30 too . see section [ s7 ] . we hope that our investigation will shed light on how to extend in general beyond the stable range . for the rest of this section , we will describe and state our main theorems . first we briefly review the definitions of associated varieties , associated cycles , and other related invariants of a @xmath31-module . see section 2 in @xcite for details . let @xmath32 be a @xmath33-module of finite length and let @xmath34 be a good filtration of @xmath32 . then @xmath35 is a finitely generated @xmath36-module where @xmath37 is the symmetric algebra on @xmath38 and @xmath19 is a complexification of @xmath0 .
let @xmath39 be the associated @xmath19-equivariant coherent sheaf of @xmath40 on @xmath41 . the _associated variety_ of @xmath32 is defined to be @xmath42 in @xmath20 . its dimension is called the _gelfand-kirillov dimension_ of @xmath32 . let @xmath43 be the nilpotent cone in @xmath20 . alternatively , we may identify @xmath44 using the killing form and @xmath45 is defined as the subset of @xmath20 which corresponds to the set of nilpotent elements @xmath46 in @xmath38 . it is well known that @xmath47 is a closed @xmath19-invariant subset of @xmath45 . let @xmath48 such that @xmath49 are distinct open @xmath19-orbits in @xmath47 . by lemma 2.11 in @xcite , there is a finite @xmath36-invariant filtration @xmath50 of @xmath39 such that @xmath51 is generically reduced on each @xmath52 . for a closed point @xmath53 , let @xmath54 be the natural inclusion map and let @xmath55 be the stabilizer of @xmath56 in @xmath19 . now @xmath57 is a finite dimensional rational representation of @xmath58 . we call @xmath59 an _isotropy representation_ of @xmath32 at @xmath56 . its image @xmath60 in the grothendieck group of finite dimensional rational @xmath61-modules is called the _isotropy character_ of @xmath32 at @xmath56 . the isotropy representation depends on the filtration . on the other hand the isotropy character is an invariant , i.e. it is independent of the filtration . we define the _multiplicity of @xmath32 at @xmath49_ to be @xmath62 and the _associated cycle_ of @xmath32 to be @xmath63 . in , we will study the filtrations of local theta lifts generated by the joint harmonics . let @xmath6 be a real reductive dual pair in @xmath3 . we recall some basic facts of theta correspondences . let @xmath64 denote the complex lie algebra of @xmath3 and let @xmath65 be the fock model ( i.e. @xmath66-module ) of the oscillator representation . let @xmath14 be a genuine @xmath67-module .
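The inline formulas in the definitions just reviewed are lost to placeholder markers. As a reading aid, the associated variety and associated cycle can be restated in standard notation; the symbols below are our assumption of the usual conventions, not a reconstruction of the paper's exact notation.

```latex
% X: a (\mathfrak{g},K)-module of finite length with a good filtration,
% \mathcal{M}: the associated K_{\mathbb{C}}-equivariant coherent sheaf on \mathfrak{p}^{*}.
\mathrm{AV}(X) \;=\; \operatorname{Supp}\mathcal{M}
  \;=\; \overline{\mathcal{O}}_{1} \cup \dots \cup \overline{\mathcal{O}}_{r},
\qquad
\mathrm{AC}(X) \;=\; \sum_{i=1}^{r} m_{\mathcal{O}_{i}}(X)\,\bigl[\,\overline{\mathcal{O}}_{i}\,\bigr],
```

where the \(\mathcal{O}_{i}\) are the distinct open \(K_{\mathbb{C}}\)-orbits in \(\mathrm{AV}(X)\) and the multiplicity \(m_{\mathcal{O}_{i}}(X)\) is the dimension of the isotropy character at a point of \(\mathcal{O}_{i}\).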
by @xcite , @xmath68 where @xmath16 is a @xmath17-module called the _full (local) theta lift_ of @xmath14 . theorem 2.1 in @xcite states that if @xmath69 , then @xmath16 is a @xmath17-module of finite length with an infinitesimal character and it has a unique irreducible quotient @xmath70 called the _(local) theta lift_ of @xmath14 . we set @xmath71 if @xmath72 . let @xmath73 denote the set of irreducible @xmath74-modules such that @xmath75 . then @xmath76 is a bijection from @xmath73 to @xmath77 . similarly , we can define the theta lifting from @xmath78 to @xmath79 . we recall the complexified cartan decompositions @xmath80 and @xmath13 . there are two moment maps @xmath81_{\psi } w \ar[r]^{\phi } & \fpp^*.}\ ] ] we also recall the set of nilpotent elements @xmath45 in @xmath20 and the set of nilpotent @xmath19-orbits @xmath18 in @xmath20 . similarly we have @xmath82 and @xmath83 . for a @xmath22-invariant closed subset @xmath84 of @xmath85 , we define the _theta lift of @xmath84_ to be @xmath86 it is a @xmath19-invariant closed subset of @xmath20 . if @xmath87 , then @xmath88 . if @xmath89 is the closure of a @xmath22-orbit @xmath90 and @xmath91 is the closure of a @xmath19-orbit @xmath92 , then we denote @xmath92 by @xmath93 . conversely for a closed @xmath19-invariant subset @xmath94 of @xmath45 , we define @xmath95 which is a closed @xmath22-invariant subset of @xmath82 . when @xmath96 , we define @xmath97 . we extend theta lifts of nilpotent orbits to cycles linearly . more precisely , we define @xmath98 ) = \sum_{j } m_j[\overline{\theta(\co'_j)}]$ ] if every @xmath99 admits a theta lift . [ [ s15 ] ] from section [ s2 ] onwards , we specialize to the dual pair @xmath100 in @xmath101 . choosing a maximal compact subgroup @xmath7 as in , we set @xmath102 and @xmath103 to be the maximal compact subgroups of @xmath10 and @xmath11 respectively . we also denote @xmath104 by @xmath105 .
a harish-chandra module of @xmath106 is called _genuine_ if it does not descend to a harish-chandra module of @xmath10 . we will introduce some genuine harish-chandra modules in this paper . @xmath107 : let @xmath108 be the genuine one-dimensional character of @xmath109 such that @xmath110 is trivial . it exists if and only if @xmath111 is a split double cover of @xmath11 , i.e. @xmath112 is even . we say that the dual pair @xmath113 is in the _stable range_ if : @xmath114 let @xmath115 . then , in the stable range , it is a nonzero unitarizable genuine harish-chandra module of @xmath116 ( cf. @xcite ) . @xmath117 : [ item : def.l ] let @xmath118 be a genuine @xmath119-module . let @xmath120 be the @xmath15-module , which is the theta lift of @xmath118 . it is well known that @xmath117 is a unitary lowest weight module and it is also the full theta lift of @xmath118 . here @xmath121 denotes the lowest @xmath122-type . @xmath123 : we suppose that @xmath124 is an even integer . then the double covers @xmath111 for the dual pairs @xmath125 and @xmath126 are isomorphic . let @xmath123 be the @xmath127-module which is the theta lift of @xmath117 . [ [ sec : lift.char ] ] before we state our main results , we have to describe the theta lifts of certain orbits . since all groups appearing here are classical , we will use signed young diagrams to parametrize nilpotent @xmath19-orbits in @xmath18 . more precisely , let @xmath128 denote the real lie algebra of a classical group @xmath10 and let @xmath129 denote the set of nilpotent @xmath10-orbits in @xmath128 . then @xmath129 is parametrized by signed young diagrams or signed partitions ( cf. section 9.3 of @xcite ) . as before we identify @xmath44 using the killing form . then the kostant-sekiguchi correspondence identifies @xmath129 with the set of nilpotent @xmath19-orbits @xmath130 ( cf. theorem 9.5.1 of @xcite ) . @xmath131 : we consider the compact dual pair @xmath132 .
let @xmath133 denote the zero orbit of @xmath134 . let @xmath135 where @xmath136 . here @xmath131 only depends on @xmath137 . indeed let @xmath138 denote the complexified cartan decomposition for the lie algebra of the hermitian symmetric group @xmath11 . then @xmath131 is the @xmath139-orbit in @xmath140 generated by a sum of @xmath137 strongly orthogonal non-compact long roots in @xmath140 . in terms of partitions , we have @xmath141 @xmath142 : [ item : o.b ] suppose @xmath143 . let @xmath144 . by @xcite or @xcite , @xmath145 is the closure of a single @xmath19-orbit @xmath142 . in terms of partitions , @xmath146 note that the young diagram of @xmath142 is obtained by adding a column to the left of the young diagram of @xmath147 . @xmath148 : suppose @xmath149 in the definition of @xmath142 , we set @xmath150 in @xcite , the above orbits are denoted by @xmath151 and @xmath152 . we first recover the following theorem which is known to the experts . [ thm : acchar ] let @xmath153 be the dual pair in the stable range defined by , with @xmath112 an even integer . let @xmath108 be the genuine unitary character of @xmath111 . let @xmath154 as in . then @xmath155 and @xmath156 . let @xmath157 and let @xmath158 be the stabilizer of @xmath159 in @xmath19 . then @xmath160 here @xmath161 denotes the maximal parabolic subgroup of @xmath162 which stabilizes an @xmath163-dimensional isotropic subspace of @xmath164 and @xmath165 denotes the quotient map . let @xmath166 denote the character @xmath167 . then the isotropy representation of @xmath168 at @xmath159 is @xmath169 where @xmath170 is the minimal @xmath7-type of the oscillator representation @xmath65 ( see ) . moreover , we have @xmath171 part ( i ) is a result of @xcite . if @xmath172 , then it is also a result of @xcite . equation is part of yang 's thesis @xcite . our contribution is to recognize and construct a natural @xmath0-equivariant invertible sheaf on @xmath173 .
we add that such a sheaf is implicit in @xcite . the merit of this is that we could now bypass the @xmath0-type or hilbert polynomial computations in the previous work alluded to above and prove ( i ) and ( iii ) more conceptually and efficiently . we remark that yamashita has used the concept of coherent sheaves to compute the associated cycles of unitary lowest weight modules @xcite . most of is a special case of @xcite . one main reason for introducing theta lifts of unitary characters is that @xmath123 could be obtained by taking covariants of @xmath174 ( see ) . this will allow us to compute the associated cycles and associated varieties of @xmath123 . [ [ section ] ] we assume that @xmath175 is in the stable range and @xmath124 is an even integer . equivalently we have @xmath176 let @xmath177 denote the unitary lowest weight module of @xmath111 which is the theta lift of @xmath178 as in . we denote the dual harish-chandra module of @xmath179 by @xmath180 . by @xcite , @xmath181 where @xmath182 . in order to describe the associated cycles of @xmath123 , we divide into two cases : when @xmath183 , we will denote this as case i. when @xmath184 , we will denote this as case ii . the geometries of the moment maps in these two cases differ significantly . [ [ section-1 ] ] first we describe the main result for case i. [ thm : ll ] suppose that @xmath175 is in the stable range satisfying and @xmath183 . let @xmath185 be a nonzero irreducible unitarizable lowest weight module of @xmath111 . a. the theta lift @xmath123 is nonzero . b. let @xmath136 . then @xmath186 c. we fix a closed point @xmath187 and a closed point @xmath188 . let @xmath189 be the isotropy representation of @xmath190 at @xmath188 ( see ) . then there is a map @xmath191 such that the isotropy representation of @xmath192 at @xmath159 is @xmath193 d.
we have @xmath194 = ( \dim\tchi_{x ' } ) [ \overline{\theta(\co_d ) } ] = \theta({\mathrm{ac}}(l(\mu')^*)) . the proof of the above theorem is given in . part ( iv ) extends the main result in @xcite where @xmath195 is considered . [ [ section-2 ] ] next we describe the main result for case ii . [ tc ] suppose that @xmath175 is in the stable range satisfying and @xmath184 . let @xmath185 be an irreducible unitarizable lowest weight module of @xmath111 . we fix a closed point @xmath187 and a closed point @xmath188 where @xmath136 . a. let @xmath196 be the levi part of a levi decomposition of @xmath197 , then @xmath198 . b. let @xmath199 be the isotropy representation of @xmath200 at @xmath201 . then @xmath202 c. the lift @xmath123 is nonzero if and only if @xmath199 is nonzero . d. if @xmath203 , then @xmath204 and @xmath205 . the proof of the above theorem is given in . in general @xmath206 is not equal to the dimension of the isotropy character @xmath189 of @xmath207 so is usually invalid for case ii . [ [ section-3 ] ] let @xmath208 denote the complexified cartan decomposition . the inclusion @xmath209 induces a projection map @xmath210 . by , @xmath123 occurs discretely as a submodule in @xmath174 . a general theory of @xcite gives @xmath211 . for the representations considered in theorems [ thm : ll ] and [ tc ] , the containment is in fact an equality . see lemma [ lem : orbit.o1 ] . [ [ section-4 ] ] in both cases i and ii above , we have the following theorem on the @xmath0-spectrum of @xmath123 . [ thm : ks ] suppose @xmath212 satisfies and @xmath213 , then @xmath214 the proof is given in section [ sec : kspec ] and it is a consequence of . motivated by geometric quantization of orbits , vogan defined an _admissible_ isotropy representation in definition 7.13 in @xcite . it is not difficult to see that @xmath123 for @xmath215 and @xmath216 has an admissible isotropy representation .
then by theorems [ thm : acchar](iii ) and [ thm : ks ] , these representations satisfy vogan 's conjecture 12.1 in @xcite . such modules are candidates for the conjectured unipotent @xmath217-modules attached to the orbits @xmath142 . in , we will show that @xmath123 is a special unipotent representation in the sense of @xcite . a first draft of this paper was written before @xcite . the current paper is a major revision where we incorporate ideas from @xcite , bypass the @xmath0-type and asymptotic calculations , and give more geometric and conceptual proofs of theorems [ thm : acchar ] to [ thm : ks ] above . in this paper , all varieties and schemes are defined over @xmath218 . we will denote the ring of regular functions on a variety or scheme @xmath219 by @xmath220 . for a real lie group @xmath0 , its complexification is denoted by @xmath19 . the first author is supported by a national university of singapore grant r-146 - 000 - 131 - 112 . we thank k. nishiyama for his comments on associated cycles . in this section we will construct good filtrations using local theta correspondences . unless otherwise stated , all lie algebras are complex lie algebras . we let @xmath6 be an arbitrary reductive dual pair in @xmath3 . we do not assume that they are in the stable range . let @xmath0 and @xmath221 denote the maximal compact subgroups of @xmath10 and @xmath11 respectively . we set @xmath222 and @xmath223 to be the complexified cartan decompositions of the lie algebras of @xmath10 and @xmath11 respectively . a tilde above a group will denote an appropriate double cover which is usually clear from the context . first we review section 3.3 in nishiyama and zhu @xcite . we recall that the fock model of the oscillator representation of @xmath2 is realized on the fock space @xmath224 of complex polynomials on the complex vector space @xmath225 . we follow howe 's notation @xcite about diamond dual pairs . let @xmath226 be the dual pair in @xmath227 .
let @xmath228 denote the cartan decomposition of the complexified lie algebra of @xmath229 such that @xmath230 acts by multiplication by @xmath221-invariant quadratic polynomials on @xmath225 and @xmath231 acts by degree two @xmath221-invariant differential operators . fact 3 in howe 's paper @xcite states that in @xmath232 , we have @xmath233 the projection of @xmath234 to @xmath230 under the decomposition of the left hand side of is a @xmath0-isomorphism . we identify @xmath234 as @xmath230 via this projection . we also have the compact dual pair @xmath235 in @xmath3 . in a similar fashion , we have a cartan decomposition @xmath236 where @xmath237 is the non-compact part of @xmath238 . let @xmath239 be the subspace of complex polynomials in @xmath240 of degree not greater than @xmath241 . then @xmath242 and gives a filtration of the fock model @xmath65 . let @xmath243 be the full theta lift of an irreducible @xmath15-module @xmath244 . let @xmath245 be the natural quotient map . let @xmath246 be a lowest degree @xmath247-type of @xmath248 of degree @xmath249 . let @xmath250 be the image of the joint harmonics . we define filtrations on @xmath251 and @xmath252 by @xmath253 and @xmath254 respectively . the filtration @xmath255 is a good filtration of @xmath252 since @xmath14 is irreducible , and @xmath256 is a good filtration of @xmath251 because @xmath257 due to @xcite . we view @xmath258 . let @xmath259 be an irreducible @xmath122-submodule with type @xmath260 which pairs perfectly with @xmath261 . by theorem 13(5) in @xcite , the lowest degree @xmath122-type @xmath261 has multiplicity one in @xmath14 . hence @xmath261 and @xmath262 are well-defined subspaces in @xmath252 and @xmath263 respectively . we define a good filtration @xmath264 on @xmath263 . let @xmath265 be a nonzero linear functional on @xmath252 . we consider the composite map @xmath266^<>(.5){\pi } & v_{\rho } \otimes v_{\rho ' } \ar@{->>}[r]^<>(.5){{{\mathrm{id}}}\otimes l } & v_{\rho}.
} \ ] we define @xmath267 . since @xmath268 , we also have @xmath269 . [ l41 ] we have @xmath270 and @xmath271 . the proof is given in . we remark that the first equality of the above lemma was proved in @xcite . the second equality was assumed without proof in that paper . suggests that @xmath272 is a natural choice of filtration . we define @xmath273 to be the corresponding graded module of @xmath274 . [ [ section-5 ] ] for any @xmath275-module @xmath276 and @xmath277 , we set @xmath278 and @xmath279 where @xmath280 and @xmath281 the next proposition gives another realization of @xmath282 which is crucial for us . [ prop : theta ] we have @xmath283 since @xmath221 is compact , the last equality follows if we identify the @xmath221-invariant quotient with the @xmath221-invariant subspace . now we prove the first identity . let @xmath284 as in . we have @xmath285 the last equality in follows from the fact that @xmath65 is @xmath122-finite and @xmath286 . starting from , we reverse the steps by replacing @xmath65 with @xmath287 in and we get @xmath288 this proves the proposition . [ [ section-6 ] ] let @xmath289 . we summarize in the following diagram @xmath290^{\eta } \ar@{->>}[r]^<>(.5){{{\mathrm{pr}}}_{\fpp ' } } & ( v_{\rho'^ * } \otimes \sy)_{\fpp ' } \ar@{->>}[r]^<>(.5){{{\mathrm{pr}}}_{k ' } } & \left((v_{\rho'^ * } \otimes \sy)_{\fpp'}\right)^{k ' } = v_\rho . } \ ] ] where @xmath291 is the projection map to the @xmath292-coinvariant quotient space , @xmath293 is the projection map to the @xmath221-invariant subspace and @xmath294 is the natural quotient map . we define a filtration on @xmath295 by @xmath296 and a filtration on @xmath251 by @xmath297 . [ l33 ] the filtrations @xmath298 and @xmath299 on @xmath300 are the same . since @xmath301 , we have @xmath302 . hence @xmath303 . on the other hand , for @xmath304 , @xmath305 by , @xmath306 and this proves the lemma .
taking the graded module , @xmath307 induces a map @xmath308 & { { \mathrm{gr}\,}}({{\mathrm{pr}}}_\fpp({{\mathbf{e } } } ) ) \ar@{->>}[r]^<>(.5){{{\mathrm{gr}\,}}{{\mathrm{pr}}}_k } & { { \mathrm{gr}\,}}\rho.}\ ] ] we will study more thoroughly in @xcite . [ [ s24 ] ] we recall that the unitary group @xmath309 is a maximal compact subgroup of @xmath3 . let @xmath310 denote the complexified cartan decomposition . let @xmath170 be the minimal one dimensional @xmath311-type of the fock model @xmath65 . for @xmath312 , @xmath313 is equal to the determinant of the @xmath314 action on @xmath225 . we extend @xmath170 to an @xmath315-module where @xmath316 acts trivially . we will continue to denote this one dimensional module by @xmath170 . in this way , @xmath317 $ ] where @xmath7 acts geometrically on @xmath240 $ ] . since @xmath6 is a reductive dual pair in @xmath3 , we denote the restriction of @xmath170 as a @xmath318-module by @xmath319 . similarly we get a one dimensional @xmath320-module @xmath321 . let @xmath322 and @xmath323 . since @xmath14 is a genuine harish - chandra module of @xmath324 , @xmath325 is an @xmath326-module . similarly @xmath327 is an @xmath36-module . we take @xmath170 into account and the fact that @xmath221 acts on @xmath328 $ ] reductively and preserves the degrees . then gives the following @xmath36-module morphisms @xmath329 \ar@{->>}[r]^<>(.5){\eta_1 } & \left(a \otimes_{{{\mathcal{s}}}(\fpp ' ) } { { \mathbb c}}[w ] \right)^{k'_{{\mathbb c } } } \ar@{->>}[r]^<>(.5){\eta_0 } & b. } \ ] ] the merit of introducing @xmath170 is that the @xmath330 action on @xmath331 descends to a geometric @xmath332 action on @xmath240 $ ] . [ l3 ] suppose @xmath6 is in the stable range where @xmath11 is the smaller member . then @xmath333)^{k'_{{\mathbb c } } } \stackrel{\sim}{\rightarrow } b$ ] in is an isomorphism of @xmath334-modules . we refer to @xcite for a proof . throughout this section , we let @xmath335 be a dual pair in stable range ( see ) . 
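for orientation , the stable range condition just invoked can be illustrated on a classical pair ; this is a generic instance in our own notation , not a claim about the specific masked groups :

```latex
% Generic illustration (our notation): for the real reductive dual pair
%     (G, G') = (O(p,q), Sp(2n,\mathbb{R}))
% acting on W = \mathbb{R}^{p+q} \otimes \mathbb{R}^{2n}, the pair is in
% the stable range with O(p,q) the smaller member when
n \;\geq\; p + q,
% i.e. the rank of the larger member is at least the dimension of the
% defining module of the smaller member.
```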
we consider the theta lift @xmath336 of the genuine unitary character @xmath108 of @xmath111 . we will discuss the associated cycle and the isotropy representation of @xmath336 . [ [ section-7 ] ] let @xmath108 be the genuine unitary character of @xmath111 . it exists if and only if the double cover @xmath111 splits over @xmath11 , i.e. @xmath112 is even . first we recall some facts in @xcite about the local theta lift of @xmath108 to @xmath106 . also see @xcite when @xmath337 . let @xmath338 denote the complexified lie algebra of @xmath10 . let @xmath339 be the maximal compact subgroup of @xmath10 . [ p32 ] suppose @xmath125 is in stable range where @xmath11 is the smaller member satisfying and @xmath112 is even . let @xmath108 be the genuine character of @xmath111 . then @xmath340 is a nonzero , irreducible and unitarizable @xmath17-module . in particular @xmath341 . the fact that @xmath340 is irreducible follows from @xcite or @xcite . it is also a special case of theorem a in @xcite . the fact that @xmath336 is unitarizable follows from @xcite . the @xmath247-types of @xmath342 are well known . for example see @xcite and @xcite . it is also a special case of propositions 2.2 and 3.2 in @xcite . we state it as a proposition below . [ t22 ] let @xmath343 denote the @xmath247-harmonics in the fock model @xmath65 for the dual pair @xmath6 in stable range . then as @xmath247-modules , @xmath344 [ [ section-8 ] ] we refer to . if we set @xmath345 to be the genuine character of @xmath111 , then @xmath346 is a one dimensional trivial @xmath347-module . as a representation of @xmath348 , @xmath349 . let @xmath350 . we note that in this special case , is a direct consequence of . we recall the moment maps @xmath351 and @xmath352 in . we set @xmath353 and it is called the _ null cone _ in @xmath225 . let @xmath354 be the @xmath19-orbit in . then @xmath355 is an open @xmath19-orbit in @xmath356 which we call the open null cone .
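the null cone just defined can be pictured schematically in matrix coordinates ; the identification below is our own generic illustration ( cf. the matrix realizations later in the paper ) , not the paper's masked notation :

```latex
% Schematic picture (our notation): identify W with m-by-n complex
% matrices and take the two moment maps
%     \phi(x)  = x\,x^{t} , \qquad \phi'(x) = x^{t}x .
% The null cone is the zero fiber of the moment map for the other
% member of the pair:
\mathcal{N} \;=\; \phi'^{-1}(0)
            \;=\; \{\, x \in M_{m,n}(\mathbb{C}) \;:\; x^{t}x = 0 \,\},
% and \phi(\mathcal{N}) is a union of closures of nilpotent orbits.
```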
furthermore , @xmath357 $ ] is precisely the radical ideal @xmath358 of @xmath356 in @xmath240 $ ] ( see @xcite ) . let @xmath359 be the radical ideal of @xmath360 in @xmath37 . we state a corollary of proposition [ l3 ] . [ cor : iso.c ] we have an @xmath36-module isomorphism @xmath361 \otimes a)^{k'_{{\mathbb c}}}\cong b.\ ] ] furthermore @xmath362 and @xmath327 is a @xmath363,k_{{\mathbb c}})$]-module . if we regard @xmath325 as the trivial @xmath364-module , then @xmath365 = { { \mathbb c}}[w]/i(\bcn ) = $ ] @xmath240 \otimes_{{{\mathcal{s}}}({{\mathfrak p } } ' ) } a$ ] . the identity in the corollary follows from proposition [ l3 ] . we have @xmath366 so @xmath362 . [ prop : iso.cn ] the natural inclusion @xmath365\to { { \mathbb c}}[\cn]$ ] induces an isomorphism of @xmath367-modules @xmath368 \otimes a)^{k'_{{\mathbb c}}}.\ ] ] it suffices to prove that @xmath369 \otimes a)^{k'_{{\mathbb c}}}\cong ( { { \mathbb c}}[\cn ] \otimes a)^{k'_{{\mathbb c}}}$ ] as admissible @xmath19-modules . this is verified in @xcite . we remark that if @xmath370 , then the above lemma follows immediately from the fact that @xmath365 \cong { { \mathbb c}}[\cn]$ ] . indeed , in these cases , @xmath356 is a normal variety by @xcite and @xmath371 has codimension at least @xmath372 in @xmath356 . [ [ section-9 ] ] let @xmath373 be the coherent sheaf associated to the module @xmath327 on @xmath20 . clearly @xmath374 . in order to calculate the isotropy representation of @xmath375 , we first recall a special case of corollary a.5 in @xcite . fix a @xmath376 such that @xmath377 . let @xmath61 be the stabilizer of @xmath159 in @xmath19 . then the fiber @xmath378 in @xmath356 is a single @xmath22-orbit where @xmath379 acts freely . moreover , there is a ( unique ) surjective homomorphism @xmath380 such that @xmath381 for @xmath382 , @xmath383 is the unique element in @xmath22 such that @xmath384 . the above definitions are summarized in the diagram below . 
@xmath385&f_x \ar@{^(->}[r]\ar[d ] & \cn \ar[d]\ar@{^(->}[r]&\bcn \ar[d]^{\phi|_{\bcn}}\ar@{^(->}[r ] & w\ar[d]^{\phi}\\ & \set { x } \ar@{^(->}[r ] & \co \ar@{^(->}[r]^{i_{\co } } & \overline{\co}\ar@{^(->}[r ] & \fpp^ * } \ ] ] [ l46 ] we assume the notation in the above diagram . we also recall the one dimensional character @xmath386 of @xmath22 . then we have @xmath387 as a one dimensional character of @xmath61 . therefore the isotropy representation @xmath388 of @xmath375 at @xmath159 is @xmath389 let @xmath390 denote the maximal ideal of @xmath159 . since @xmath159 generates an open dense orbit @xmath92 in @xmath391 and @xmath392 is surjective , the scheme theoretic fiber @xmath393/\ci_x{{\mathbb c}}[\bcn]\right)$ ] is reduced and equal to @xmath394 . then the isotropy representation @xmath395\otimes \sigma'\right)^{k'_{{\mathbb c}}}/\left(\ci_x \left({{\mathbb c}}[\bcn]\otimes a\right)^{k'_{{\mathbb c}}}\right ) \\ & = & \left({{\mathbb c}}[\bcn]/\ci_x { { \mathbb c}}[\bcn ] \otimes a\right)^{k'_{{\mathbb c } } } \quad \text{(by the exactness of taking $ k_{{\mathbb c}}$-invariants)}\\ & = & \left({{\mathbb c}}[f_x]\otimes \sigma'^*\right)^{k'_{{\mathbb c}}}\\ & = & \left({{\mathrm{ind}}}_{k_x\times_\alpha k'_{{\mathbb c}}}^{k_x\times k'_{{\mathbb c } } } { { \mathbb c}}\otimes a\right)^{k'_{{\mathbb c } } } \quad \text{(because $ f_x = ( k_x\times k'_{{\mathbb c}})/s_w$)}\\ & \cong & a\circ \alpha.\end{aligned}\ ] ] by , @xmath396 . by , @xmath397 so @xmath398 . hence @xmath399 . by definition @xmath400 and @xmath401 $ ] . let @xmath402 be the locally free coherent sheaf on @xmath403 associated with the @xmath61-module @xmath404 . by @xcite there is an equivalence of categories between certain @xmath19-equivariant quasi - coherent sheaves and rational @xmath61-modules . applying this to gives @xmath405 . in particular @xmath406 . it remains to show that @xmath407 is an isomorphism . we recall that @xmath408 .
then @xmath409 \otimes a)^{k'_{{\mathbb c } } } = \sb(\co)$ ] by . this completes the proof of the theorem . throughout this section , we let @xmath6 be the dual pair @xmath410 as before . we will discuss some unitary lowest weight modules of @xmath111 and their theta lifts to @xmath106 . [ [ s34a ] ] the group @xmath11 is hermitian symmetric so it has a complexified cartan decomposition @xmath411 and @xmath412 where @xmath413 are @xmath19-invariant abelian lie subalgebras of @xmath414 . let @xmath415 . let @xmath416 be the fock model of the oscillator representation for the compact dual pair @xmath417 . let @xmath418 . we define @xmath419 and @xmath420 . here @xmath117 is a lowest weight module with lowest @xmath122-type @xmath121 , and @xmath190 is its contragredient module . it is a well known result of @xcite and @xcite that all unitarizable lowest weight modules of @xmath11 up to unitary characters are obtained from compact dual pair correspondences . let @xmath421 be the complex space with hermitian form compatible with @xmath422 . its fock model is @xmath423 \cong \sy_2^*$ ] . the @xmath424 action on @xmath425 is by multiplying degree two @xmath426-invariant polynomials . it gives an algebra homomorphism @xmath427^{k^t}$ ] which in turn defines the moment map @xmath428 in , we have a filtration on @xmath190 which gives a graded module @xmath429 . despite the fact that the dual pair is not in stable range , it is well known that extends to the graded module ( for example see @xcite and @xcite ) and we have @xmath430\otimes \tau)^{k^t}\ ] ] where @xmath431 is the minimal @xmath432-type of @xmath425 and @xmath433 we recall @xmath434 where @xmath435 . hence the graded module @xmath436 is a finitely generated @xmath437$]-module . let @xmath438 . we consider the following diagram @xmath439\ar[d ] & w_2 \ar[d]\ar[dr]^{{\psi_2}}\\ \set { x ' } \ar[r]^{i_{x ' } } & \bcop \ar[r ] & ( \fpp'^-)^*.
} \ ] ] let @xmath440 be the coherent sheaf on @xmath441 associated with the module @xmath436 . let @xmath442 be the stabilizer of @xmath443 in @xmath22 . then the isotropy representations of @xmath440 and @xmath429 at @xmath443 are @xmath444\otimes \tau)^{k^t } \quad \text{and } \quad \tchi_{x ' } = \varsigma_2|_{\wtk ' } \otimes \chi_{x'}\ ] ] respectively . the representation @xmath445 is calculated in @xcite . we state the result for the pair @xmath446 . [ t52 ] a. the module @xmath177 is nonzero if and only if @xmath447 is nonzero . b. suppose @xmath448 . then @xmath449 where @xmath450 is a unipotent subgroup . the isotropy representation is @xmath451 as an @xmath452-module and the other subgroups of @xmath442 act trivially on it . suppose @xmath453 , then we have @xmath454 . the isotropy representation is @xmath455 as an @xmath456-module . the next theorem follows immediately from the definitions in and the above theorem on the isotropy representations . we have @xmath457 and @xmath458 = \begin{cases } ( \dim_{{{\mathbb c } } } \mu ) \ , [ \bcop_{t } ] & \text{if } t \leq n\\ ( \dim_{{{\mathbb c } } } ( \varsigma_2|_{\wtk^t}\otimes \mu)^{{{\mathrm{o}}}(t - n ) } ) \ , [ \bcop_n ] & \text{if } t > n. \end{cases } \qed\ ] ] we consider the dual pairs @xmath459 and @xmath460 . let @xmath461 . for reasons which will be clear later ( see ( ii ) ) , we assume . then @xmath462 contains @xmath463 . we note that @xmath464 is in the stable range but @xmath6 could be outside the stable range . let @xmath465 denote a maximal compact subgroup of @xmath466 compatible with @xmath10 and @xmath467 . let @xmath468 ( resp . @xmath65 , @xmath416 ) be the fock model of the oscillator representation associated to the dual pair @xmath469 ( resp . @xmath470 , @xmath471 ) where @xmath472 . then as an infinitesimal module of @xmath473 , @xmath474 we note that @xmath475 is a split double cover of @xmath11 .
on the other hand @xmath476 and it is a split double cover of @xmath11 if and only if @xmath477 is even . without fear of confusion , we will denote all three @xmath478 s by @xmath111 . let @xmath479 and @xmath480 be the theta lifting maps . for @xmath481 , we set @xmath177 to be the unitary lowest weight module as defined in . let @xmath482 denote the complex lie algebra of @xmath483 . [ p24 ] a. for @xmath484 , we have @xmath485 as ( possibly zero ) @xmath486-modules . b. suppose @xmath212 satisfies so that @xmath487 is a nonzero and unitarizable @xmath488-module by proposition [ p32 ] . if @xmath489 is nonzero , then it is a unitarizable and irreducible @xmath490-module . in particular @xmath491 part ( i ) is proved in @xcite as a consequence of a see - saw pair argument . in ( ii ) , @xmath492 is unitarizable because it is a submodule of the unitarizable module @xmath493 . in particular , @xmath492 is a direct sum of its irreducible submodules . on the other hand @xmath492 is a full theta lift so it has a unique irreducible quotient module . this proves that @xmath492 is irreducible . in propositions [ t22 ] and [ p24 ] , we have assumed and . one could easily extend the definitions of @xmath340 and @xmath492 beyond these assumptions . equation continues to hold . however , both @xmath340 and @xmath492 are not necessarily nonzero or irreducible . we will briefly discuss below . we will denote @xmath174 by @xmath494 and @xmath123 by @xmath495 . we refer the reader to @xcite and @xcite for more details . outside the stable range , @xmath496 is nonzero if and only if one of the following situations holds : a. we have @xmath497 . if @xmath498 , then @xmath499 which is in the stable range . by , if @xmath500 , then @xmath501 . if @xmath502 , then @xmath503 is finite dimensional and its associated variety is the zero orbit . b. we have @xmath504 and @xmath505 .
let @xmath506 denote the one dimensional character of @xmath507 which is @xmath508 on @xmath104 and trivial on @xmath509 . then @xmath510 and we are back to the stable range . by , @xmath511 . finally in case ( a ) , it is possible that @xmath512 but @xmath513 . most of these lifts @xmath495 s are non - unitarizable . this situation arises because the maximal howe quotient @xmath340 is reducible . it is possible to analyze the @xmath0-types of @xmath340 as in @xcite and compute its associated cycles . this is tedious so we will omit this case . [ [ section-10 ] ] in this section , we assume the notation in : @xmath514 , @xmath515 contains @xmath463 and @xmath212 satisfies . we pick a @xmath484 and we let @xmath177 be the lowest weight @xmath67-module . by , @xmath123 is the full theta lift . in lemma [ l33 ] , we define the graded module @xmath516 via a natural filtration @xmath517 on @xmath518 where @xmath519 is the lowest degree @xmath247-type . similarly we define the graded module @xmath520 via a natural filtration @xmath521 on @xmath522 where @xmath523 is the lowest degree @xmath524-type . the following lemma is a commutative version of . [ lem : res.c ] as @xmath525-modules , @xmath526 the proof is given in . we will describe some moment maps . these maps are given explicitly in terms of complex matrices in . with reference to , we denote the following moment maps with respect to the dual pairs : @xmath527 we have the decomposition @xmath528 , @xmath412 and @xmath529 . the containment @xmath530 gives a decomposition @xmath531 . if we replace @xmath10 by @xmath462 in the above table , then we have the moment maps @xmath532 where @xmath533 is the moment map for the pair @xmath534 . with respect to the dual pair @xmath535 , we have @xmath536 finally we get @xmath537 . on the other side of , we have @xmath538 and @xmath539 . let @xmath540 be the natural projection and @xmath541 be the projection induced from @xmath542 .
they form a commutative diagram : @xmath543[r]{$w_h = $ } w \oplus w_2 \ar[r]^{\ \ \ \ { { \mathrm{pr } } } } \ar[d]_{\phi_h } & w \ar[d]^{\phi } \\ \fpp_h^ * \ar[r]_{{{\mathrm{pr}}}_h } & \fpp^*.}\ ] ] for the rest of this paper , we refer to the orbits in and we set @xmath544 in @xmath545 , @xmath546 in @xmath547 , @xmath548 in @xmath20 and @xmath549 to be the null cone corresponding to the dual pair @xmath464 . [ lem : orbit.o1 ] we have @xmath550 which is the zariski closure of the @xmath19-orbit @xmath92 . we have @xmath551 . by , @xmath552 so it suffices to show that @xmath553 . let @xmath554 . then @xmath555 if and only if @xmath556 . hence @xmath557 if and only if @xmath558 and @xmath559 . since @xmath560 , we have @xmath561 as required . this proves the lemma . [ [ section-11 ] ] let @xmath562 denote the lowest @xmath563-type of @xmath468 . we recall @xmath564 . let @xmath565 . we set @xmath566 as in . then by , we have @xmath567 since @xmath568 is a @xmath569,k_h)$]-module , applying to shows that @xmath327 is a @xmath363,k_{{\mathbb c}})$]-module . let @xmath373 be the quasi - coherent sheaf on @xmath360 associated with @xmath327 . in particular , @xmath570 . fix @xmath571 and let @xmath572 be the inclusion map . let @xmath573 be the maximal ideal in @xmath37 defining @xmath159 . let @xmath574 and let @xmath575 be the set theoretic fiber . [ l55 ] we have @xmath576 = { { \mathbb c}}[\bcn]/\ci_x{{\mathbb c}}[\bcn ] = { { \mathbb c}}[x ] \otimes_{{{\mathbb c}}[\bco ] } { { \mathbb c}}[\bcn]$ ] . let @xmath577 be the scheme theoretic fiber of @xmath159 . we claim that @xmath577 is reduced and thus equal to @xmath578 . indeed , in characteristic zero , a generic scheme theoretic fiber is reduced . since @xmath159 generates the dense open orbit @xmath92 in @xmath360 , @xmath577 is reduced and the claim follows . taking regular functions of @xmath579 gives the lemma . by , @xmath580\otimes a)^{k'_{{\mathbb c}}}$ ] where @xmath581 .
since @xmath37 is @xmath582-invariant , and give @xmath583/\ci_{x}{{\mathbb c}}[\bcn ] \otimes a)^{k'_{{\mathbb c}}}\otimes \tau)^{k_{{{\mathbb c}}}^t } \\ = & ( { { \mathbb c}}[z ] \otimes a\otimes \tau)^{k'_{{\mathbb c}}\times k_{{{\mathbb c}}}^t}. \end{split}\ ] ] in , we see that @xmath584 is surjective if and only if @xmath183 . from now on we split our calculations into two cases , depending on whether @xmath585 is surjective or not . we assume that @xmath584 is surjective , i.e. @xmath183 . let @xmath586 . fix @xmath587 . let @xmath588 . let @xmath589 and let @xmath590 in @xmath421 . we consider the diagram : @xmath591_t^{\cong } z_y\ar[d]^{{{\mathrm{pr}}}}\ar@{^(->}[r ] & z\ar@{->>}[d]^{{{\mathrm{pr}}}}\\ & \set { y } \ar@{^(->}[r]^{i_y } & y. } \ ] ] the map @xmath592 will be given in ( iii ) below . let @xmath593 and @xmath594 . we now state our key geometric lemma . its proof is given in . [ lem : geo.i ] suppose @xmath584 is a surjection . a. then @xmath595 is a single @xmath596-orbit generated by @xmath597 so @xmath598 . b. there is a group homomorphism @xmath599 such that @xmath600 we denote the right hand side by @xmath601 . c. there is a bijection @xmath602 such that @xmath592 commutes with the actions of @xmath426 and @xmath603 for all @xmath604 and @xmath605 . let @xmath606 denote the structure sheaf of @xmath578 . clearly @xmath576 = ( ( { { \mathrm{pr}}}|_{z } ) _ * \so_z)(y)$ ] . by ( i ) above , @xmath595 is a single @xmath607-orbit . let @xmath608 be the ideal of @xmath597 in @xmath609 $ ] . we recall that @xmath595 is affine . again by the generic reducedness of the scheme theoretic fiber in characteristic @xmath133 , @xmath610 = { { \mathbb c}}[z]/\ci_y{{\mathbb c}}[z]$ ] .
therefore by @xcite , @xmath611 = { { \mathrm{ind}}}_{s_y}^{k_{x}\times k'_{{\mathbb c } } } ( i_y^ * ( { { \mathrm{pr}}}|_{z})_*\so_z ) = { { \mathrm{ind}}}_{s_y}^{k_{x}\times k'_{{\mathbb c } } } ( { { \mathbb c}}[z]/\ci_y { { \mathbb c}}[z ] ) = { { \mathrm{ind}}}_{s_y}^{k_{x}\times k'_{{\mathbb c } } } { { \mathbb c}}[z_y].\ ] ] putting into , we have @xmath612 \otimes a \otimes \tau \right)^{k_{{\mathbb c } } ' \times k_{{\mathbb c}}^t } = \left({{\mathrm{ind}}}_{s_y}^{k_x \times k'_{{\mathbb c } } } { { \mathbb c}}[z_y ] \otimes a \otimes \tau \right)^{k'_{{\mathbb c}}\times k_{{\mathbb c}}^t } \\ & = & \left({{\mathrm{ind}}}_{s_y}^{k_x \times k'_{{\mathbb c}}}({{\mathbb c}}[z_{y } ] \otimes \tau)^{k_{{\mathbb c}}^t } \otimes a \right)^{k'_{{\mathbb c}}}.\end{aligned}\ ] ] by ( iii ) , @xmath592 induces an isomorphism @xmath613 \otimes \tau)^{k^t } = ( { { \mathbb c}}[z_{x ' } ] \otimes \tau)^{k^t}$ ] of @xmath614-modules through @xmath615 . by , @xmath616 \otimes \tau)^{k^t } \cong \chi_{x'}$ ] . moreover , @xmath617 hence @xmath618 as a representation of @xmath197 . the isotropy representation of @xmath192 at @xmath159 is @xmath619 this proves ( iii ) . suppose that @xmath620 . by , @xmath621 so @xmath622 . this implies that @xmath623 and proves ( i ) . we have seen before that @xmath570 . since @xmath624 , @xmath625 and @xmath626 . hence @xmath627 . this proves ( ii ) . part ( iv ) is an immediate consequence of ( ii ) and ( iii ) . this completes the proof of . we assume that @xmath628 is not surjective , i.e. @xmath629 . then @xmath197 has a levi decomposition @xmath630 ( see p. 184 in @xcite ) where @xmath196 is the levi part and @xmath450 is the unipotent part . by the calculation in , we have @xmath631 . we would like to mimic in case i which constructs an orbit @xmath595 . more precisely we will construct in an ( affine ) algebraic set @xmath632 and a surjective algebraic morphism @xmath633 with the following properties : a. 
[ item : ind.a ] let @xmath634 . there is a @xmath635-action on @xmath632 such that @xmath636 is @xmath635-equivariant . b. the set @xmath632 is an @xmath635-orbit . fix @xmath637 . we form the following set theoretic fiber : @xmath638^{\cong}_t z_m \ar@{^(->}[r ] \ar[d ] & z \ar[d]^{\pi } \\ & \set { m } \ar@{^(->}[r ] & \cm } \ ] ] by the same argument as in the proof of , @xmath639 is equal to the scheme theoretic fiber @xmath640 because the latter is reduced . c. let @xmath641 be the stabilizer of @xmath642 in @xmath635 . we set @xmath643 and @xmath644 . then @xmath645 here @xmath646 embeds diagonally into @xmath647 , @xmath648 lies in @xmath426 and @xmath649 is the natural projection . d. [ item : ind.d ] let @xmath650 be the dual pair with maximal compact subgroup @xmath651 as in ( c ) . then @xmath652 is in the stable range ( c.f . ) . we use the subscript @xmath653 for all objects with respect to this pair . for example , @xmath654 denotes the genuine character of @xmath655 . correspondingly we have the closed null cone @xmath656 . e. let @xmath641 act on @xmath657 with trivial @xmath646-action . then there is a @xmath641-equivariant bijection @xmath658 . f. [ item : ind.z2 ] by ( b ) , we have @xmath659 and @xmath660 = { { \mathrm{ind}}}_{q_m}^q { { \mathbb c}}[z_m ] = { { \mathrm{ind}}}_{q_m}^q { { \mathbb c}}[\bcns].\ ] ] the proofs of ( a ) to ( e ) are given in . part follows from @xcite using a similar argument as . putting into , the isotropy representation of @xmath327 at @xmath159 is @xmath661\otimes a \otimes \tau \right)^{k'_{{\mathbb c}}\times k^t_{{\mathbb c}}}\\ = & \left ( ( { { \mathbb c}}[\bcns ] \otimes a|_{k'_s})^{k'_s } \otimes \tau \right)^{k_{{\mathbb c}}^{t - q}}. \end{split}\ ] ] note that @xmath662 , @xmath663 , @xmath664 . by , @xmath665 \otimes a|_{k'_s})^{k'_s } = b_s$ ] . finally , we get @xmath666 as a representation of @xmath667 . this proves ( ii ) . by , @xmath668 implies that @xmath669 .
conversely if @xmath123 is non - zero then by below ( whose proof does not depend on the result of this subsection ) @xmath199 is non - zero . this proves ( iii ) . the proof of ( iv ) is the same as that of ( ii ) and ( iv ) . we have @xmath670 . if @xmath623 , then @xmath622 so @xmath625 and @xmath671 . hence @xmath627 . this proves ( iv ) . this completes the proof of . in this section , we assume the notation of sections [ s57 ] and [ s58 ] : @xmath672 and @xmath673 . since @xmath674 , follows from proposition [ t61](ii ) below . we emphasize that the proof of the proposition is independent of the calculations of @xmath388 in and . [ t61 ] suppose that @xmath212 satisfies and @xmath675 . let @xmath388 be the @xmath61-module calculated in and . a. we have @xmath676 as @xmath677-modules where @xmath678 is the natural open embedding and @xmath402 is the @xmath19-equivariant coherent sheaf with fiber @xmath388 at @xmath159 in the sense of @xcite . b. as @xmath19-modules , @xmath679 in particular , @xmath680 if and only if @xmath681 . by definition @xmath682 . on the other hand , @xmath683 and @xmath684 by @xcite . we will show ( ii ) , i.e. @xmath685 under the restriction map . then ( i ) follows because both @xmath373 and @xmath686 are quasi - coherent sheaves over an affine scheme with the same space of global sections . by and @xmath687\otimes a \otimes \tau)^{k'_{{\mathbb c}}\times k_{{\mathbb c}}^t } = ( { { \mathbb c}}[\cn]\otimes a \otimes \tau)^{k'_{{\mathbb c}}\times k_{{\mathbb c}}^t}.\ ] ] on the other hand , since @xmath38 is @xmath688-invariant , localization commutes with taking @xmath688-invariants . let @xmath689 and we consider @xmath690^{i_{{\mathcal{d}}}}\ar[d]_{\phi\circ { { \mathrm{pr}}}|_{{\mathcal{d } } } } & \cn\ar[d]^{\phi \circ { { \mathrm{pr}}}}\\ \co \ar@{^(->}[r]^{i_{\co } } & \bco . } \ ] ] since @xmath691 is an open embedding , it is flat and we have ( c.f .
corollary 9.4 in @xcite ) @xmath692.\ ] ] this gives @xmath693\otimes a \otimes \tau)^{k'_{{\mathbb c}}\times k_{{\mathbb c}}^t}$ ] . therefore , it suffices to show that @xmath694 \to { { \mathbb c}}[{{\mathcal{d } } } ] = h^0({{\mathcal{d}}},\so_\cn)\ ] ] is an isomorphism . [ l62 ] suppose @xmath695 . then @xmath696 is a @xmath697-orbit and @xmath698 has codimension at least @xmath372 in @xmath699 . the proof of the lemma is given in . we continue with the proof of the theorem . we note that @xmath699 is a @xmath700-orbit and @xmath696 is a @xmath701-orbit . now follows from theorem 4.4 in @xcite . this proves . we will briefly review special unipotent primitive ideals and representations in chapter 12 of @xcite . also see section 2.2 in @xcite . we will apply these to @xmath123 . let @xmath702 . we consider @xmath123 where @xmath703 , @xmath704 , @xmath216 and @xmath705 . the infinitesimal character of @xmath123 corresponds to the weight @xmath706 under the harish - chandra parametrization @xcite . here @xmath707 denotes the half sum of the positive roots of @xmath708 , and we insert or remove zeros from @xmath709 if the string of numbers is too short or too long . the restriction of @xmath710 to @xmath711 decomposes into a finite number of irreducible @xmath711-submodules . let @xmath712 denote one of these irreducible @xmath711-submodules . since @xmath713 is also a @xmath714-orbit , @xmath712 also has associated variety @xmath92 . we claim that the weight @xmath709 also represents the infinitesimal character of @xmath712 as a @xmath711-module . indeed we may have an ambiguity only if @xmath112 is even , in which case the infinitesimal character is either @xmath709 or @xmath715 . here @xmath653 is the involution induced by the outer automorphism of @xmath338 . in this case @xmath477 is even and the weight @xmath709 contains a zero in the string of numbers . hence @xmath709 and @xmath715 represent the same infinitesimal character , which proves our claim .
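the zero - entry observation at the end of the paragraph above has a concrete generic illustration in type d ( our notation , not the paper's ) :

```latex
% Generic type D illustration (our notation): the outer automorphism of
% \mathfrak{so}(2m,\mathbb{C}) acts on a Harish-Chandra parameter by a
% sign change in the last entry,
\iota \cdot (\lambda_1, \dots, \lambda_{m-1}, \lambda_m)
    \;=\; (\lambda_1, \dots, \lambda_{m-1}, -\lambda_m).
% If the string contains a zero we may arrange \lambda_m = 0, and then
% \iota.\lambda = \lambda, so the parameter and its twist define the
% same infinitesimal character.
```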
under the kostant - sekiguchi correspondence , @xmath716 generates a nilpotent @xmath717-orbit @xmath718 in @xmath719 . the orbit @xmath718 has the same young diagram as that of @xmath716 , less the plus and minus signs . let @xmath720 denote the primitive ideal of @xmath712 in @xmath721 . the ideal @xmath720 has a filtration @xmath722 . by @xcite , @xmath723 cuts out the variety @xmath724 in @xmath719 . let @xmath725 and @xmath726 be the roots and the root lattice of @xmath338 . let @xmath727 denote the simple lie algebra with @xmath725 as coroots . in particular @xmath728 if @xmath112 is even and @xmath729 if @xmath112 is odd . we refer to @xcite for the order reversing map @xmath137 ( resp . @xmath730 ) from the set of complex nilpotent orbits in @xmath719 ( resp . @xmath731 ) to the complex nilpotent orbits in @xmath731 ( resp . @xmath719 ) . the orbit @xmath718 is called special if it is in the image of @xmath730 , and for a special orbit @xmath718 , we have @xmath732 . suppose @xmath718 is a special orbit . by the jacobson - morozov theorem , let @xmath733 be the @xmath734-triple such that @xmath735 . we may assume that @xmath736 lies in @xmath737 . in this way @xmath736 defines an infinitesimal character of @xmath721 via the harish - chandra homomorphism . if @xmath738 has young diagram @xmath739 then @xmath740 . suppose @xmath736 gives the same infinitesimal character @xmath709 as that of @xmath712 . let @xmath741 denote the unique maximal primitive ideal of @xmath721 with infinitesimal character @xmath736 @xcite . we will call @xmath741 a special unipotent primitive ideal . by corollary a3 in @xcite , the variety cut out by @xmath742 in @xmath719 is the same as that of @xmath723 , namely @xmath724 . by corollary 4.7 in @xcite , @xmath743 . we say that @xmath712 is a special unipotent representation . [ p86 ] suppose @xmath704 , @xmath215 and @xmath216 . then @xmath718 is a special orbit and @xmath712 is a special unipotent representation .
that the orbit is special can be read off from page 100 of @xcite . indeed @xmath744 where @xmath745 if @xmath477 is odd ( i.e. @xmath727 of type @xmath746 ) and @xmath747 if @xmath477 is even ( i.e. @xmath727 of type @xmath748 ) . furthermore @xmath749 . we have @xmath750 which is the infinitesimal character of @xmath712 . the conclusion that @xmath712 is special unipotent follows from the discussion prior to the proposition . the above argument applies to @xmath336 too by setting @xmath751 . it is easier and we leave the details to the reader . most of our methods and results extend to the two dual pairs @xmath125 in table [ tab : ls ] below . we have omitted them in the main body of this paper in order to keep the exposition simple . in this section , we will briefly describe these two dual pairs . [ table [ tab : ls ] : list of dual pairs ] there is also a notion of theta lifts of @xmath22-orbits on @xmath23 to @xmath19-orbits on @xmath20 , and conversely . first we suppose @xmath125 is in stable range where @xmath11 is the smaller member . this condition is given in the second column of . let @xmath108 be a genuine unitary character of @xmath111 . for the dual pair @xmath30 , @xmath752 always splits and @xmath108 is unique . the local theta lift @xmath107 is nonzero and unitarizable , and it is also the full theta lift ( c.f . proposition [ p32 ] ) . by an almost identical proof to that of , one shows that @xmath753 $ ] where @xmath754 . furthermore we have @xmath755 where @xmath199 is the isotropy representation @xcite . next we consider the dual pair @xmath756 . let @xmath118 be an irreducible genuine representation of @xmath119 such that @xmath757 is nonzero . then @xmath117 is a unitary lowest weight harish - chandra module of @xmath111 . by @xcite , @xmath758 is the zariski closure of an orbit @xmath90 in @xmath424 . the isotropy representation @xmath189 of @xmath190 at a closed point @xmath759 is also computed explicitly in @xcite .
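the notion of theta lifts of orbits mentioned above is commonly defined through the two moment maps ; schematically , in our own notation :

```latex
% Schematic definition (our notation): with moment maps
%     \phi : W \to \mathfrak{p}^{*}, \qquad \phi' : W \to \mathfrak{p}'^{*},
% the theta lift of a nilpotent K'_C-orbit O' on \mathfrak{p}'^{*} is
% the image of the fiber over its closure:
\theta(\mathcal{O}') \;=\; \phi\bigl(\phi'^{-1}(\overline{\mathcal{O}'})\bigr),
% which, in the stable range, is the closure of a single nilpotent
% K_C-orbit on \mathfrak{p}^{*}.
```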
let @xmath108 be a genuine unitary character of @xmath111 for the dual pair @xmath175 in stable range . the local theta lift @xmath760 to @xmath116 is also the full theta lift ( c.f . proposition [ p24 ] ) . again we have to divide into cases i and ii as in theorems [ thm : ll ] and [ tc ] . the conditions for cases i and ii are given in the third and fourth columns of respectively . in case i , we have results similar to those of . more precisely , @xmath761 is nonzero . its associated variety is @xmath762 and it is the zariski closure of a single @xmath22-orbit @xmath92 . we fix a closed point @xmath571 . let @xmath61 and @xmath442 be the stabilizers of @xmath159 and @xmath443 in @xmath19 and @xmath22 respectively . then there is a group homomorphism @xmath191 such that the isotropy representation of @xmath763 at @xmath159 is @xmath764 therefore @xmath765 satisfies , i.e. @xmath766 = ( \dim\tchi_{x ' } ) [ \overline{\theta(\co ' ) } ] = \theta({\mathrm{ac}}(l(\mu')^*)).\ ] ] the last column lists the conditions in case i such that ( c.f . ) @xmath767 for case ii , the situation is more complicated . equation continues to hold but fails in general . [ [ sl41 ] ] _ proof of . _ the map @xmath768 in factors through the @xmath122-covariant subspace @xmath769 of type @xmath770 of @xmath65 . let @xmath771 , @xmath772 and @xmath773 denote the @xmath770-isotypic components of @xmath65 , @xmath774 and @xmath775 respectively . since @xmath122 has reductive action on @xmath65 and preserves degrees , @xmath771 maps bijectively onto the covariant @xmath769 . moreover @xmath776 and @xmath777 . hence @xmath778 let @xmath779 denote the @xmath770-isotypic component in the harmonic subspace @xmath780 of @xmath240 $ ] for @xmath122 . by @xcite , we have @xmath781 and by @xcite we have @xmath782 since @xmath230 acts by degree two polynomials , @xmath783 if @xmath784 and @xmath785 . it follows from that @xmath786 . we will prove @xmath787 by induction . first we have @xmath788 .
suppose @xmath789 where @xmath790 . since @xmath791 , it suffices to show that @xmath792 . by , hence @xmath794 this shows that @xmath795 and completes the proof of the lemma . [ [ sec : proof.res ] ] _ proof of . _ by @xmath796 . this defines another filtration @xmath797 on @xmath123 . let @xmath798 be the degree of the lowest degree @xmath247-type @xmath519 and @xmath799 be the degree of the lowest degree @xmath524-type @xmath523 . let @xmath249 be the smallest integer such that @xmath800 . in order to prove , it suffices to prove that @xmath801 indeed , by lemma [ l33 ] and , we have a surjection @xmath802 & e_j.}\ ] ] let @xmath803 be the degree of @xmath121 . then @xmath804 is the natural filtration on @xmath190 . taking the @xmath118-coinvariant of @xmath805 gives a surjection @xmath806 & ( e_j \otimes \mu)^{k^t } = e'_j . } \ ] ] hence the image @xmath807 of @xmath808 is also the filtration for @xmath123 defined before lemma [ l33 ] up to a degree shifting . now follows from lemma [ l33 ] . in this section , we will denote the space of @xmath809 by @xmath163 complex matrices ( resp . symmetric matrices ) by @xmath810 ( resp . @xmath811 ) . let @xmath812 denote the matrix in @xmath810 such that @xmath813 . we identify @xmath814 with the set of complex matrices @xmath815 we will denote an element in @xmath816 by @xmath817 and an element in @xmath818 by @xmath819 . the projection map @xmath820 is given by @xmath821 and @xmath822 is given by @xmath823 . in particular @xmath585 is surjective if and only if @xmath183 . as always we will denote @xmath824 by @xmath162 . an element @xmath825 acts on @xmath225 by @xmath826 . let @xmath827 be the @xmath809 by @xmath163 matrix with @xmath163-linearly independent column vectors whose column space is isotropic . let @xmath828 be the stabilizer of the isotropic subspace spanned by the columns of @xmath829 . it is a maximal parabolic subgroup of @xmath162 . 
then the column space of the complex conjugate @xmath830 is an isotropic subspace dual to the column space of @xmath829 . this gives a levi decomposition @xmath831 with @xmath832 its unipotent radical . let @xmath833 be the group homomorphism defined via the quotient by @xmath834 . let @xmath853 . we define an action of @xmath635 on @xmath854 by @xmath855 where @xmath856 and @xmath857 . the subgroup @xmath858 acts trivially . we define @xmath859 by @xmath860 here @xmath861 . note that @xmath636 commutes with the action of @xmath635 . let @xmath862 let @xmath863 in @xmath854 . using , we deduce that @xmath632 is an @xmath635-orbit generated by @xmath642 . one can also check that @xmath864 where @xmath592 is a bijection which maps the above element in the parentheses to @xmath865 and @xmath866 is the null cone for the pair @xmath867 clearly @xmath879 so @xmath696 is non - empty and open in @xmath699 . if @xmath699 is irreducible , then @xmath696 is open dense in @xmath699 . if @xmath699 is not irreducible , then @xmath696 is still open dense because @xmath880 permutes the irreducible components . let @xmath881 . by the action of @xmath162 , we may assume that @xmath882 . now ( ii ) would follow from the claim that @xmath883 is in the @xmath884-orbit of @xmath885 . indeed it is a case - by - case elementary computation to show that @xmath886 our claim is an application of witt 's theorem to . we will leave the details to the reader . note that the column spaces of @xmath893 are the same . hence there is @xmath894 such that @xmath895 , i.e. @xmath896 . now @xmath897 . viewing @xmath891 as an injection , we get @xmath898 . therefore @xmath899 by , which proves the claim . we claim that for every @xmath906 , there is a unique @xmath907 , denoted by @xmath908 , such that @xmath909 . it is easy to see that @xmath910 is a group homomorphism @xmath911 . the image of @xmath615 is contained in @xmath912 since @xmath908 stabilizes @xmath913 .
next we prove the uniqueness of @xmath922 in our claim . indeed if @xmath923 where @xmath924 , then @xmath925 as matrices so @xmath926 . since @xmath838 has rank @xmath163 , @xmath927 . this proves our claim and ( ii ) . ( iii ) an element of @xmath928 is of the form @xmath929 . since @xmath930 is in the null cone @xmath931 , @xmath932 , i.e. @xmath933 . on the other hand , for any @xmath933 , @xmath934 since @xmath838 already has full rank . we define @xmath602 by @xmath935 . this is a bijection which satisfies ( iii ) . _ proof of . _ the proof of the lemma involves some elementary but tedious case - by - case considerations . the cases @xmath936 and @xmath937 are symmetric , so we only sketch the proof for @xmath938 . by ( ii ) , @xmath696 is a single @xmath939-orbit . let @xmath940 . then @xmath941 is the @xmath942-orbit of @xmath943 and @xmath944 . let @xmath945 denote the codimension of @xmath946 in @xmath699 . it is also the codimension of @xmath947 in @xmath948 . we need to show that @xmath949 . if @xmath950 , there exists a row of zeros in @xmath951 , and we set @xmath952 to be the @xmath953 by @xmath163 matrix obtained from @xmath951 by interchanging a zero row with the last row . if @xmath954 , we interchange the @xmath163-th row with the @xmath955-th row . in both cases , let @xmath956 denote the @xmath957-orbit generated by @xmath952 . the codimension @xmath959 where @xmath960 and @xmath961 are the stabilizers of @xmath951 and @xmath952 respectively in @xmath957 . one may compute that @xmath962 similarly @xmath963 has the same formula as @xmath964 above except that we have to reduce @xmath477 by @xmath965 . with these , we compute that @xmath966 as required . a. daszkiewicz , w. kraśkiewicz , t. przebinda , _ dual pairs and kostant - sekiguchi correspondence . ii . classification of nilpotent elements _ , central european journal of mathematics 3(3 ) ( 2005 ) , 430 - 474 . t. enright , r. howe and n.
wallach , _ a classification of unitary highest weight modules _ , representation theory of reductive groups ( park city , utah , 1982 ) , 97 - 143 , progr . math . , 40 , birkhäuser boston , boston , ma , 1983 . r. howe , _ perspectives on invariant theory : schur duality , multiplicity - free actions and beyond _ , the schur lectures ( 1992 ) ( tel aviv ) , 1 - 182 , israel math . conf . proc . , 8 , bar - ilan univ . , ramat gan , 1995 . t. kobayashi , _ discrete decomposability of the restriction of @xmath967 with respect to reductive subgroups . iii . restriction of harish - chandra modules and associated varieties _ , invent . math . 131 , no . 2 ( 1998 ) , 229 - 256 . k. nishiyama , h. ochiai and k. taniguchi , _ bernstein degree and associated cycles of harish - chandra modules hermitian symmetric case _ , in : nilpotent orbits , associated cycles and whittaker models for highest weight representations , astérisque no . 273 ( 2001 ) , 13 - 80 .
in this paper we first construct natural filtrations on the full theta lifts for any real reductive dual pairs . we will use these filtrations to calculate the associated cycles and therefore the associated varieties of harish - chandra modules of the indefinite orthogonal groups which are theta lifts of unitary lowest weight modules of the metaplectic double covers of the real symplectic groups . we will show that some of these representations are special unipotent and satisfy a @xmath0-type formula in a conjecture of vogan .
a total of 31 patients were diagnosed with lumbar and lumbosacral tuberculosis from august 2012 to august 2013 based upon radiological findings ( mri ) and histopathology reports ( samples obtained by computed tomography [ ct ] guided biopsy ) . of these , 13 patients developed progressive neurological deterioration or increasing back pain despite conservative measures and underwent posterior decompression and pedicle screw fixation with posterolateral fusion . there were 8 males and 5 females , and their mean age at the time of surgery was 35.2 years ( range , 22 to 55 years ) . the mean duration of symptoms was 4 months ( range , 2 to 7 months ) . the indication for surgery was intolerable back pain and/or progressive neurological deficit despite ongoing conservative management . a complete blood count , erythrocyte sedimentation rate ( esr ) , c - reactive protein ( crp ) , mantoux test , plain radiography of the lumbosacral spine ( anteroposterior and lateral views ) , and mri were carried out in all patients . under general anaesthesia , the midline posterior approach was used with the patient placed in prone position in all cases . laminectomy or laminotomy was done at the affected levels and pedicle screw fixation was done cephalad and caudad including healthy pedicles of the affected vertebrae ( table 1 ) . posterolateral fusion was done in all cases . infected material was sent for histopathological examination and culture sensitivity . log roll , side turning , and pelvic lift exercises were started on postoperative day one and mobilization with the support of a lumbosacral belt was started as early as possible . mean hospital stay was 9 days ( range , 7 to 14 days ) and suture removal was done on postoperative day 13 in all except 1 case with superficial infection ( postoperative day 17 ) .
functional outcome ( visual analogue scale [ vas ] for back pain),7 ) neurological recovery ( frankel grading),8 ) and segmental kyphosis ( on plain radiographs ) were assessed preoperatively and at 3 , 6 , and 12 months following surgery . in all patients , segmental kyphotic angle was measured as the angle between the caudal and cephalad end plates nearest to the lesion . preoperatively , the esr was elevated in 8 cases ( 61.5% ) and the crp level in 10 cases ( 76.90% ) . normal values of esr were attained in all 8 patients at final follow - up .
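the kyphosis bookkeeping used in reports of this kind can be sketched in a few lines . the definitions below ( correction = preoperative minus postoperative angle ; loss of correction = final minus postoperative angle ) are the usual conventions and are assumed here rather than stated explicitly in the text , and the three - patient data set is purely hypothetical .

```python
def kyphosis_summary(angles):
    """angles: list of (preop, postop, final) segmental kyphotic angles in degrees.
    Correction = preop - postop; loss of correction = final - postop
    (usual conventions; assumed, not stated explicitly in the text)."""
    corrections = [pre - post for pre, post, _ in angles]
    losses = [final - post for _, post, final in angles]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(corrections), mean(losses)

# hypothetical three-patient data, for illustration only
corr, loss = kyphosis_summary([(22, 12, 15), (18, 8, 10), (25, 16, 19)])
print(round(corr, 2), round(loss, 2))  # 9.67 2.67
```

the study 's reported means ( 9.85 degrees of correction , 3.15 degrees of loss ) would come out of exactly this kind of per - patient averaging .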
the crp level had fallen to normal values in all the 10 cases by the end of 3 months after surgery . the mantoux test was positive in 6 cases ( 46.15% ) and histopathology reports demonstrated tubercular osteomyelitis in all 13 cases with presence of typical caseating granulomas . however , culture was positive in only 5 cases ( 38.46% ) . the mean vas score for back pain improved from 7.89 ( range , 9 to 7 ) preoperatively to 2.2 ( range , 3 to 1 ) at final follow - up ( table 2 ) . frankel grading was grade b in 3 , grade c in 7 , and grade d in 3 patients preoperatively , which improved to grade d in 7 and grade e in 6 patients at last follow - up ( table 3 ) . radiological healing was evident in the form of reappearance of trabeculae formation , resolution of pus , fatty marrow replacement , and bony fusion on sequential follow - ups in all cases ( figs . 1 and 2 ) . one patient had a pocket of pus in the psoas muscle , which healed at the end of 17 months postoperatively . the mean correction of segmental kyphosis was 9.85 ( range , 9 to 14 ) postoperatively . the mean loss of correction at final follow - up was 3.15 ( range , 1 to 8 ) ( table 4 ) . a complication in the form of superficial wound infection was present in 1 case , which was resolved by regular dressing of the wound . based on current evidence , spinal tuberculosis can be considered a medical condition that requires operative treatment only in the presence of neurological deficits caused by spinal cord compression , disabling back pain , and spinal deformity in spite of ongoing antitubercular therapy.2569 ) the surgical approach in spinal tuberculosis has evolved from anterior to posterior .
the anterior approach , popularised by hodgson et al.10 ) in 1960 , was traditionally advocated in view of the predilection of tuberculous pathology for the vertebral bodies and disc spaces . the anterior approach provides direct access to the infected focus and is convenient for debriding infection and reconstructing the defect.1112 ) in the lumbar region , attainment of bony stability through anterior instrumentation may be insubstantial due to the concomitant osteoporosis associated with tuberculous infection , which renders the vertebrae structurally weak and thereby prevents adequate fixation.1314 ) also , anterior fixation is not feasible in the lumbar and lumbosacral spine due to the presence of the common iliac vessels anterolaterally.15 ) a combined anterior plus posterior approach helps to overcome the stability - related drawbacks of the anterior approach alone.13141516 ) however , it involves 2 surgeries ( performed as a single event or as a staged procedure ) , and when performed as a single event , it is associated with increased operative time and blood loss along with exposure of vital structures such as the peritoneum in already immunocompromised tuberculosis patients , leaving them susceptible to further infection and thus contributing to additional morbidity.6 ) campbell et al.17 ) have reported higher rates of complications with isolated anterior fixation and combined anterior and posterior spinal fusion in comparison to isolated posterior fusion .
recently , the posterior approach has gained popularity because it is less invasive , allows circumferential cord decompression , can be extended proximally and distally from the involved segment , and provides a stronger three - column fixation through uninvolved posterior elements via pedicle screws.161819 ) in 2005 , bezer et al.20 ) reported transpedicular drainage and posterior instrumentation as a less demanding single - stage procedure in patients with lumbosacral tuberculosis . functional recovery evaluated in terms of vas in our study was comparable to that of sahoo et al.21 ) with a mean value of 1.9 at the end of 1-year follow - up . significant improvement in neurological grading was evident , with an improvement of two grades in more than 50% of the cases . evaluation of radiological healing in cases of spinal tuberculosis has been described by jain et al.22 ) as the remineralization and reappearance of bony trabeculae , sharpening of the articular and cortical margins , sclerosis of the vertebral body and end plates , fusion of vertebral bodies on plain x - rays , resolution of the enhanced vertebral body on mri , resolution of paravertebral collections , and fatty replacement of marrow seen as enhanced intensity on sequential t1 and t2 images . in the present study , the loss of correction at final follow - up was 3.15 , which was not statistically significant ( p > 0.05 ) . this was consistent with the findings of zhang et al.23 ) recently , percutaneous posterior fixation with or without anterior debridement has been published.2425 ) posterior fixation alone was done for patients with back pain only , whereas patients with neurological involvement underwent anterior debridement in addition , thus incurring the complications associated with the anterior approach . in our study , decompression was required in all patients , thus ruling out the option of percutaneous fixation .
in conclusion , single - stage posterior decompression and instrumented fusion is an effective and safe procedure for surgical treatment of lumbar and lumbosacral tuberculosis in adults . further studies with a large number of patients and a longer follow - up will be necessary .
background : for surgical treatment of lumbar and lumbosacral tuberculosis , the anterior approach has been the most popular approach because it allows direct access to the infected tissue , thereby providing good decompression . however , anterior fixation is not strong , and graft failure and loss of correction are frequent complications . the posterior approach allows circumferential decompression of neural elements along with three - column fixation attained via pedicle screws by the same approach . the purpose of this study was to evaluate the outcome ( functional , neurological , and radiological ) in patients with lumbar and lumbosacral tuberculosis operated through the posterior approach . methods : twenty - eight patients were diagnosed with tuberculosis of the lumbar and lumbosacral region from august 2012 to august 2013 . of these , 13 patients had progressive neurological deterioration or increasing back pain despite conservative measures and underwent posterior decompression and pedicle screw fixation with posterolateral fusion . antitubercular therapy was given till signs of radiological healing were evident ( 9 to 16 months ) . functional outcome ( visual analogue scale [ vas ] score for back pain ) , neurological recovery ( frankel grading ) , and radiological improvement were evaluated preoperatively , immediately postoperatively and 3 months , 6 months , and 1 year postoperatively . results : the mean vas score for back pain improved from 7.89 ( range , 9 to 7 ) preoperatively to 2.2 ( range , 3 to 1 ) at 1-year follow - up . frankel grading was grade b in 3 , grade c in 7 , and grade d in 3 patients preoperatively , which improved to grade d in 7 and grade e in 6 patients at the last follow - up . radiological healing was evident in the form of reappearance of trabeculae formation , resolution of pus , fatty marrow replacement , and bony fusion in all patients . the mean correction of segmental kyphosis was 9.85 postoperatively .
the mean loss of correction at final follow - up was 3.15 . conclusions : posterior decompression with instrumented fusion is a safe and effective approach for management of patients with lumbar and lumbosacral tuberculosis .
yoga originated around 3000 b.c . under mystical and philosophical concepts in the hindu tradition . its principles rest in metaphysics , which is hard to understand in western countries and for those who do not practice it . some authors , who have gone deep into its study and practice , have tried to explain it . etymologically , yoga means to add , join , unite or attach ( sanskrit , ioga ) , referring to the union of the body ( anga ) , mind ( chitta ) , emotions and the soul ( atma or atman ) . a complete explanation of this ancient discipline was given by eliade in his treatise yoga , immortality and freedom , where it is defined as a collection of specific techniques to seek a truth hidden in the silence and in the inner calm of people , a fundamental truth which enables one to free the soul from false reality , a state of liberation of the waves of thought or ecstasy ( samadhi ; sanskrit , sam or samialk [ complete ] and dhi [ mentally absorbed ] ) . the one practiced in western societies is an integral yoga described by patanjali ( ii century b.c . ) . he condenses in his yoga sutras , a collection of aphorisms in a buddhist / hindu text or manual , the traditions and practices of ancient and contemporary practitioners ( yogis ) . this type of yoga was brought to the american continent by swami vivekananda at the end of the 19th century ( 1894 - 1896 ) and was scientifically and philosophically enriched by eliade . however , through the years yoga has undergone many transformations and adaptations , thereby changing its original principles and fundamentals . as opposed to the traditional practice , a physical focus on yoga became very popular in the west beginning in the second half of the 20th century and is often referred to simply as hatha yoga ( hy ) . hatha yoga refers to a set of physical ( asanas ) and mental exercises , designed to align the body and mind in such a way that the vital energy ( prana ) can flow freely .
it consists of respiratory exercises ( pranayamas or shatkarma ) , physical stretching postures , isometric force , balance , relaxation ( yoganidra ) and concentration ( dharana ) , whose purpose is to ensure that the anga is fit for meditation ( dhyana ) . these elements are conducive to a unique level of consciousness and self - realization , leading to liberation ( kaivalya ) of the self ( atman ) . hatha yoga reduces stress , improves overall physical fitness and reduces some risk factors for cardiovascular diseases . other health effects include prevention of cardiac arrhythmias , hypertension , insomnia , cardiopulmonary disorders , depression and anxiety , epilepsy , cancer , menopause symptoms and chronic back pain . that is why it is adopted as part of a healthy lifestyle or as a therapeutic resource in alternative medicine . to give just one example , ross et al . postulate that the frequency of yoga practice at home favorably predicted ( p < 0.001 ) : mindfulness , subjective well - being , healthy body mass index ( bmi ) , fruit and vegetable consumption , vegetarian status and vigor . moreover , specific components of yoga practice ( e.g. , physical poses , breath work , meditation , and study of yoga philosophy ) improve health behaviors or lifestyle - related health conditions . however , its benefits in other physical and mental disturbances remain inconclusive , and harmful effects have even been reported when it is practiced incorrectly or by unskilled or disabled people . in any case , hy should be considered as a preventive strategy for improving several metabolic conditions , although its utility in complementary medicine , as compared with conventional medical therapies , is under - recognized by the health care community . lastly , yoga is also a lifestyle , so the physiological events are complemented by other environmental factors such as changes in eating patterns .
here we present a conceptual review of this subject , with particular emphasis on the changes in eating behaviors ( ebs ) and bioenergy ( be ) management in yogis and on the practice of hy as a means to improve lifestyle and eating patterns in nonyoga practitioners . the issues to be addressed in the following sections of this critical review , derived from a systematic search for information in 5 databases ( medline [ pubmed ] , lilacs [ scielo ] , latindex , science direct , google scholar ) , are recognized in the field of yoga and its health impacts . the following medical subject headings ( mesh , tree number ) were used in combination with yoga and hy with the purpose of gathering and evaluating the information judiciously : energy metabolism ( em ) ( mesh : g03.495.335 ) , energy expenditure ( ee ) ( mesh : g03.495.335 ) , food ( mesh : j02.500 ) , diet ( mesh : g07.610.240 ) , eating disorders ( ed ) ( mesh : f03.375 ) , ebs ( mesh : f01.145.113.547 , f01.145.407 ) , eating ( mesh : g07.610.593.260 , g07.700.620.260 , g10.261.326.240 ) , anorexia nervosa ( mesh : f03.375.100 ) , bulimia nervosa ( mesh : f03.375.250 ) and binge eating . we anticipated the specificity and unexplored nature of certain topics in the information gathered from these databases : within the philosophy of yoga , the body 's energy is studied in a subtle form that is difficult to measure , which plays a part in the control of total energy intake ( tei ) and total energy expenditure ( tee ) . so a more holistic view of energy balance has to be addressed , here referred to as be or kundalini energy . with the purpose of achieving the desired freedom ( kaivalya ) of the inner self ( atman ) , the yogi tries to control the body 's energy centers ( kundalini - chakras ) and senses ( jnanendriya ) . the kundalini energy comes in three states : the common dormant , the aroused and the awakened states .
when dormant , one 's spiritual understanding is restricted , and everything is perceived and interpreted according to a mundane and selfish perspective . when aroused , it gives a sudden temporary state of spiritual insight and spiritual energy , but it is not stable . only the awakened kundalini energy gives stable transformations of consciousness and progressive realization . the yogi is not interested in developing physical strength or athletic abilities , at least not in the way they are perceived in the west . the yogi is only interested in the control of the body for the development of atman . to achieve this bioenergetic level , the yogi integrates abstinence ( yamas ) , purity , moderation , and modesty ( niyamas ) into his / her daily life , and even some dietary and physical activity aspects rely on these principles . therefore , it is somewhat meaningless for the yogi to seek athletic ability using kundalini energy . nevertheless , some studies using subjective methods to study the effect of hy on be demonstrate that systematic practice improves the yogi 's vitality and perception of his / her own physical condition , social functioning and quality of life . also , because of the nature of the physical exercises ( asanas ) performed in hy , it is common to find exceptional physical abilities in trained yogis , especially in muscular flexibility , strength and stress control . given the mystical - philosophical roots of hy , the contemporary yogi continues to strive for something more than merely physical and mental health . however , due to the benefits to overall health , it is important to continue to study in detail the subject of the be of hy as compared to other exercise protocols . in the following paragraphs , only the measurable energetic aspects are evaluated from the tee point of view and in terms of changes in ebs , which in turn modify tei .
on the metaphysical aspects of hy , psychology and anthropology can provide better arguments and theories , an aspect that escapes the purpose of this review . scientific studies on em are focused on measuring tei or tee . the latter is generally measured at rest ( resting energy expenditure [ ree ] ) or during sleep ( basal energy expenditure ) or as a result of different pathological , pharmacological , physiological or nutritional modifications . at the cellular and molecular levels , many ionic , enzymatic , biosynthetic and genetic mechanisms are involved with either tei or tee . consequently , several metabolic indicators , forms of measurement and equations have been generated to estimate and study the body 's energy balance and body weight control . recently goshvarpour et al . reported chaotic heart rate signals as a result of kundalini meditation , which are quite different from those observed in chinese chi meditation . from a physiological perspective , meditation is a physiological state of demonstrated reduced metabolic activity , different from sleep , that elicits physical and mental relaxation . therefore , the ee involved in kundalini meditation is ree . also , while performing hy ( asanas ) the physical intensity , measured as consumption of oxygen ( vo2 ) or metabolic equivalents ( mets ) , is low ; in fact , it is less than that expended in other physical activities such as walking , jogging , running , cycling and swimming . one study of young adults found that the mets while performing hy ( asanas + pranayamas + dhyana ) are 53% lower than for jogging at 3.5 miles / h ( 2.2 vs. 3.3 mets ) ; hagins et al . found that hy is similar to walking at 2 miles / h ( ~2.5 mets ) , and that asanas performed in sitting or lying positions expend less energy ( 1.5 mets ) than those performed in standing positions ( 2.3 mets ) .
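the met figures just cited can be turned into energy - expenditure rates with the standard conversion ( 1 met = 3.5 ml o2/kg/min , with roughly 5 kcal released per litre of o2 ) . the sketch below is illustrative only ; the 70 kg body mass is an assumed value , not one taken from the cited studies .

```python
def kcal_per_min(mets: float, mass_kg: float) -> float:
    """Standard MET-to-energy estimate: 1 MET = 3.5 ml O2/kg/min and
    ~5 kcal per litre of O2, so kcal/min = METs * 3.5 * mass_kg / 200."""
    return mets * 3.5 * mass_kg / 200.0

mass = 70.0  # assumed body mass (kg), not from the cited studies
for label, mets in [("hatha yoga (asanas+pranayamas+dhyana)", 2.2),
                    ("jogging at 3.5 miles/h", 3.3),
                    ("sitting/lying asanas", 1.5),
                    ("standing asanas", 2.3)]:
    print(f"{label}: {kcal_per_min(mets, mass):.2f} kcal/min")
```

body mass cancels when comparing two activities , so the ratio of the resulting rates simply mirrors the ratio of the mets ( 2.2 vs. 3.3 here ) .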
on the other hand , wallace and benson found relevant reductions in vo2 during meditation and relaxation ( yoganidra ) as compared to resting conditions ( ~2.6 vs. 4.0 ml of o2/kg of body mass / min ) . all of the above indicates that common protocols of hy are of very low intensity , with little possibility of cardiovascular benefits . however , there is a possibility of improving physical performance , hemodynamic function and cardio - respiratory reserve with hy in spite of the low exercise stimulus , as a consequence of concerted physical and mental events such as local muscular adaptation during some physically intense asanas , breathing exercises ( pranayama ) and psycho - physiological control ( concentration ) . it is noteworthy that asanas performed at different intensities may increase tee up to 3.0 kcal / min , while that expended on breathing exercises or during meditation is 2.0 and 1.4 kcal / min , respectively . in view of these arguments , from a cardiopulmonary conditioning standpoint , it is necessary to include complementary aerobic exercises in the hy routine or to perform it with greater intensity . sun salutation ( surya namaskar ) is one of the oldest yoga exercises known and one of the most popular and well - acclaimed yoga practices . it has been practiced for centuries and consists of 10 - 12 different postures which are preferably performed at dawn . each posture counteracts the preceding one , producing a balance between flexion and extension with synchronized breathing and aerobic activity . the posture cycle can be repeated several times and at different velocities in the same workout , thereby placing more emphasis on increasing tee and cardiovascular conditioning .
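the per - component rates cited above ( up to 3.0 kcal / min for intense asanas , 2.0 for breathing exercises , 1.4 for meditation ) can be combined to estimate the expenditure of a mixed hy session . the 60 - minute split below is an assumed example , not a protocol from any of the cited studies .

```python
# rates cited in the text (kcal/min): intense asanas up to 3.0,
# breathing exercises 2.0, meditation 1.4
RATES = {"asanas": 3.0, "breathing": 2.0, "meditation": 1.4}

def session_tee(minutes_by_component: dict) -> float:
    """Session energy expenditure: sum over components of
    duration (min) times that component's kcal/min rate."""
    return sum(RATES[c] * m for c, m in minutes_by_component.items())

# assumed 60-min session split, for illustration only
example = {"asanas": 40, "breathing": 10, "meditation": 10}
print(session_tee(example))  # 40*3.0 + 10*2.0 + 10*1.4 = 154.0 kcal
```

even with the intense - asana rate dominating , a full hour comes to only ~150 kcal , which is consistent with the text 's point that common hy protocols are of very low intensity .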
however , surya namaskar requires , for its proper performance , an adequate amount of flexibility and muscular strength , which is why studies of this practice have only been done on people who are young or physically fit . further , the execution must be rhythmic in nature , with each posture and its transition being executed in smooth cadence , and the postures must be performed with minimal jerks or ungainly movements . mody reported a vo2 of 26 ml / kg / min during each round , resulting in a tee of 234 kcal during a 30 min session for a 60 kg individual . that ee is enough to maintain body weight or to improve aerobic conditioning . in view of this , and in accordance with its intensity and tee , surya namaskar is classified as a moderate - to - high intensity exercise that can be used , when performed at high rhythms , as a form of cardiopulmonary conditioning for people who are young or physically fit . furthermore , it could be included in contemporary hy sessions . in order to demonstrate that surya namaskar is a safe exercise , omkar et al . studied the force and moment effects on specific joints ( wrist , elbow , shoulder , hip , knee and ankle ) during practice of surya namaskar . using a mathematical model , they found that none of the joints were overstressed during surya namaskar practice , and concluded that the joints involved are subjected to submaximal loadings as compared to higher impact exercises for which the ee is comparable . this is of particular importance for older people and for those who have functional limitations in performing aerobic training . other alternatives for increasing tee and cardiovascular conditioning while performing hy are increasing the intensity and duration of the sessions or adding complementary aerobic and muscular resistance exercises . ray et al . also demonstrated improved aerobic capacity and decreased perceived exertion after maximal exercise with hy . ramos - jiménez et al .
found that 11 weeks of an intensive HY program, under a more intense protocol than usual and performed by trained practitioners of yoga, produced an increase in VO2 max (~3 ml/kg/min) and decreases in body fat (~1.5 kg), systolic blood pressure (~5.5 mmHg), and diastolic blood pressure (~3 mmHg), as well as a weekly TEE of ~1000 kcal. Thus, intense HY would fulfill the minimum guidelines of the American College of Sports Medicine for maintaining body weight. In conclusion, given the growing popularity of HY, it can be considered an alternative for increasing the level of physical activity. However, it is recommended to increase the intensity and duration and to include alternative exercises like surya namaskar to ensure maximal TEE and cardiovascular fitness. For instance, the practice of asanas could be an optimal method for preserving physical function in older people if exercise series are adapted to muscle and joint performance, as demonstrated in the Yoga Empowers Seniors Study. Lastly, surya namaskar could be a better alternative for cardiovascular health, but it should be practiced with caution, especially by people with low fitness levels. The dietary pattern of a person is one of the most important predictors of health risk. There is substantial evidence that a diet rich in fruits, vegetables, whole-grain cereals, lean meat, and fish is inversely associated with the risk of chronic illnesses like cardiovascular disease, cancer, or diabetes. However, food selection is a complex behavioral process, since individuals and groups make dietary choices based on food familiarity, availability, cost, cultural norms, taste preferences, health, and convenience, among other factors. The modern food environment, with its wide variety of options, can make it difficult to identify a consistent food pattern among people.
Healthy eating can also be considered a practice for seeking and attaining a harmonic body/mind balance. According to yoga philosophy, there are intimate connections between diet and mind, and foods have a subtle, as-yet-unknown essence that is difficult to prove through modern scientific methods. According to yoga, there are three types of foods: sattvic, rajasic, and tamasic. The sattvic diet (pure and balanced) is believed to increase energy and to produce happiness, calmness, and mental clarity. It could enhance longevity, health, and spirituality. According to the Maha Narayana Upanishad (~5000 B.C.), it promotes a life expectancy of 100-150 years and is recommended for saints. All foods included in this diet are fresh, juicy, nutritious, and tasty, thus including the consumption of fresh fruits and vegetables, sprouted grains, roots, tubers, nuts, cow milk, curd, and honey. The sattvic dietary pattern appears to be similar to a modern but prudent dietary pattern. The rajasic diet (overstimulating) is believed to produce jealousy, anger, unfaithfulness, fantasies, and selfishness. It is recommended for leaders and fighters since it may cause excitement and confidence and increase intelligence. The foods in this diet are bitter, tart, salty, spicy, hot, and dry; they also include white sugar, radishes, and fried foods. The tamasic diet (which weakens and makes one sleepy) is believed to increase pessimism, weakness, laziness, and doubt. Yoga practitioners mention that this dietary pattern makes one dull, enhances anger and criminal tendencies, and impedes spiritual progress.
The foods in this diet include meats from big tamed animals, onions, mushrooms, stale, undercooked, and highly fried foods, high-fat fried foods, salt, sugar, spices, chili peppers, butter, and liquor; medicines and stimulants are also included. Agte and Chiplonkar compiled a database of nutrient contents of 110 food items from two nonconsecutive 24-h dietary recalls of 109 apparently healthy adults. They classified the foods according to their gunas; the sattvic foods had the highest micronutrient density, followed by rajasic and tamasic. Although fiber content was quite similar (~14 mg/kcal), the fat content was 18%, 42%, and 72%, respectively. Dietary intakes of sattvic, rajasic, and tamasic foods were ~802, ~61, and ~213 g/d. Sattvic food intake had the highest correlation with food micronutrients (~r = 0.5, p < 0.01), rajasic only with thiamin intake (r = 0.47, p < 0.01), and tamasic with zinc (r = 0.23, p < 0.01), iron (r = 0.30, p < 0.01), and the presence of anxiety (r = 0.37, p < 0.01). From this evidence, the authors argued that sattvic foods have the better health benefits. The authors included in their study a diet plan that would allow a reduction of tamasic and rajasic foods and an increase in sattvic ones.
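Taking the reported intakes and fat contents at face value, a back-of-envelope weighted average suggests how sattvic dominance keeps the overall diet moderate in fat despite the fatty tamasic class. This is a sketch only: the source does not state whether the fat percentages are by mass or by energy, so they are treated here as applying per gram of intake.

```python
# Sketch: weighted-average fat content across the three guna food classes,
# combining the reported intakes (g/d) with the reported fat percentages.
# Assumption: the percentages are treated as applying per gram of intake.

intake_g_per_day = {"sattvic": 802, "rajasic": 61, "tamasic": 213}
fat_percent = {"sattvic": 18, "rajasic": 42, "tamasic": 72}

total_intake = sum(intake_g_per_day.values())
weighted_fat = sum(
    intake_g_per_day[guna] * fat_percent[guna] for guna in intake_g_per_day
) / total_intake

# The sattvic class accounts for ~75% of intake, so the overall share
# stays near the sattvic figure rather than the tamasic one.
print(round(weighted_fat, 1))
```

Under these assumptions the overall figure lands close to 30%, much nearer the 18% sattvic value than the 72% tamasic one.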
It is noteworthy that sattvic foods included a significant amount of functional foods such as soy milk (flavonoids), tomatoes (lycopene), herbal teas (polyphenols), and red amaranth (bioactive peptides). There is also some evidence that practicing other disciplines in which the mind-body axis is used (e.g., tai chi) changes the pattern of food consumption. In this respect, practitioners of these disciplines (mainly from Asian countries) have diets based on a wide variety of vegetables, fruits, and spices, thereby obtaining a high intake of micronutrients and functional ingredients in addition to a reduced amount of dietary fat. Cross-sectional studies show that yoga practitioners have better dietary patterns than their sedentary counterparts. Palasuwan et al., when evaluating dietary intake and cardiovascular risk factors in pre- and postmenopausal Thai women who practice yoga versus
tai chi practitioners or sedentary women, found that yoga practitioners have lower intakes of fats and lower BMI (kg/m2); enzymatic antioxidant activity was similar among groups. When multidisciplinary interventions that include dietary habits, physical activity, stress management, and HY are applied, significant improvements in overall health are shown as a result of the intervention. Home practice of yoga has been postulated to predict healthy lifestyle changes, including an increased intake of fresh fruits and vegetables. Also, preliminary data obtained from a dietary evaluation of Mexican HY practitioners indicate important changes toward a healthier and more adequate diet [Table 1] and the gradual inclusion of functional ingredients such as fiber foods, lignans, and flavonoids [Figure 1].

[Table 1: Daily nutrient adequacy (%) of Mexican yoga practitioners from northern Mexico]
[Figure 1: Lignan (g/d) and flavonoid intake (mg/d) of Mexican yoga practitioners from northern Mexico. Source: author's unpublished data; 52 ± 16 years old, 65% female]

However, there are still important questions about the relationship between diet and yoga, particularly on how a practice of yoga changes other specific aspects of the diet (e.g., antioxidant intake) or how it modifies other biomarkers of dietary change (e.g., homocysteine). Studies on these issues should consider not only the qualitative and quantitative aspects involved but also the holistic nature of the phenomenon. Nevertheless, and despite very few studies involving dietary assessment in yoga practitioners (beginners and/or advanced), it can be safely concluded that the improvement in spiritual well-being results in healthier eating behaviors (EB) in the long term.
In this sense, Ayurvedic (Sanskrit: ayus [life] and veda [knowledge]) treatments, which consist of the use of herbal preparations, diet, yoga, meditation, and other practices, are gaining recognition in Western societies as a holistic alternative intended to treat many metabolic and neuropsychiatric disorders from a predictive, preventive, and personalized medicine standpoint. The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision, lists three major types of ED: anorexia nervosa, bulimia nervosa, and unidentified disorders. Anxiety and depression are common neuropsychiatric conditions in individuals with ED, being seen in ~60% of patients with anorexia and bulimia nervosa. On the other hand, binge eating is a disorder characterized by several criteria, which include consuming large amounts of food accompanied by a feeling of lack of control. Clinical, community, and population studies have reported that this disorder is associated with being overweight and with severe adiposity. The treatment of the different forms of ED is based mainly on cognitive-behavioral and interpersonal therapy, with the purpose of inducing positive behavioral changes concerning people's food intake. However, the lack of progress in treatment development, at least in part, reflects the fact that little is known about the pathophysiologic mechanisms that account for the development and persistence of ED. Yoga, while seeking the harmony of mind and body, benefits people at risk of or with established ED. Dittmann and Freedman, when studying body self-perception, attitudes toward food, and the spiritual beliefs of 158 female yoga practitioners, observed improvements in body satisfaction and self-acceptance along with reduced disordered eating associated with their yoga practice.
Similarly, intervention programs in which yoga is included as an alternative for the treatment of ED in persons with chronic obesity have shown that 12 weeks of HY practice reduces compulsive eating (binge eating), lengthens meal times, and improves food quality. Other interventions, in which problems of anorexia and bulimia nervosa are addressed with yoga practice, also show similar results. Carei et al., when studying 54 girls (11-21 years) with ED, found that 1-h yoga sessions twice a week for 8 weeks reduced symptoms of depression, anxiety, and worries about food as compared to girls with ED treated with conventional clinical methods. However, other studies have shown the opposite, especially when yoga is compared to other psychological strategies. Another study of ED in school-age women using yoga and cognitive dissonance techniques found that yoga failed to change these disorders, but cognitive dissonance reduced anxiety and the inability to express emotions (alexithymia), improving self-perception of the body as well. It is important to note at this point that the success of clinical interventions for patients with ED depends on many factors, some of which have to do with age, the type of disorder, and the severity of the symptoms that often accompany them. In conclusion, these and other studies show that the common practice of HY can impact positively on several EDs. However, more studies are needed comparing HY versus alternative clinical treatments for ED. Contemporary HY (asanas + pranayamas + dhyana), seen holistically, is effective for certain health problems such as hypertension, ED, and stress, among others. However, due to its low intensity and low EE, it is not recommended for weight loss or for improving cardiovascular conditioning. There are alternative exercises like surya namaskar, which can be included in its everyday practice, thereby improving health benefits.
Also, the practice of yoga is associated with healthy EBs such as a higher consumption of fresh vegetables, dairy products, whole grains, and functional foods (e.g., soy-based products), which could help in ED; however, more case-control studies are needed before it can be recommended as a clinical approach for eating disorders.
Yoga is an ancient oriental discipline that emerged from mystical and philosophical concepts. Today it is practiced in the West, partly due to the promotion of its benefits for improving lifestyle and overall health. As compared to non-practitioners, healthier and better eating patterns have been observed in those who practice hatha yoga (HY). In agreement with these benefits, HY can be used as a therapeutic method to correct abnormal eating behaviors (AEB), obesity, and some metabolic diseases. However, the energy expenditure during traditional protocols of HY is not high; hence, it is not very effective for reducing or maintaining body weight or for improving cardiovascular conditioning. Even so, several observational studies suggest significant changes in eating behaviors, like a reduction in dietary fat intake and increments in the intake of fresh vegetables, whole grains, and soy-based products, which in turn may reduce the risk of cardiovascular diseases. Given the inconsistency of the results derived from cross-sectional studies, more case-control studies are needed to demonstrate the efficacy of HY as an alternative method in the clinical treatment of disordered eating and metabolic diseases.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Health Insurance Tax Credit Assistance Act of 2007''. SEC. 2. FINDINGS. Congress makes the following findings: (1) Health care spending in the United States has grown rapidly to a rate of approximately 10 percent a year. (2) According to the Congressional Budget Office, with the cost of health care rising rapidly, spending for Medicare and Medicaid is projected to grow even faster--in the range of 7 percent to 8 percent annually. (3) More and more Americans with health insurance coverage are experiencing increases in out-of-pocket expenses for health care. (4) The rising costs of healthcare are driving more citizens to be uninsured or underinsured. According to the Bureau of the Census, Department of Commerce, the number of Americans without health insurance in 2005 increased by 800,000 to 46,600,000 from 45,800,000 in 2004. (5) Many of these uninsured, nonelderly adults face chronic conditions. (6) The rising costs of healthcare are compounded for Americans who suffer from a chronic disease that requires expensive treatments. Some of these uninsured adults with chronic conditions forgo needed medical care or prescription drugs, due to the prohibitive costs. (7) Many families who have a loved one with an expensive chronic condition often face a difficult dilemma: if they receive public assistance through State Medicaid programs, they must meet and maintain a certain income threshold, and if they leave public assistance for private insurance, they must then be able to meet higher premiums, co-payments and drug costs. (8) Currently, nonprofit charitable organizations have recognized a need to develop financial assistance programs for patients with expensive chronic illnesses to access treatment and therapies to lead productive and healthy lives.
(9) These patient assistance organizations (PAOs) prevent patients with expensive chronic illnesses and conditions from depleting financial resources to qualify for public assistance programs by subsidizing health insurance premiums; pharmacy and treatment co-payments; and expenses associated with Medicare. (10) The Federal Government should be looking for ways to reduce the costs to public programs like Medicaid while at the same time transitioning beneficiaries into the private health market. One way to do this is to create incentives for beneficiaries and their families to enter the workforce, earn a better living and ultimately, participate in the private health insurance market. (11) A targeted tax credit is one way the Federal Government could encourage citizens to donate to qualified PAOs. (12) The benefits of a tax credit provide the Federal Government with greater savings than the cost of the tax credits themselves by transitioning patients off public programs such as Medicaid, lifting them out of poverty, and enabling them to access health insurance coverage. (13) This tax credit also contributes to PAOs that can cover the ``TrOOP'' or ``doughnut hole'' expenses that Medicare part D does not cover for disabled and senior citizens. (14) This tax credit in the end fosters a tax policy that addresses three major areas of public policy concern-- (A) uninsured and underinsured citizens; (B) treatment for Medicare beneficiaries (``doughnut hole''); and (C) cost savings for Medicaid. SEC. 3. CREDIT FOR CHARITABLE CONTRIBUTIONS TO CERTAIN PRIVATE CHARITIES PROVIDING HEALTH INSURANCE PREMIUM ASSISTANCE AND DRUG COPAYMENT ASSISTANCE TO THE UNINSURED AND UNDERINSURED. (a) In General.--Subpart A of part IV of chapter 1 of the Internal Revenue Code of 1986 (relating to nonrefundable personal credits) is amended by inserting after section 25D the following new section: ``SEC. 25E. CREDIT FOR CONTRIBUTIONS TO THE CHRONICALLY ILL UNINSURED AND UNDERINSURED.
``(a) In General.--In the case of an individual, there shall be allowed as a credit against the tax imposed by this chapter for the taxable year an amount equal to the qualified charitable contributions made by the taxpayer. ``(b) Limitation.--The amount allowed as a credit to the taxpayer under subsection (a) shall not exceed $1,000 ($2,000 in the case of a joint return). ``(c) Qualified Charitable Contribution.--For the purposes of this section, the term `qualified charitable contribution' means a charitable contribution (as defined in section 170(c)) made in cash to a qualified charity. ``(d) Qualified Charity.--For purposes of this section-- ``(1) In general.--The term `qualified charity' means an organization described in section 501(c)(3) and exempt from tax under section 501(a)-- ``(A) which is certified by the Office of Inspector General of the Department of Health and Human Services as meeting the requirements of paragraph (2), and ``(B) which is organized under the laws of a State at the time the contribution is made and is exempt from income taxation (if any) by such State. ``(2) Charity must work to assist chronically ill patients with health insurance premium assistance and copayment assistance.--An organization meets the requirements of this paragraph only if the predominant activity of such organization is the subsidizing of health insurance premiums and pharmacy co-payments of individuals who are uninsured or cannot otherwise afford health insurance or drug treatments. ``(e) Denial of Double Benefit.--No deduction shall be allowed under any other provision of this chapter for any contribution for which a deduction or credit is allowed under subsection (a). ``(f) Election to Not Take Credit.--No credit shall be allowed under subsection (a) for any contribution if the taxpayer elects to not have this section apply to such contribution.''. 
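The credit mechanics of subsections (a) through (c) reduce, in effect, to capping qualified cash contributions at the subsection (b) limit. The sketch below is an illustration only, not part of the statute: the function name and the simplified filing-status flag are this sketch's own, and it ignores the charity-certification rules of subsection (d) and the election out under subsection (f).

```python
# Illustrative sketch of the Sec. 25E credit ceiling:
# qualified cash contributions, capped at $1,000 ($2,000 on a joint return).
# Function name and joint_return flag are hypothetical simplifications.

def sec_25e_credit(qualified_contributions, joint_return=False):
    """Credit allowed under subsection (a), limited by subsection (b)."""
    cap = 2000 if joint_return else 1000
    return min(qualified_contributions, cap)

print(sec_25e_credit(1500))                     # single filer: capped at 1000
print(sec_25e_credit(1500, joint_return=True))  # joint return: full 1500 allowed
```

A contribution below the cap is credited in full; only amounts above the $1,000/$2,000 ceiling are left uncredited (and, per subsection (e), cannot be double-counted as a deduction elsewhere).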
(b) Clerical Amendments.--The table of sections of such subpart is amended by inserting after the item relating to section 25D the following new item: ``Sec. 25E. Credit for contributions to the chronically ill uninsured and underinsured.''. (c) Effective Date.--The amendments made by this section shall apply to taxable years beginning after the date of the enactment of this Act.
Health Insurance Tax Credit Assistance Act of 2007 - Amends the Internal Revenue Code to allow a tax credit for charitable contributions to tax-exempt charities which subsidize health insurance premiums and pharmacy co-payments of uninsured individuals or individuals who cannot otherwise afford health insurance or drug treatments.
Spirits were high inside the House chamber on Thursday, November 16, when, in the early afternoon, the gavel fell and a measure to rewrite the American tax code passed on a partisan tally of 227 to 205. As the deciding votes were cast—recorded in green on the black digital scoreboard suspended above the floor—the speaker of the House, Paul Ryan, threw his head back and slammed his hands together. Soon he was engulfed in a sea of dark suits, every Republican lawmaker wanting to slap him on the shoulder and be a part of his moment. Ryan was the man of the hour. Having spent a quarter-century in Washington—as an intern, waiter, junior think-tanker, Hill staffer and, since 1999, as a member of Congress—he had never wavered in his obsession with fixing what he viewed as the nation’s two fundamental weaknesses: its Byzantine tax system and ballooning entitlement state. Now, with House Republicans celebrating the once-in-a-generation achievement of a tax overhaul, Ryan was feeling both jubilant and relieved—and a little bit greedy. Reveling in the afterglow, Ryan remarked to several colleagues how this day had proven they could accomplish difficult things—and that next year, they should set their sights on an even tougher challenge: entitlement reform. The speaker has since gone public with this aspiration, suggesting that 2018 should be the year Washington finally tackles what he sees as the systemic problems with Social Security, Medicare and Medicaid. Tinkering with the social safety net is a bold undertaking, particularly in an election year. But Ryan has good reason for throwing caution to the wind: His time in Congress is running short. Despite several landmark legislative wins this year, and a better-than-expected relationship with President Donald Trump, Ryan has made it known to some of his closest confidants that this will be his final term as speaker.
He consults a small crew of family, friends and staff for career advice, and is always cautious not to telegraph his political maneuvers. But the expectation of his impending departure has escaped the hushed confines of Ryan’s inner circle and permeated the upper-most echelons of the GOP. In recent interviews with three dozen people who know the speaker—fellow lawmakers, congressional and administration aides, conservative intellectuals and Republican lobbyists—not a single person believed Ryan will stay in Congress past 2018. Ryan was tiring of D.C. even before reluctantly accepting the speakership. He told his predecessor, John Boehner, that it would be his last job in politics—and that it wasn’t a long-term proposition. In the months following Trump’s victory, he began contemplating the scenarios of his departure. More recently, over closely held conversations with his kitchen cabinet, Ryan’s preference has become clear: He would like to serve through Election Day 2018 and retire ahead of the next Congress. This would give Ryan a final legislative year to chase his second white whale, entitlement reform, while using his unrivaled fundraising prowess to help protect the House majority—all with the benefit of averting an ugly internecine power struggle during election season. Ryan has never loved the job; he oozes aggravation when discussing intraparty debates over “micro-tactics," and friends say he feels like he’s running a daycare center. On a personal level, going home at the end of next year would allow Ryan, who turns 48 next month, to keep promises to family; his three children are in or entering their teenage years, and Ryan, whose father died at 55, wants desperately to live at home with them full time before they begin flying the nest. The best part of this scenario, people close to the speaker emphasize: He wouldn’t have to share the ballot with Trump again in 2020. 
And yet speculation is building that Ryan, even fresh off his tax-reform triumph, might not be able to leave on his own terms. He now faces a massive pileup of cannot-fail bills in January and February. It’s an outrageous legislative lift: Congress must, in the coming weeks, fund the government, raise the debt ceiling, modify spending caps, address the continuation of health care subsidies, shell out additional funds for disaster relief and deal with the millions of undocumented young immigrants whose protected status has been thrown into limbo. It represents the most menacing stretch of Ryan’s speakership—one that will almost certainly require him to break promises made to his conference and give significant concessions to Democrats in exchange for their votes. To meet key deadlines, he’ll have to approve sizable spending increases and legal status for minors who came to the U.S. illegally—two things that could raise the ire of the GOP base and embolden his conservative rivals on Capitol Hill. There is no great outcome available, Ryan has conceded to some trusted associates—only survival. “Win the day. Win the next day. And then win the week,” Ryan has been preaching to his leadership team. The speaker can't afford to admit he’s a lame duck—his fundraising capacity and deal-making leverage would be vastly diminished, making the House all the more difficult to govern. When asked at the end of a Thursday morning press conference if he was leaving soon, Ryan shot a quick “no” over his shoulder as he walked out of the room. Clockwise, from upper left: Paul Ryan with President Donald Trump after a November meeting of the House Republican Conference; outgoing House Speaker John Boehner hands the gavel off to Ryan in Oct. 2015; Ryan and Senate Majority Leader Mitch McConnell in the halls of the Capitol en route to a Nov. 2015 GOP policy luncheon; Rep. Mark Meadows, head of the House Freedom Caucus, fields reporters' questions in April.
Ryan is backed by the vast majority of the GOP Conference, but even a small group of dissenters can make the speaker’s life miserable—and he knows it. When Ryan succeeded Boehner in the fall of 2015, the new speaker sought to eliminate—or at least weaken—the parliamentary tactic that had been used against his predecessor. By filing a “motion to vacate the chair,” Rep. Mark Meadows of North Carolina had found a way to force a vote on the speakership at any time—a potential humiliation that Boehner avoided by retiring. Ryan, working with Boehner’s team during the transition, was unsuccessful in banning this practice. But he made it clear to Boehner at the time, and to his own allies upon assuming the speakership, that he would not serve at the whims of Meadows and the House Freedom Caucus, a group of some 35 conservative hard-liners. In an interview this fall with POLITICO Magazine, Ryan said the motion to vacate doesn’t loom large as a constant threat to his job security. “No, because it’s not a job I ever wanted in the first place,” Ryan said. “If I was dying to be speaker, I guess it probably would be a dagger over my head. But I don’t think like that.” Members of the Freedom Caucus don’t necessarily believe this rhetoric from Ryan, but they respect the strategic advantage it gives him. After all, when Boehner left town, Ryan was the only consensus replacement—and even then, members had to beg him to assume the most powerful office on Capitol Hill. Given that history, any conservative who attempts to overthrow Ryan would make the Freedom Caucus—and its two leaders, Meadows and Ohio Rep. Jim Jordan—look like nihilists who collect speakers’ scalps for sport. This is especially true without any obvious, universally acceptable successor waiting in the wings. “There are no more golden boys left,” Meadows said in an interview, discussing the possibility of Ryan’s departure.
Ryan’s problems are not limited to the Freedom Caucus; there is, without question, wider discontent in the conference than the speaker appreciates, with legislators across the ideological and experiential spectrums grumbling about a hypercentralized process that gives them a vote on the floor but little else. That said, it requires a special brand of gumption to go after the speaker’s gavel—and the usual suspects can be found in and on the periphery of the Freedom Caucus. These members, who have been eerily quiet for much of 2017, have begun making noise about a mutiny. The expectation of a major betrayal on Ryan’s part—either on spending levels, immigration or a combination of the two—has prompted incessant chatter in recent weeks of someone filing a motion to vacate the chair, perhaps as soon as next month. This could be gamesmanship, a bluff to make Ryan feel pressured to step aside. But with a sudden, pervasive sense that Ryan might be ready to leave anyway, a motion to vacate would make sense as a test of his desire to stay on the job. Either way, the convergence of these realizations—Ryan wanting to retire after 2018, and a possible threat to his speakership even sooner—has sparked a flurry of activity in the offices of Majority Leader Kevin McCarthy and Majority Whip Steve Scalise, the two most likely successors to Ryan. Both believed Ryan would leave late next year and were therefore planning their next steps at an appropriately deliberate pace. This has abruptly changed: According to multiple GOP sources, both McCarthy and Scalise have taken recent meetings with members loyal to them who have been eager to strategize about life after Ryan. There is little chance Scalise runs against McCarthy, but the whip—knowing McCarthy lacked the votes to become speaker in 2015, prompting Ryan to accept the job—is taking careful stock of the conference, preparing to launch his own candidacy should McCarthy stumble a second time. 
Left: Kevin McCarthy takes questions from the press in the U.S. Capitol, Oct. 2015. Right: Steve Scalise at the National Christmas Tree Lighting, Nov. 2017. | Getty Images

The one person who can keep these dominoes from falling, at least in the near term, is Trump. The president and the speaker have been a better pairing than anyone could have imagined a year ago, considering Ryan abandoned the GOP nominee during the homestretch of the 2016 campaign. The speaker has kept shoulder to shoulder with the White House at moments of vulnerability, knowing Trump can shield him—and his members—from the fury of the right. If the president endorses whatever grotesque legislative meatball comes out of the House in the coming weeks—publicly and unambiguously—it’s impossible to see Ryan facing any real threat. If the president distances himself from the speaker, however, the floodgates could open—and quickly. Conservatives, having whispered in the president’s ear about Ryan not sharing his interests, will be watching carefully for cues. So too will Steve Bannon, who has been conspicuous this year in holding his fire on Ryan, an old nemesis, while laying waste to Senate GOP leader Mitch McConnell. Bannon and Meadows, a pair of disruptive forces, have spent the past year keeping Ryan’s blood out of the water—but in the unlikely scenario that Trump suddenly sours on the House speaker, they will be inviting the sharks to dinner. Underscoring all of this palace intrigue are some strange realities—such as the fact that Ryan’s survival as speaker might require cover from the very president who once believed that Ryan was trying to sabotage his presidential campaign. Or the notion that Ryan, should he secure his final year in office, will use it to pursue the type of dramatic, politically risky entitlement reforms that Trump explicitly ruled out while running for president.
Perhaps no piece of irony is more striking, or effective in capturing this volatile period of Republican history, than the juxtaposition between Ryan celebrating his dream of rewriting the tax code—while hearing of renewed threats to his speakership. *** Ryan nearly walked away from Congress once before. It was November 2012, after Mitt Romney’s loss to Barack Obama, and the would-be vice president found himself despondent and homesick. Ryan told his wife, Janna, that he was considering retirement. That’s when Boehner called. The speaker, concerned about the stability of his conference, could not afford to lose Ryan; he promised the influential Budget Committee chairman a waiver so he could lead the panel for another two years. Ryan agreed, and as the sting of 2012 receded, he began to map out his political future—and his exit strategy from Congress. Having run and lost a national campaign, Ryan rejected pleas to consider his own presidential prospects; instead, he set his sights on the Ways and Means Committee. The chairmanship, which he had long viewed as a dream job, would open after 2014, and Ryan saw it as the perfect perch from which to both pursue his long-standing policy goals and influence the direction of the national party in 2016. Ryan had it all figured out, according to interviews at the time with his friends, family and staff: He would chair the committee, help a newly elected Republican president write a sweeping overhaul of the tax code, and then ride off into the sunset. But it wasn’t meant to be. Less than a year into Ryan’s Ways and Means tenure, Boehner decided to call it quits. He had asked his protege several times over the previous year—since the primary defeat of Majority Leader Eric Cantor—to succeed him as the speaker. “I gave him the Heisman every time,” Ryan told POLITICO. The Ways and Means chairman was content to support his friend McCarthy. But the Freedom Caucus wasn’t. 
Jordan and Meadows, concerned that McCarthy, a pragmatic Californian, would lead no differently than Boehner, made him a series of offers—their support in exchange for something from him. One of the proposals called for process reforms, including a drastic restructuring of the Steering Committee, which decides committee assignments and chairmanships. Another, more politically explicit offer, promised McCarthy the group’s votes if he could make either Jordan or Meadows the majority leader. When McCarthy bristled, suggesting he couldn’t possibly deliver what they wanted, the group told him he wouldn’t have enough votes on the House floor to become speaker—even if he had already scored an overwhelming majority in the closed-door conference election. Hours before that private vote was set to occur, McCarthy called Ryan to say he didn’t want the job—and that really, Ryan should take it. He still wasn’t interested. Only after several days of around-the-clock phone calls from prominent Republicans did Ryan open himself to the possibility. He began thinking about his conditions for accepting the job. One was family time on the weekends, which was non-negotiable; another was support from the Freedom Caucus. Ryan had watched Boehner struggle to contain the rebellion after the tea party wave of 2010; he would not assume the speakership over the objections of the same rambunctious members who had helped drive Boehner from office. By securing their support up front, Ryan hoped to inoculate himself against the inevitable future grumblings from House conservatives. Some Freedom Caucus members had reservations about Ryan, but others were ecstatic at his willingness to take the job.
Unlike with Boehner, they saw the Wisconsinite—an Ayn Rand devotee and fierce critic of the welfare state—as one of their own. He was equally appealing to other factions of the conference—a sober-minded, well-spoken, telegenic leader with policy experience and people skills. After five years of civil war, there was no other figure who could unite the fractured House Republican Conference. Ryan’s colleagues teasingly called him “The Chosen One,” and in late October 2015, he assumed the speakership. The cease-fire was short-lived. Conservatives say Ryan failed a critical first test just weeks after taking the gavel, when he refused to leverage government funding to impose new restrictions on the nation’s refugee resettlement program in the wake of the mass shooting in San Bernardino, California. Jordan pleaded with the new speaker to hold out for increased vetting, telling him that it would show Obama and Democratic Senate leader Harry Reid that “there’s a new sheriff in town.” Ryan refused—an original sin that chafes Jordan to this day. As he struggled to adjust to his complex new role—“like Einstein learning to write poetry,” is how one of Ryan’s admirers described it—he committed another strategic error that would prove costly with some of his members: dismissing the reality TV star running for his party’s nomination. In private conversations, Ryan called Trump “a joke” whose penchant for identity politics was dividing the country and dooming the Republican Party’s future. He wasn’t much gentler in public: For most of the campaign, Ryan made it seem he felt honor-bound to denounce the candidate’s latest incendiary remark or antic, as though the two were personally engaged in a tug-of-war for the GOP’s soul. This annoyed some of Ryan’s members—both pro-Trumpers and others who disliked him but respected the anger he was tapping into.
When the speaker initially withheld his support after Trump became the presumptive nominee—then continued to poke at him even after issuing a grudging endorsement—some of Ryan’s colleagues wondered if he was attempting to sabotage the GOP ticket. Ryan made a point, for instance, never to be photographed with Trump—fearful of how it would be used to tarnish his brand, according to multiple sources. But the speaker came to agree that the icy relationship between them was unhelpful to the national party. Under mounting pressure from his members, as well as his longtime friend, Republican National Committee Chairman Reince Priebus, Ryan offered an olive branch, inviting the GOP nominee to his district for a Saturday afternoon campaign rally. All hell broke loose on the evening before the event, when the Washington Post published a decade-old recording of Trump boasting about his sexual exploits—and his ability to grope women because of his celebrity status. It seemed to be the nail in his campaign’s coffin. Ryan immediately disinvited Trump from Elkhorn, railing to Priebus and other GOP officials about the man he had never trusted or respected in the first place. Feeling validated, and certain that his members were equally outraged, Ryan wanted to take decisive action—even entertaining the idea of withdrawing his endorsement of Trump. On an emergency leadership conference call the weekend of the “Access Hollywood” tape, Ryan floated this idea as House GOP leaders debated how far to distance themselves from Trump. He would cripple their majority, the speaker said; cutting him off might be their best hope of saving the House. It was ultimately McCarthy—who has become Trump’s favored member of the GOP leadership—who talked Ryan down. Withdrawing their support, he said, according to multiple sources familiar with the call, might backfire by hurting their own members. He suggested they cool off and not act on impulse. 
Ryan agreed, yet somehow still crossed the line with many House Republicans when he declared—on another conference call the following Monday morning, this time with all Republican members—that he would no longer defend or campaign for Trump. That call, leaked to reporters in real time, lit a fire in the grass roots. Congressional phone lines exploded thereafter, with irate GOP constituents calling for Ryan’s head. Some members began questioning the sustainability of his speakership; in late October 2016, after leaders scheduled Ryan’s speakership nomination, a number of pro-Trump House members urged Ryan to postpone it so they had more time to consider if he should lead the conference. One of those was Rep. Jim Renacci, an establishment-friendly Republican who had long served with Ryan on the Ways and Means Committee. The Ohio Republican started garnering signatures on a secret letter arguing that “the conference is divided” and “there is no reason to hastily hold elections.” Freedom Caucus members sensed an opportunity. At a secret meeting at Meadows’ downtown D.C. apartment, days before the election, Freedom Caucus board members devised a plan to deny Ryan the 218 votes needed to retain his speakership. The strategy called for Jordan to serve as the “sacrificial lamb,” running against Ryan—not to win, but to keep the speaker from having the votes needed for reelection. The idea was that Ryan, who talked frequently (and annoyingly, to some members) about how he had never wanted the job in the first place, would step aside to avoid the spectacle. Conservatives had already begun searching for a new speaker from outside their narrow ranks—someone who would command the respect of the conference. Rep. Mike Pompeo—then a little-known, dry-witted defense hawk who’d later make friends in high places and become Trump’s CIA director—became their top choice. 
As Republicans schemed against their speaker, the underlying assumption was that Trump would lose and the conservative base would be out for blood—that, or Trump would win and kick Ryan to the curb. Either way, he would be finished. *** Less than an hour before the polls closed on November 8, 2016, Ryan made the phone call he’d been dreading. With a handful of staffers and family members lingering nearby, Ryan was patched through to senior officials at the RNC in Washington. They had been analyzing voting patterns and running turnout models throughout the day, and were prepared to share their projections with the speaker: Trump was going to go down in flames, earning just 220 electoral votes. Republicans would lose nearly 20 House seats. Democrats would retake control of the United States Senate. Exactly the debacle Ryan had feared. Stewing inside his team’s war room at the Holiday Inn in Janesville—the site of his own election night party—Ryan could not stomach the thought of working with President Hillary Clinton. That said, he wasn’t exactly thrilled about working with Trump, whose campaign—fueled by anger, resentment and nativism—was, in his view, a rejection of conservatism’s highest ideals. As disappointed as he was about Clinton’s apparent victory, the speaker saw a silver lining: He would seize the occasion of Trump’s defeat—beginning that night—to speak about a return to an inclusive, aspirational, Jack Kemp-inspired “happy warrior” conservatism, and a rejection of Trumpism. But Ryan never got the chance. His own race had been called early, and attendees waited patiently in the ballroom for his victory speech. But the speaker was paralyzed in the war room, watching in disbelief as Trump surged past Clinton in the pivotal battlegrounds of Florida and North Carolina. 
The RNC’s numbers, his advisers told him, were garbage: The GOP’s Senate majority appeared safe, only a handful of House Republicans were losing, and if the current trends held, Trump was going to win the biggest upset in presidential history. Just before 10 p.m. Eastern, Ryan finally took the stage and spoke for three minutes. “I’ve just been sitting there watching the polls,” Ryan said, the shock written all over his face. “By some accounts, this could be a really good night for America. This could be a good night for us. Fingers crossed.” Ryan faced a legacy-shaping decision that night: Stay true to himself and step down as speaker, or muzzle himself and serve alongside Trump in a unified GOP government. It was a no-brainer: This was Ryan’s chance to actually achieve the things he had only fantasized about. Even if that meant getting in bed with the likes of Trump and Bannon. And even if that meant accommodating behavior from a Republican president that he would never tolerate from a Democrat. It was a trade-off Ryan could not refuse. It was, in the refrain of the speaker’s allies, “Paul’s deal with the devil,” one that he would make all over again. Chasing his legislative dreams would require keeping his criticisms of Trump to himself. “You can’t create a sideshow, even if there’s cause for a sideshow, because it’s going to get in the way of getting the big things done,” Boehner told POLITICO Magazine of Ryan’s approach to Trump. “Paul has got his head on straight. He’s very comfortable with who he is and what he’s got to do.” As some conservatives watched eagerly for a smoke signal from the president-elect—hoping he would remember the speaker’s disloyalty and recommend a replacement—Ryan moved quickly to secure his standing.
He spoke with both Trump and his longtime friend, Vice President-elect Mike Pence, in the hours before Trump’s victory speech, and made swift plans Wednesday morning to meet with them in Washington the next day. Before their meeting, Ryan shared with several friends that he planned to start his talk with Trump by mentioning their bad blood during the campaign, and explaining why he had said and done certain things. They cut him off: That was a terrible idea. Don’t remind Trump of how much he despised you in the past, they said. Focus on the future. Ryan listened. And the advice was sound: To this day, despite Trump’s famously long memory, sources in both camps say the president and speaker have never once revisited their old feud. Indeed, a surprising subplot of the unified GOP government’s first year has been the unlikely alliance between Trump and Ryan. The healing process that began in D.C. two days after the election culminated in Pence delivering the message to House Republicans, just minutes before the speakership election, that Trump supported Ryan. (Only one Republican, Thomas Massie of Kentucky, voted against him.) The relationship since has been strangely drama free: Trump and Ryan talk often throughout the week, chewing on questions of policy and process and politics. Never once has there been a blowup, either in person or over the phone. Sources close to both men say they occasionally vent about the other—Trump telling aides that the speaker can’t count votes; Ryan complaining to leadership comrades about the president saying things unbecoming of his office—and yet these feuds are, somewhat miraculously, kept in the family. President Trump and Speaker Ryan during a June 2017 meeting with House and Senate leadership in the Roosevelt Room of the White House. 
| Getty Images

Ryan’s allies paint this as part of a broader picture—his stronger-than-expected partnership with the president; his landmark victories in passing tax reform, as well as a repeal and replacement of the Affordable Care Act, on the House floor; his prolific, historic fundraising on behalf of the embattled GOP majority—to argue that his Faustian bargain has proven worthwhile. And they cite these same examples to dismiss the seriousness of threats against his speakership: What more could House Republicans possibly want from him? “Paul Ryan is by far and away the best possible person we have to lead this group of people in the right direction,” said one such ally, Rep. Tom Rooney of Florida. “I just think that any talk of him leaving, I hope that’s not true. It would be a major setback for our cause.” Every speaker deals with varying degrees of discontent in their conference. In Ryan’s case, it owes less to ideology than process. Specifically, members who felt marginalized under Boehner—who ran a top-down operation that cut out committee chairs and left little room for lawmakers to shape legislation—feel the House is even more centralized under Ryan. This was evident in the speaker’s first, botched attempt at repealing Obamacare: He wrote the bill on his own, then framed it as a “binary choice” for members to either back his proposal or be viewed as supporters of Obamacare. The stunt rubbed Ryan’s colleagues the wrong way, particularly Freedom Caucus members who had extracted promises from him in 2015 about opening up the House and restoring regular order. Conservatives aren’t the only ones annoyed with Ryan’s approach, and it isn’t just back-bench members voicing displeasure. Sources close to House Budget Chairwoman Diane Black, a longtime Ryan ally, said she was deeply upset over the summer about Ryan’s treatment of her budget process—though she, like many other senior members friendly with the speaker, would never voice these criticisms publicly.
Ryan trampled on Black’s budget in order to expedite the push for tax reform. But when the time came to draft the legislation, members of the Ways and Means Committee—who had worked alongside Ryan for years—grew upset at what they saw as the speaker’s dictatorial approach. Tax writers vented to the White House that he wielded an iron grip on the process and that they had little imprint on the final product; members grumbled about Ryan big-footing Ways and Means Chairman Kevin Brady. Members of his committee said they didn’t see the final bill until just days before they voted on it. It’s no coincidence that more than half a dozen members on Ways and Means—one of the most powerful and desired positions in Congress—are walking away from the House in 2018. Renacci, in an interview, specifically cited Ryan’s top-down style as a reason he was leaving the House to run for governor of Ohio. “You’ve got to be willing to let everyone bring one pebble of sand to the beach, so they at least feel they helped build the beach,” Renacci said. “And that’s regular order. If you don’t get that, you’re never going to be a leader here in the conference.” Similar complaints dogged the previous speaker. But unlike Boehner, who bunkered down and lost touch with many of his newer members, Ryan has made a sustained effort to engage the full spectrum of his colleagues on a regular basis, with both group and individual meetings. This has given rank-and-file lawmakers greater access to the speaker—though not a greater role in the legislative process. At the end of the day, the real threat to Ryan exists in the same place it did during Boehner’s speakership: on the right flank of the conference. Early this fall, as the tax-reform battle was heating up, Rep.
Walter Jones of North Carolina—a constant thorn in Boehner’s side—joked to fellow conservatives that he wanted to issue a formal apology to the former speaker. Having experienced Ryan’s tight grip on the House, Jones said, he now viewed Boehner as a legend—a remark that elicited laughs but also murmurs of agreement in the room. Since then, the idea of conservatives writing an apology letter to Boehner has become a running joke on the right. “He’s more controlling than Boehner … and I voted against John Boehner and worked with Mark Meadows to vacate the chair,” Jones said. “I’m very dissatisfied. I’ve been here 22 years and this is the most closed shop I’ve ever seen.” Even so, there is no comparing the two speakers at this point. Whereas Boehner had lost all goodwill with conservatives by the time of his exit, Ryan today has strong allies on the right—even if there are an equal number of detractors. “I think Ryan has done a good job,” said Republican Study Committee Chairman Mark Walker. “He to this day has my full, 100 percent support. … He has to herd all these different factions and people on a daily basis, and I respect that.” *** Ryan and his team have operated under the assumption that if Republicans enact the first tax overhaul in 30 years, much of the frustration will wane—and partywide euphoria at the realization of their first major legislative victory will make the year-end, bipartisan deals easier to swallow. But rank-and-file members aren’t so sure. And they worry that Ryan’s tunnel vision on tax reform has weakened the GOP’s negotiating hand against the Democrats. During a recent meeting with elders of the RSC, Rep. Tom Graves, a senior member on the Appropriations Committee, argued that a “mystic, hypnotic fog of tax reform” had crippled the conference over the past two months, “paralyzing” Republicans from creating an effective spending strategy to advance other Trump priorities. That concern has echoed throughout the conference in recent weeks.
Many House Republicans fear they will be forced to back a massive spending package that drives up the deficit—and an immigration compromise antithetical to the beliefs of the party base. During a meeting in Ryan’s office in early November, Rep. Warren Davidson, the Ohio Republican who replaced Boehner and promptly joined the Freedom Caucus, held up the speaker’s “Better Way” pamphlet from 2016 and told him: “There is no ‘DACA amnesty play’ in the playbook.” Hoping to assuage these concerns, Ryan last week summoned representatives from the conference’s various factions to meet and come up with a unifying plan ahead of the December 22 deadline to fund the government. The resulting strategy, one that Ryan pledged to execute, has House Republicans sending a funding bill to the Senate next week that includes GOP priorities, such as increased Pentagon funding, but nothing for Democrats—and then leaving town and daring Senate Democrats to vote no. (It’s a difficult promise to keep, since McConnell needs at least eight Democratic votes to approve any deal—not to mention that the Senate is accustomed to jamming the House, not the other way around.) Ryan can see the storm clouds gathering. But people close to him insist he would never quit mid-Congress, even if passing tax reform into law provides the perfect opportunity to walk away—and even if recent accounts of sexual misconduct among House members have made the job even more stressful. (Two friends say Ryan was visibly shaken after demanding that Arizona Rep. Trent Franks resign his seat, telling them, “I didn’t realize slitting throats was part of my job.”) Even though few members believe Ryan’s job is truly in jeopardy, the whispers of his not-far-off retirement have sent various constituencies scrambling to prepare for a shake-up.
Members loyal to Scalise have urged him to have a candid discussion with McCarthy about McCarthy’s inability to unify the conference, while McCarthy’s allies have urged him to line up the president’s support so it’s ready at a moment’s notice. Some neutral parties think Scalise has the inside track—that Trump’s backing won’t be enough to put McCarthy over the top, and that Scalise’s already-high stock has skyrocketed since he survived an assassination attempt in June. But the Freedom Caucus will be focused less on personalities than process: As in 2015, conservative members are drafting various demands that the next speaker must meet in exchange for their votes. This will cause eyes to roll in some quarters of the conference. But the reality is that while Ryan would surely win a hypothetical near-term battle over his speakership, the Freedom Caucus is already winning the war. This is simple math: Because of its size and willingness to vote as a bloc, the Freedom Caucus will almost certainly provide the margin to crown the next speaker. And assuming Republicans lose seats next year—with swing-district moderates the first to fall—the conservatives will have even more leverage over GOP leadership in the coming Congress. In a period of particular tension a few months back, one conservative member presented Meadows with a fake draft of a motion to vacate the chair. It was meant to make light of what conservatives viewed as their sorry situation in that moment: nine months into a unified Republican government, and still without a single legislative victory. Meadows told the member the prank was “not funny.” But to some members, the prospect of taking out Ryan clearly isn’t a joke. It only takes one of them, eager to antagonize the leadership and win lots of headlines, to file the motion and plunge the House into temporary chaos. The question at that point becomes how hard Ryan is willing to fight to retain his speakership—and how forcefully other Republicans come to his defense.
When Meadows made his attempt on Boehner, dozens of allies rushed to the former speaker’s office to strategize, demanding an immediate vote to show their strength. Ryan, who keeps a small circle of close friends, does not have any comparable apparatus of longtime loyalists determined to protect him—nor does he view his legacy in Congress as inextricably tied to the position of speaker of the House. “You’ve got to remember, I’m the only guy in the modern era who didn’t want this job,” Ryan told POLITICO Magazine this fall. “I did this because I had to do it. And I’m happy and I’m grateful for the job and it’s a great honor. And I feel like I was made for this moment. So I’m good with it. But I’m not a person who covets it. And I never was. So I always feel liberated by that.” Whenever Ryan exits, familiar questions will resurface about whether the Republican Party is governable—and whether, in Congress, there will ever be a leader capable of uniting its tribes. Congress runs on relationships: Boehner was personally popular among members, even those who voted against his initiatives, and the same can largely be said for Ryan. But this no longer seems sufficient. Dissension in the House Republican ranks is explained not by incompatible personalities, but rather by a fundamental disconnect between the leadership and the rank and file over questions of legislative involvement and procedural transparency. Whoever wishes to succeed Ryan would do well to realize it. During a conference meeting last week, Raúl Labrador of Idaho, a founding member of the Freedom Caucus, ripped into the leadership. “It’s not that we don’t like you,” he said to McCarthy, who stood at the podium. “It’s that we don’t trust you.” Tim Alberta is national political reporter at Politico Magazine. Rachael Bade is a congressional reporter for Politico.
||||| Speaker Paul Ryan (R-Wis.) says he’s not quitting Congress anytime soon. Asked at the end of his weekly press conference whether he was leaving Congress “soon,” Ryan chuckled and replied as he walked off the stage: “I’m not, no.” Rumors have been swirling for weeks that Ryan — who this October marked his second year in the Speaker’s office — could resign from Congress shortly after passing his No. 1 legislative priority: tax reform. The House and Senate are expected to pass a final version of their historic tax-cuts bill next week, with President Trump planning to sign it into law by Christmas Day. In early November, a number of Ryan’s GOP colleagues told The Hill that the Speaker could pass tax reform and quickly quit Congress, choosing to go out on top with a victory rather than wait to be forced out like his predecessor, Speaker John Boehner (R-Ohio). “There is certainly a school of thought that says ‘leave on a high note.’ And passage of tax reform would be a high note for a guy that’s spent 18 years in Congress working on it,” one GOP lawmaker close to Ryan told The Hill last month. The rumors of Ryan’s possible departure kicked into high gear this week after HuffPost published a piece titled: “When will Paul Ryan step down?” The story prompted a reporter Thursday to ask Ryan whether he planned to step down anytime soon.
Later Thursday, Politico published a lengthy story detailing that the 47-year-old Speaker has told close confidants that he will retire after the 2018 midterm election. A native of Janesville, Wis., Ryan was first elected to the House in 1998, and went on to serve as chairman of the Ways and Means and Budget committees as well as Mitt Romney’s vice presidential running mate in 2012. Right now, it’s unclear who could succeed Ryan if he decides to quit in the coming weeks or at the end of his term. There is no clear heir apparent. But his top deputies — Majority Leader Kevin McCarthy (R-Calif.), Majority Whip Steve Scalise (R-La.) and GOP Conference Chair Cathy McMorris Rodgers (R-Wash.) — all have been raising their profiles in recent weeks, positioning themselves to climb the leadership ladder once Ryan makes a call. The chairmen of two powerful conservative blocs — Rep. Mark Walker (R-N.C.) of the Republican Study Committee and Rep.
Mark Meadows (R-N.C.) of the Freedom Caucus — have been deeply involved in the health and tax bills and are key players to watch. ||||| (CNN) House Speaker Paul Ryan has had soul-searching conversations about his future with friends, some of his close friends tell CNN. Those people close to Ryan told CNN they believe it is possible that he could leave Congress after the 2018 midterm elections, if he can achieve his goal of passing a GOP-backed overhaul of the US tax system. Some say his departure could possibly happen even sooner. Some friends indicate that Ryan may be suffering from a bout of "Trump-haustion," but others believe there is serious contemplation of leaving Congress in 2018. According to people close to Ryan, the idea that he would resign immediately after tax reform, because it's all he's ever wanted, is not accurate. Ryan particularly dislikes the toll the job takes on his family, according to multiple sources. Politico published a report Thursday attributed to unidentified sources that Ryan, a Wisconsin Republican, has made clear to those around him that he "would like to serve through Election Day 2018 and retire ahead of the next Congress." Ryan vehemently denied the report, telling reporters that he is here to stay.
– Paul Ryan could be leaving Congress by the end of 2018—if not much sooner. Politico spoke to three dozen people close to Ryan, and none of them believed the speaker of the House would still be in Congress after 2018. On the verge of passing tax reform—one of his major goals since entering Congress in 1999—sources say Ryan wants to use 2018 to tackle his other, more politically risky goal: reforms to Social Security, Medicare, and the rest of the social safety net. They say Ryan would serve through 2018, then retire before the next Congress is seated. Sources say Ryan doesn't want to campaign alongside Trump in 2020, "feels like he's running a daycare," and wants to spend time with his actual children instead. Close friends tell CNN Ryan is suffering from "Trump-haustion." But it's no guarantee Ryan will even last that long. The next few weeks in Congress—with success needed on tax reform, government funding, and more—could sink the speaker. He's reportedly told some close to him that his current strategy is short-term: "Win the day. Win the next day. And then win the week." During his weekly press conference Thursday, Ryan denied that he's quitting Congress "soon," the Hill reports. "I ain't goin' anywhere," he told reporters when asked about the rumors. A spokesperson later added that reports to the contrary are "pure speculation."
Ongoing optical surveys have discovered new classes of supernovae (SNe), including sub-luminous (e.g., @xcite) and super-luminous (e.g., @xcite) events. Many of these events are difficult to explain in the context of standard models in which radioactive decay powers the optical light curve (e.g., SN 2008es, @xcite). Alternative explanations fall into two main categories. Overluminous Type IIn SNe with narrow hydrogen emission lines are thought to be powered, at least in part, by the interaction of the supernova ejecta with circumstellar material, effectively re-thermalizing the supernova shock energy @xcite. Alternatively, the SN ejecta may be re-energized by the spindown power of a rapidly rotating magnetar which formed in the core collapse (@xcite, hereafter KB10). For either of these two mechanisms to significantly modify the SN light curve, the energy input must occur at relatively late times (weeks to months), when radiative diffusion through the ejecta is efficient. Accretion onto a central compact remnant represents another potential means of injecting large amounts of energy in either successful or failed supernova explosions. Compact object accretion is associated with large-scale outflows in neutron stars @xcite and stellar-mass black holes (microquasars, e.g., @xcite), and these outflows can carry away as much as @xmath3 of the gravitational binding energy of the infalling material. This is particularly true when the accretion flow is rotationally supported and radiatively inefficient @xcite. In "failed" SNe, the entire star accretes onto the central remnant, a black hole. If the progenitor lacks sufficient angular momentum to form a disk and hence tap the available accretion energy, these events are "unnovae" @xcite, i.e., stars disappearing suddenly from view.
In the opposite case, where even the iron core becomes rotationally supported, the accretion energy may power a long-duration gamma-ray burst (GRB) (the collapsar mechanism, @xcite). Much longer gamma-ray transients may also be possible if either the mantle @xcite or the hydrogen envelope (@xcite, hereafter QK12) becomes rotationally supported and drives a relativistic jet. The timescale associated with the energy injection corresponds roughly to the free-fall time of a stellar layer, about @xmath4 s for the iron core, but as long as @xmath5 yr for the hydrogen envelope of a red giant. Powerful winds from the accretion disk may eventually provide sufficient energy to turn the failed SN into a successful one, exploding the remainder of the star @xcite. In successful SNe, accretion from the "fallback" of the fraction of material remaining bound can be significant as well. The fallback may influence the resulting nucleosynthesis @xcite or delay the pulsar mechanism in a young proto-neutron star such as in SN 1987A @xcite. Early-time fallback may also provide a link between the explosion mechanism and the remnant mass distribution @xcite or alter the radiated neutrino spectrum @xcite. For red supergiant (RSG) progenitors with typical explosion energies (@xmath6), the fallback mass is relatively small (@xmath7). However, in more compact stars (e.g., blue supergiants, BSGs), the formation of a strong reverse shock at the H/He interface can decelerate the ejecta and enhance the fallback mass to @xmath8 @xcite. For weak explosions, most of the star may fall back, with only a small fraction of the mass ejected in a dim SN @xcite. While the dynamics of supernova fallback have been studied in various contexts, little has been said about how fallback may impact the optical SN light curve. The energy released from fallback accretion may profoundly affect what we observe, if two conditions are met.
First, the accretion energy must be injected at relatively late times (@xmath9 days); otherwise it will be largely degraded by adiabatic expansion. Such late-time accretion may be possible for progenitors with extended envelopes, or for those where a reverse shock develops and gradually slows the inner layers of ejecta. Second, the accretion energy must be thermalized within the SN ejecta. This is likely to occur if the energy injection takes the form of a nearly isotropic disk wind. If, on the other hand, the energy is in a beamed relativistic jet, we must consider whether the jet can break out of the ejecta (perhaps producing a GRB) or whether it is trapped and thermalized in the interior. When these two conditions are met, fallback should produce a peculiar optical light curve, powered directly by the accretion energy. We study the impact of late-time fallback accretion on SN light curves, and suggest that the wide range of potential events, from sub-luminous to super-luminous, may be of relevance in explaining recent observations of peculiar SNe. In [accenergy], we crudely estimate the efficiency of fallback-accretion-driven outflows. We numerically calculate accretion rates for a wide range of stellar progenitors to explore the variety of outcomes for accretion-powered supernova light curves ([outcomes]), including sample light curves and their comparison to some recent unusual events ([sec:candidate-events]). We also attempt to address the various requirements for these events to occur in nature: the interaction of the outflows with both the infalling material and the outgoing ejecta ([interactejecta]), and angular momentum and disk formation ([angmom]). The major results are summarized in [summary].
At both low and high accretion rates compared to @xmath10, where @xmath11 is the Eddington luminosity, accretion flows onto compact objects become hot and geometrically thick due to their inability to cool efficiently (@xmath12). Such radiatively inefficient accretion flows are expected to produce large-scale outflows @xcite and/or Poynting-flux-dominated jets @xcite. This behavior is observed in the accretion flow onto the Galactic center black hole, Sagittarius A*, where the accretion rate at the Bondi radius (e.g., @xcite) is several orders of magnitude larger than that onto the black hole (e.g., @xcite). The fallback accretion rate following a successful supernova explosion is highly super-Eddington and extremely optically thick to photons. For all timescales of interest here (@xmath13 s after the explosion), the disk is not dense enough to cool by neutrino emission @xcite. We expect, then, that it should be radiatively inefficient, geometrically thick, and should drive large-scale outflows. The resulting mass outflow rate can be estimated following @xcite by assuming that the accretion rate increases as some power of radius, @xmath14, where @xmath15 and @xmath16 are the accretion rate and radius at the outer disk edge, and @xmath17. We write the radius in units of the Schwarzschild radius, @xmath18. The outflow speed should be comparable to the escape speed, @xmath19, and the energy in each disk annulus is @xmath20, where @xmath21 parameterizes our ignorance of the outflow physics, such as the fraction of fallback mass that is blown out again. The actual value of @xmath22 is highly uncertain, but @xmath23 is a reasonable choice (e.g., @xcite), in which case the total outflow rate integrated over disk radius is @xmath24, where @xmath25 is the inner disk edge, either the black hole event horizon or the surface of the proto-neutron star.
For typical parameters we take @xmath26 (@xmath27) and @xmath28. These choices give an outflow energy @xmath29 with @xmath30. For the parameters considered here, other choices of @xmath31 give similar results. If the outflow is instead a jet launched from near the inner disk edge, the outflow energy is @xmath32, where a conventional choice is @xmath33, although depending on the accreted magnetic field geometry this value could be much larger @xcite. Since @xmath34, the resulting energy injection, @xmath35, would be nearly identical to the case of a disk wind. This is the scenario discussed for failed supernova explosions by @xcite and QK12. The results presented below depend only on the energy injection rate and thus are the same for either a disk or a jet. The results are also expected to be insensitive to whether the central object is a proto-neutron star or a black hole. In [interactejecta] we discuss the dissipation of accretion energy in the infalling material and outgoing ejecta, and its implications for the viability of wind and jet scenarios. We use @xmath36 throughout, although we discuss disk formation and size in [angmom]. The outflow energy, from either a wind or a jet, is then set by the fallback accretion rate.
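The wind energetics described above can be illustrated numerically. Below is a minimal sketch, assuming an ADIOS-style suppression of the accretion rate with radius (Mdot ∝ r^p) and an outflow carrying roughly the local escape-speed kinetic energy; the values of p, eta, the remnant mass, and the disk radii are illustrative choices, not the paper's parameters:

```python
import numpy as np

# Illustrative sketch (not the paper's exact expressions): wind power from a
# radiatively inefficient disk whose accretion rate is suppressed with
# radius, Mdot(r) = Mdot_d * (r/r_d)**p, with outflow speed ~ v_esc(r).
G = 6.674e-8           # cgs gravitational constant
M_SUN = 1.989e33       # g
C = 2.998e10           # cm/s

def wind_luminosity(mdot_d, m_central, r_in, r_d, p=0.5, eta=1.0, n=10_000):
    """Integrate dL = (eta/2) * v_esc(r)**2 * (dMdot/dr) dr over the disk."""
    r = np.geomspace(r_in, r_d, n)
    dmdot_dr = p * mdot_d * r ** (p - 1.0) / r_d**p   # d(Mdot)/dr, analytic
    integrand = 0.5 * (2.0 * G * m_central / r) * dmdot_dr
    # manual trapezoidal sum (np.trapz was renamed in NumPy 2)
    return eta * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

# example: ~1 Msun/yr of fallback onto a 3 Msun remnant, r_in = 3 R_s
m = 3.0 * M_SUN
r_s = 2.0 * G * m / C**2                     # Schwarzschild radius
L = wind_luminosity(M_SUN / 3.15e7, m, 3.0 * r_s, 1000.0 * r_s)
print(f"L_wind ~ {L:.2e} erg/s")
```

For p < 1 the integral is dominated by the inner disk edge, so L is roughly eta * G * M * Mdot_d / sqrt(r_in * r_d) for p = 0.5; at these illustrative rates the power is comparable to or above typical supernova luminosities, which is the point of the section.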
[Table [tab:events]. Model parameters; the numerical entries survive only as @xmath placeholders in this extraction. Columns: name, @xmath37 (@xmath38), @xmath39 (@xmath38), @xmath40 (@xmath41), @xmath42 (@xmath43), @xmath44 (@xmath38), @xmath45 (@xmath38), @xmath46 (km/s), @xmath47 (km/s), comparison. Comparison events: s33: SN 1998bw; s39: SN 2008D; s39o: SN 2010X; u45: SN 2008es; u60, z29: none.]

We estimate the fallback accretion rate by simulating supernova explosions using a 1D Lagrangian finite-difference hydrodynamics code. The code uses a staggered mesh and an artificial viscosity shock prescription @xcite. The artificial viscosity parameter is chosen to smooth shocks over @xmath90 zones. This is fairly diffusive, helping with code stability, but gives nearly identical results to much smaller coefficients. The Courant factor used is @xmath91, and the time step is set by the minimum required by any zone. The hydro code has been verified via comparisons to the Sedov-Taylor and 1D shock tube problems, by verifying that pre-supernova stellar models with no explosions remain in hydrostatic equilibrium, and by comparing the solutions for de-pressurized models with analytic free-fall solutions (eq. [eq:1]). The initial conditions are taken from a wide range of pre-supernova progenitor star models from @xcite. Three sets of models are considered: zero and solar metallicity progenitors with ZAMS masses of @xmath92, and @xmath93 progenitors with ZAMS masses of @xmath94.
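The staggered-mesh artificial viscosity mentioned above is, in textbook form, the von Neumann-Richtmyer quadratic prescription. A minimal sketch follows; the paper does not show its exact implementation, so the form and coefficient here are the standard ones, not a transcription:

```python
import numpy as np

# Von Neumann-Richtmyer artificial viscosity for a staggered Lagrangian
# mesh: zone quantities (rho) live between node quantities (v). The
# coefficient c_q controls how many zones a shock is smoothed over.
def artificial_viscosity(rho, v, c_q=2.0):
    """Quadratic viscous pressure q_i for zone i between nodes i and i+1.

    rho : zone densities, shape (n,)
    v   : node velocities, shape (n+1,)
    Returns q, positive only where the zone is compressing (dv < 0).
    """
    dv = np.diff(v)                              # velocity jump across each zone
    return np.where(dv < 0.0, c_q**2 * rho * dv**2, 0.0)

# a compressing zone gets positive q; an expanding zone gets none
rho = np.array([1.0, 1.0])
v = np.array([1.0, 0.0, 1.0])   # first zone compressing, second expanding
print(artificial_viscosity(rho, v))   # -> [4. 0.]
```

The viscous pressure q is simply added to the gas pressure in the momentum and energy updates, which spreads shock jumps over a few zones and suppresses post-shock ringing.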
The lower-mass solar metallicity progenitors retained their hydrogen envelopes and tended to be red supergiants (@xmath95), while those in the high mass range were bare helium or C/O stars (@xmath96). Low-metallicity stars tended to be blue supergiants of smaller radii (@xmath97). Due to large uncertainties in prescriptions for semi-convection, convective overshoot, and mixing, we view the very low metallicity models as alternative outcomes for possible massive star progenitors rather than as necessarily corresponding to extremely metal-poor environments. Explosions are simulated using a moving inner boundary (piston, e.g., @xcite). For the first @xmath72 s the boundary moves inwards from the location where the specific entropy @xmath98 to @xmath99, after which time it moves outwards at constant velocity. The inner boundary velocity is set to zero either after a specified amount of time or after the internal energy has changed by the desired amount. The resulting explosions are fairly insensitive to the piston velocity as long as it is large enough to deposit the desired amount of energy in a few seconds. We use the same number of radial zones for the hydrodynamics calculations as were used for the stellar evolution (@xmath100), but verify that doubling the number of zones and interpolating using the nearest neighbor from the initial condition does not lead to significant changes in the evolution. An outflow boundary condition is employed by copying the acceleration from the outer zone to a single ghost zone. The inner boundary condition can be either a hard (reflective) piston or inflow, depending on the time. Initially, the piston is used to blow up the star, after which the inner boundary velocity is set to zero. To allow inflow, once the velocity of the inner zone drops below zero, it is copied to the inner boundary.
This allows us to use the inner boundary to blow up the star and to record the accretion rate once material begins to fall back. When the inner zone passes through the radius corresponding to the assumed outer disk edge, @xmath101 cm, its properties are saved and it is removed from the calculation. The outside of the accreted zone then becomes the inner boundary for the subsequent evolution. We show in figure [pvd] an example of the @xmath102 erg explosion of a @xmath103, zero-metallicity stellar progenitor. In this star, a significant density discontinuity at the helium/hydrogen interface (@xmath104 cm) as well as the compact hydrogen envelope (@xmath105) lead to a strong reverse shock forming at @xmath106 s. The two-shock structure can be seen clearly in the curves of @xmath107 and @xmath108. The filled circles show the results using a pure piston inner boundary condition, while the solid lines show the results for the "fallback" boundary condition (i.e., the piston switched to inflow after the shock was initiated). The two are in excellent agreement in the portion of the star with @xmath109, but differ slightly in the inner regions, as pressure support slows the infall in the pure piston model. In the fallback calculation, the reverse shock turns around and leads to a jump in the accretion rate at @xmath110 s. This calculation is very similar to that shown in figure 1 of @xcite, and the solutions are in qualitative agreement. The quantitative differences are likely due to a difference in progenitor models. We follow the explosions until late times (@xmath111 s), and calculate the accretion rate through the inner boundary from the properties of accreted zones. Sample accretion rate curves are shown in figure [mdot]. In some cases, we find large accretion rates (@xmath112) for a week or so after the explosion.
This late-time accretion is due either to the fallback of stellar layers at large radii, or to the deceleration of inner layers by the reverse shock. The energy associated with accretion at these rates is sufficient to power luminous supernova light curves. The general behavior of the fallback accretion rate can be easily understood in the two limits where the material is either highly bound or mildly bound. For the highly bound material (i.e., those layers where the velocity following the shock propagation is much less than the escape velocity), the accretion rate can be estimated from the free-fall time, @xmath113, from each radial and mass coordinate in the progenitor star: @xmath114. For an approximately power-law density profile in a particular shell of a star, @xmath115, the enclosed mass is @xmath116 (for @xmath117), and the fallback accretion rate is @xmath118, where @xmath119 (cf. eq. 2 of QK12). For @xmath120, the enclosed mass is roughly constant, and the accretion rate is @xmath121, where now @xmath122. In this way, the free-fall accretion rate is set by the density profile of the progenitor star. In the other limit of mildly bound material with @xmath123, the maximum radius, @xmath124, becomes much larger than the initial one, @xmath125. Then the asymptotic fallback rate, @xmath1, applies @xcite. This asymptotic scaling applies at the latest times in all three curves in figure [mdot]. Using the ballistics solution from @xcite, we can bridge these two asymptotic limits to analytically estimate the fallback accretion rate at all times for comparison with our numerical calculations. For each mass shell, the downstream shock velocity is taken from the analytic formulae in @xcite, which are typically an excellent approximation to the numerical calculations. Then the total fallback time for each mass element can be calculated from eq. 3.7 of @xcite, and its time derivative is an approximate accretion rate.
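The free-fall scaling in the highly bound limit can be checked with a toy calculation: for a power-law envelope rho ∝ r^(-n) falling from rest, the fallback rate should approach Mdot ∝ t^(6/n - 3) once the enclosed envelope mass dominates the central remnant. All numbers below are illustrative, not drawn from the paper's progenitor models:

```python
import numpy as np

G = 6.674e-8  # cgs

# Toy free-fall fallback rate for a power-law envelope
# rho(r) = rho0 * (r/r0)**(-n) around a central remnant.
# Point-mass free-fall time from rest: t_ff = (pi/2)*sqrt(r**3/(2 G M(<r))).
def fallback_rate(n=2.5, rho0=1e-3, r0=1e12, m_remnant=1.989e33, n_shells=4000):
    r = np.geomspace(0.01 * r0, r0, n_shells)
    rho = rho0 * (r / r0) ** (-n)
    dm = 4.0 * np.pi * r**2 * rho * np.gradient(r)   # shell masses
    m_enc = np.cumsum(dm) + m_remnant                # enclosed mass
    t_ff = 0.5 * np.pi * np.sqrt(r**3 / (2.0 * G * m_enc))
    mdot = dm / np.gradient(t_ff)                    # dm/dt_ff, shell by shell
    return t_ff, mdot

t, mdot = fallback_rate(n=2.5)
# late-time logarithmic slope; analytics give 6/n - 3 = -0.6 for n = 2.5
# once the envelope mass dominates the remnant
slope = np.polyfit(np.log(t[-1000:]), np.log(mdot[-1000:]), 1)[0]
print(f"measured slope {slope:.2f} (analytic {6.0 / 2.5 - 3.0:.2f})")
```

This is the "falling from rest" limit only; the mildly bound material that produces the asymptotic @xmath1 tail requires the ballistic treatment described in the text.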
This assumes that pressure effects are negligible, which is incorrect. However, the true acceleration measured from the numerical calculations described below turns out to be roughly constant at half of the gravitational acceleration. This ballistic estimate reproduces the fallback accretion rate at all times in many progenitors. However, in some cases (particularly blue supergiants such as SN 1987A's progenitor, @xcite) the reverse shock formed at the hydrogen-helium interface is strong enough to decelerate portions of the ejecta below the escape speed. This enhances the accretion rate at late times, and can significantly add to the remnant mass @xcite. The reverse shock formation and evolution is analogous to that of the shock formed when the forward shock breaks out of the star and into the interstellar medium (e.g., @xcite). As the simplest possible reverse shock prescription, we solve the strong shock jump conditions for the reverse shock velocity and the downstream velocity at the boundary of the @xmath126 helium and hydrogen layers: @xmath127, where @xmath128 is the shock velocity. The reverse shock velocity evolves in time as the densities in both the expanding ejecta and the unshocked hydrogen envelope change, and eventually it turns around. For simplicity, we ignore this and take @xmath129 to be constant at its initial value. Then the location of the intersection between the ejecta and the reverse shock can be found, as well as the resulting ballistic @xmath130 for material that is recaptured after passing through the reverse shock. The reverse shock prescription is important for the z29 curve in figure [mdot]. This approximate semi-analytic description does a reasonable job of reproducing the numerical calculations in all cases. The largest disagreement is in the reverse shock cases, where the semi-analytic accretion rate overestimates (underestimates) the numerical results at early (late) times.
For the remainder of the paper, we use the results from the numerical fallback calculations. We detail here the possible outcomes of supernova light curves powered by accretion energy. We first assume that a supernova explodes via the traditional core collapse mechanism, whatever that may be. For each progenitor, we ran explosions with a variety of energies, in the range @xmath131 to @xmath132 erg, in order to explore the full range of possible outcomes. Only explosions with positive total energy of non-accreted material at @xmath133 are considered, and the resulting remnant vs. initial mass distribution from these explosions is in excellent agreement with @xcite. The ejection of some stellar layers and the fallback of others is then calculated numerically as described in [sec:numer-calc], which determines the energy input rate from fallback. We then calculate approximate one-zone light curves using the methods described in appendix [sec:light-curve-modeling]. For these calculations, we need the effective diffusion time through the homologously expanding ejecta @xcite, @xmath134, where @xmath135 is the total outflow mass, @xmath136 is the injected accretion energy, and @xmath137 is the final ejecta velocity. Note that there is an ambiguity in determining @xmath138, depending on the interpretation of the fudge factor @xmath21. If @xmath21 indicates the fraction of outflow mass that interacts with the supernova ejecta, then the above expression for @xmath138 applies. If, on the other hand, the mass transfer to the ejecta is more efficient while the specific energy of the outflow is lower, @xmath138 could be significantly larger. We assume a constant opacity @xmath139, appropriate for electron scattering for fully ionized elements heavier than hydrogen. This is clearly a coarse approximation, as the actual opacity will depend on the composition and the presence of Doppler-broadened lines.
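Up to the order-unity prefactor fixed by the one-zone model, the effective diffusion time scales as t_d ~ sqrt(kappa * M_ej / (c * v_f)). A quick sketch using the electron-scattering opacity of 0.2 cm^2/g quoted above; the masses and velocities are illustrative, not the paper's models:

```python
import numpy as np

# Rough effective diffusion time for homologously expanding ejecta,
# t_d ~ sqrt(kappa * M_ej / (c * v_f)), omitting the order-unity
# geometric prefactor of the full one-zone model.
C = 2.998e10       # cm/s
M_SUN = 1.989e33   # g
DAY = 86400.0

def diffusion_time_days(m_ej_msun, v_f_kms, kappa=0.2):
    m = m_ej_msun * M_SUN
    v = v_f_kms * 1e5
    return np.sqrt(kappa * m / (C * v)) / DAY

for m_ej, v_f in [(1.0, 10_000.0), (10.0, 3000.0)]:
    print(f"M_ej={m_ej} Msun, v_f={v_f} km/s -> "
          f"t_d ~ {diffusion_time_days(m_ej, v_f):.0f} d")
```

The two illustrative cases bracket the fast, low-mass events and the slow, massive ones discussed below: t_d shrinks both with decreasing ejecta mass and with the increased velocities that fallback energy produces.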
The effects of recombination on the opacity are, however, included in an approximate way (appendix [sec:light-curve-modeling]). While our one-zone light curve models account for the acceleration of the ejecta due to the input accretion energy, they lack any information on the radial structure of the ejecta. The radiation hydrodynamical calculations of KB10 show that energy deposition at the base of the ejecta (in that case from a magnetar) blows a bubble in the inner regions, piling material up into a dense shell. We expect a similar effect in fallback-powered SNe, which will likely also induce an asymmetry if the energy deposition is anisotropic. For each of the light curves, we measure the time to peak, @xmath140, and the peak luminosity, @xmath141. The results are shown in figure [lptpl] for @xmath36. Each point represents a single explosion energy and progenitor model, color-coded by the radius of the pre-supernova star: red for @xmath142 (RSGs), purple for @xmath143, blue for @xmath144 (BSGs), and green for @xmath145 (He or C/O stars). This radius also corresponds to the zero-age main sequence metallicity: solar without significant mass loss for RSGs, zero for BSGs, @xmath93 for stars in between, and solar with large amounts of mass loss for compact He and C/O stars. Events are plotted only if @xmath141 is larger than the thermal supernova luminosity, @xmath146. The number of points is then set by the number of explosion energies and progenitor models, as well as the fraction of cases where that condition is met. The number of points does not represent an expected rate, since both the choices of explosion energies and progenitor models are arbitrary. Figure [lptpl] illustrates the wide range of light curves that may result when fallback power is included. Many of the successful explosions with energies @xmath147 lead to events with @xmath148 days, @xmath149.
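A generic one-zone model of this kind can be integrated directly: the internal energy loses E/t to adiabatic expansion in homologous flow, radiates E*t/t_d^2, and gains a fallback-like injection L ∝ t^(-5/3). This is a sketch of the standard Arnett-type scheme, not the paper's exact calculation, and the parameter values are arbitrary illustrations:

```python
import numpy as np

DAY = 86400.0

# One-zone (Arnett-type) light-curve sketch with power-law energy
# injection. Forward-Euler is adequate here because the time step is
# much shorter than all of the rate timescales involved.
def light_curve(t_d=30.0 * DAY, l0=1e44, t0=1.0 * DAY,
                t_end=300.0 * DAY, n=300_000):
    t = np.linspace(t0, t_end, n)
    dt = t[1] - t[0]
    e = l0 * t0                          # crude initial thermal energy
    lum = np.empty(n)
    for i, ti in enumerate(t):
        l_in = l0 * (ti / t0) ** (-5.0 / 3.0)   # fallback-like injection
        l_rad = e * ti / t_d**2                 # radiated luminosity
        lum[i] = l_rad
        e += (l_in - e / ti - l_rad) * dt       # energy equation update
    return t, lum

t, lum = light_curve()
i = int(lum.argmax())
print(f"t_peak ~ {t[i] / DAY:.0f} d, L_peak ~ {lum[i]:.1e} erg/s")
```

The peak falls near t_d, and because energy injected well before t_d is degraded roughly as 1/t by expansion, the light curve shape is insensitive to the detailed early-time behavior of the injection, consistent with the discussion in the text.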
The long durations are similar to those of Type II plateau SNe, a result of the large ejecta masses and correspondingly long diffusion times. The final velocities of these events are also fairly typical of core-collapse supernova explosions (@xmath150). This is because the amount of fallback is much less than the ejecta mass, so the fallback energy does not appreciably change the total kinetic energy of the explosion. For smaller ejecta masses, the fallback energy can dominate the total explosion energy, significantly increasing the final velocity. The diffusion timescale therefore decreases with decreasing ejecta mass, both from the smaller total mass and because of the increasing final ejecta velocity. These effects lead to a strong scaling of @xmath141 with @xmath47, shown in figure [tpvf]. The roughly @xmath151 dependence can be recovered by assuming that the fallback energy always dominates the supernova energy (@xmath152), while the fallback mass contributes negligibly to the ejecta mass. Furthermore, the scaling assumes that the total fallback energy scales with the peak luminosity (@xmath153), which is true if the accretion rate at late times scales with its integral over all times. The apparent maximum in @xmath154 is from the case where the fallback mass and energy dominate those of the supernova explosion: @xmath155 for our standard parameters. In the context of the simple outflow models described in section [accenergy], this maximum velocity scales as @xmath156. The considerable scatter in figure [tpvf] is from the breakdown of the above assumptions. Different classes of progenitor stars lead to different outcomes in figure [lptpl]. First, solar metallicity RSG progenitors for the most part lead to relatively low luminosity events (@xmath157). At high ZAMS masses, these stars undergo substantial mass loss and become stripped He or C/O stars. These progenitors can lead to events with @xmath158, @xmath159.
These could potentially explain broad-line Type Ibc GRB SNe: high velocities are a natural outcome of the injection of large amounts of fallback energy. Example fits are shown in figure [events] for SN 1998bw @xcite and SN 2008D @xcite. In the context of the collapsar model, this suggests that the central engine could be responsible for all of the observed properties: early-time accretion leading to black hole formation, the GRB, and the initial supernova explosion; and late-time accretion powering the resulting light curve and the large expansion velocities. BSG progenitors lead to two classes of outcomes depending on the explosion energy. At low explosion energies, they can produce luminosities @xmath160 and peak times of @xmath161 days. The short durations are from the very small ejecta masses, @xmath162, with nearly all of the star falling back. For the @xmath163 used here, the wind mass is comparable to the ejecta mass, and the injected fallback energy is much larger than the initial explosion energy. This leads to large final velocities and short diffusion times. Events with @xmath164 days can have light curve shapes very similar to observed luminous Type II-L events. An example fit to the superluminous Type II-L SN 2008es @xcite is shown in figure [events]. At high explosion energies, BSG progenitors lead to a range of long-duration events with @xmath165 days, @xmath166. The most luminous cases are either from very massive stars (@xmath167) at low metallicity or from zero-metallicity stars with strong reverse shocks. In both cases, the ejecta masses are @xmath168 with low expansion velocities, @xmath169. Subluminous Type I and II events are possible on a variety of timescales. As an example, figure [events] shows a comparison of a Type I explosion with the transient SN 2010X @xcite. The steep decay in this case requires that the accretion turn off about 7 days after the explosion (see [interactejecta]).
Approximate light curves from examples of each of these types of events are shown in figure [lcurves], along with photospheric temperatures and velocities. The model parameters are listed in table [tab:events]. The photospheric temperature is taken from the one-zone light curve calculations (see appendix [sec:light-curve-modeling]). The photospheric velocity is taken to be the maximum of @xmath47 and the photospheric velocity in the expanding ejecta in the absence of injected accretion energy. For light curves with recombination, the photospheric properties are meaningless after the plateau phase, since then formally the ejecta are completely optically thin. For relatively short events, the expansion velocities are high (@xmath170), and the fallback energy sets the velocity since the ejecta mass is small (@xmath171). Much slower velocities occur in the longer duration events with large ejecta masses (@xmath168). The photospheric temperatures are very high at peak in II-L type events (@xmath172). When recombination is not important, the light curves are in excellent agreement with the semi-analytic formula in eq. ([powerlawlc]) for a power-law injection of energy with @xmath173. This is because in nearly all cases the late-time accretion rate falls as @xmath1, while any energy injected on timescales @xmath174 day is lost to adiabatic expansion, so that its time dependence does not influence the light curve. Gravitational energy liberated through fallback accretion at late times can power unusual supernova light curves (figures [events] and [lcurves]). The calculations in this paper have made many simplifying assumptions; we discuss here some of the uncertainties. We have treated the explosion of stars with crude 1D hydrodynamic calculations using a piston. This method has frequently been used to simulate core collapse supernova explosions and the resulting fallback (e.g., @xcite), and the uncertainties in the numerically calculated fallback accretion rates are probably smaller than those in the outflow physics and/or parameters (e.g., @xmath175). The light curve calculations further assume simple one-zone prescriptions for the bolometric luminosity and photospheric temperature. More sophisticated techniques would be required for spectral calculations. The density structures of the pre-supernova models, which directly impact the fallback rate, depend sensitively on uncertain prescriptions for convection (semi-convection and overshoot) and compositional mixing in stellar evolution calculations (e.g., @xcite). Probably a bigger issue is that the calculations here are based on a limited set of stellar progenitors, and ignore the effects of rotation and binarity, which may be very common in massive stars (e.g., @xcite). There may be additional variety in the range of possible fallback-powered transients from stellar progenitors not considered here. Further, we have assumed that the stellar material that falls back after the explosion has sufficient angular momentum to form a disk, and that this disk can efficiently drive a massive wind and/or an ultrarelativistic jet. These are both important open questions. The angular momentum distribution and surface rotation rates of massive stars remain highly uncertain @xcite. Although previous studies have found prominent polar outflows from geometrically thick black hole accretion flows @xcite, more recent calculations have found large-scale circulations to be more common than unbound massive winds @xcite. If so, ultrarelativistic jets may be a more natural explanation for injecting energy into the ejecta. Finally, we have assumed that this outflow can thermalize in the outgoing supernova ejecta without expelling infalling material and halting accretion.
We outline the requirements below to satisfy these assumptions, and estimate in a few sample cases the required rotation rates for disk formation.

[Figure [tu45] caption: Regions of @xmath176 vs. @xmath177 parameter space for model z29 where i) the outflow cannot escape the accreting material before depositing most of its energy (eq. [tfb]), ii) the outflow escapes the outgoing supernova ejecta before losing most of its energy (eq. [tej]), and iii) the energy deposited in the accreting material by the outflow exceeds its binding energy (eq. [edep]). The remaining parameter space is where an outflow could plausibly power a supernova light curve without shutting off continuing accretion. For @xmath178, the outflow is arbitrarily changed from a wind (@xmath179) to an ultrarelativistic jet (@xmath180). Constraints i) and ii) fix the range of allowed @xmath176 for any disk formation time, @xmath181, while constraint iii) sets the time at which fallback accretion will stop (@xmath182).]

In order for a fallback-accretion-powered outflow (either an ultrarelativistic jet with speed @xmath180 or a massive wind with @xmath183) to power a supernova light curve, it must be able to: i) escape the remaining infalling material, ii) without unbinding it, and iii) thermalize in the outgoing supernova ejecta. To order of magnitude, we assess the plausibility of this scenario as follows. Following QK12, we assume a magnetically dominated outflow, collimated with an opening angle @xmath176. The propagation of the outflow through the remaining bound fallback is similar to the propagation of a jet through a host star during a long GRB (e.g., @xcite). The speed of the head of the collimated outflow, @xmath184, is determined by pressure balance between the outflow and the host star (eq. 4 of QK12): @xmath185, where @xmath186 is the maximum radius of bound material and @xmath187 its total mass. The outflow escape timescale is then @xmath188.
the outflow also drives a lateral shock into the surrounding material , whose speed is approximately ( qk12 ) @xmath189 where @xmath190 is the efficiency of depositing outflow energy in the surrounding material . if the outflow is dominated by toroidal magnetic field ( e.g. , in a helical jet or outflow from a rotating disk ) , a typical value from numerical calculations is @xmath191 @xcite . for the outflow to escape , @xmath192 should be shorter than the time for the lateral shock to envelop the star , @xmath193 . for interactions with the remaining bound fallback material , this ratio is : @xmath194 in addition to requiring @xmath195 , continued accretion requires that the energy deposited , @xmath196 , should be less than the binding energy of the remaining fallback material , where @xmath181 ( @xmath182 ) is the time after explosion at which the outflow turns on ( off ) . conversely , for the outflow energy to be deposited efficiently in the ejecta , the outflow escape time should be longer than the energy deposition time . since the mechanism for thermalizing the outflow energy and its associated timescale are unknown , we can instead use the same comparison of @xmath197 and @xmath192 as above . in this case , @xmath46 is larger than @xmath184 , and the escape time can be estimated by setting @xmath198 and finding when @xmath199 ( qk12 ) : @xmath200 similarly , we can find the ratio @xmath201 under the same assumptions . the result is : @xmath202 the requirements i ) @xmath203 , ii ) @xmath204 , and iii ) @xmath205 amount to constraints on the outflow opening angle , @xmath176 , and the time over which fallback accretion can continue . excluded regions of @xmath176 vs. @xmath177 parameter space from enforcing these constraints for model z29 are shown in figure [ tu45 ] .
for the interaction of accretion energy with remaining fallback material , we find the maximum radius reached at time @xmath177 by material that will ultimately accrete ( @xmath206 ) , and its remaining total mass ( @xmath138 ) and binding energy ( @xmath207 ) . for the interaction of the accretion energy with the ejecta , we use @xmath44 and @xmath208 estimated at time @xmath177 . the timescale constraints essentially place limits on @xmath176 for each type of outflow for all disk formation times , @xmath181 : at small ( large ) opening angles , the outflow escapes ( is captured ) . these ratios also depend on the other quantities , leading to differences between the various models . generally , smaller @xmath209 leads to higher ejecta densities , which help to trap the outflow . outflows escaping the ejecta before thermalizing could appear as long duration , high energy transients ( qk12 , * ? ? ? * ) . outflows trapped in the material still falling back would likely deposit energy there more effectively , either unbinding the material or prolonging its accretion to later times . at late times , the outflow will unbind any remaining material , shutting off further accretion . this is because the energy deposition into the accreting material at late times scales as @xmath210 , while its binding energy scales as @xmath211 . equating these gives the turn off time ( @xmath182 ) for each event . this turn off time tends to be shorter in higher energy explosions , since the bulk of the late time accretion comes from loosely bound material . sample light curves for events where accretion shuts off are shown in figure [ lcurveonoff ] for a range of @xmath182 , assuming a constant opacity . once the injected energy runs out , the light curve decays according to eq . ( [ freelc ] ) , but with an initial luminosity @xmath212 .
this may be particularly relevant for long duration transients in models like u60 and z29 , where the turn off time ( @xmath213 days for z29 ) is likely to be comparable to the time to peak . although these estimates demonstrate the plausibility of accretion - driven outflows powering supernova light curves , detailed physical calculations will be required to assess this scenario accurately . further , requiring that the outflow can not escape the ejecta does not by itself guarantee an efficient means of thermalization , since we have assumed @xmath214 . using a larger value of @xmath215 would shift the range of allowed opening angles to favor relativistic jets , and lead to the outflow unbinding the accreting material at proportionally earlier times . the efficiency of thermalization depends on how exactly energy is transported from the accretion disk to the supernova ejecta . we have assumed that this mechanism is a highly magnetized disk wind or an ultra - relativistic jet . if instead the wind is not highly magnetized , a double ( forward / reverse ) shock structure will form when it catches up with the slowly moving inner layers of the supernova ejecta ( kb10 ) . the situation is analogous to the common case of supernovae interacting with circumstellar material , only here the interaction happens inside , rather than outside , the remnant . in either case , shocks should be efficient in thermalizing the kinetic energy of the wind . some recent semi - analytic @xcite and numerical @xcite calculations of non - radiative accretion flows have found large - scale circulations or convective motions @xcite as well as or instead of outflows . this may also be a relevant mechanism for transporting accretion energy to large radius . if the accretion energy can not efficiently thermalize , it will likely still lead to high ejecta velocities @xmath216 .
in the case of an event like s33 , this could still explain broad line type ibc supernovae : radioactivity would power the light curve and accretion energy would lead to the high observed photospheric velocities . this is also a possible outcome of early time accretion onto a magnetar @xcite . given the viable range of disk formation times for fallback accretion powered supernovae [ interactejecta ] and the initial stellar radii accreting at those times , we can calculate the required angular velocity . curves for models z29 , u45 , and s33 are shown in figure [ vstar ] for forming disks at radii from @xmath217 , or @xmath218 for a @xmath219 black hole . naively assuming rigid rotation , in all cases disks can form at the required times without exceeding breakup at the outer edge of the star . the required rotation rates essentially scale with explosion energy : for large explosion energies , the envelope is expelled , and larger rotation rates are required for the disk to form from material that was originally at smaller radius . under this assumption , we can also calculate the maximum disk size from fallback accretion , and the corresponding viscous time , @xmath220 , where @xmath221 is the accretion flow scale height and @xmath222 is the standard dimensionless viscosity parameter in accretion theory @xcite . even with conservative assumptions ( @xmath223 , @xmath224 ) , this timescale only becomes larger than the fallback timescale for the highest rotation rates and early disk formation times @xmath225s . this is because assuming rigid rotation , the total disk size never greatly exceeds its formation radius . stellar cores spin up as they contract , and depending on the efficiency of angular momentum transfer from magnetic torques @xcite can transfer much of the core angular momentum to the outer layers ( e.g. , * ? ? ? if this mass is retained , as in our models from low metallicity progenitors ( e.g. 
, z29 and u45 ) , it will likely form a disk upon fallback for modest zams rotation rates . if instead this mass is lost ( e.g. , s33 ) , insufficient angular momentum may remain to form a disk . if the red supergiant is in a binary system , tidal interactions may be an efficient means to spin up the star sufficiently to cause disk formation even if the envelope is lost during subsequent evolution @xcite . the latter scenario may be fairly common , given the frequency of massive stars in binaries @xcite . these scenarios should be considered in more detail in future work . the accretion power released when material falls back onto a compact remnant at late times could power unusual supernova light curves . we have explored the consequences for a variety of progenitors and explosion energies , using numerical calculations of the fallback accretion rate and order of magnitude estimates of the resulting energy injection . while most of the fallback typically occurs at early times , it may be significant at late times in very massive stars , for low explosion energies , or when a strong reverse shock forms at the hydrogen / helium boundary . we have demonstrated that it is plausible that , under certain circumstances , the energy available from accretion could power an outflow which then thermalizes in the supernova ejecta . the events we have described are different and more diverse than those previously studied as `` fallback supernovae '' . @xcite , for example , considered the case of massive star collapse in which most of the material fell into the central black hole and only a fraction was ejected . because they also assumed that the surrounding medium was very dense and extended ( due to mass loss prior to explosion ) , the supernova shock wave did not break out of the circumstellar gas until late times . the result was a dim , shock - powered transient lasting from weeks to months .
@xcite similarly considered the case in which most of the star fell back and only a very small amount ( @xmath226 ) was ejected . by assuming that this ejecta was enriched with @xmath227ni , they found a brief and sub - luminous radioactively powered transient similar to sn 2005e . both of these previous scenarios neglected the possible input of accretion energy from fallback ( i.e. , they assumed @xmath228 ) . as we have shown , accretion may re - energize the ejecta at late times and hence power much brighter emission . the power from fallback accretion may be relevant for explaining recently discovered classes of peculiar supernovae . these may include the type iil supernovae that are extremely luminous and of relatively short duration ( e.g. , * ? ? ? * ; * ? ? ? * ) as well as those that are moderately bright and of very long duration ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ? several of the observed type ii events , however , also show narrow hydrogen emission features in their spectra , indicating that interaction with a dense circumstellar medium is occurring and may be responsible for the luminosity . many of the predicted type ii events with @xmath229 erg explosion energies have very long times to peak ( @xmath230 days ) . accretion energy is likely to unbind the remaining infalling material on a comparable timescale , turning off the power source for the light curve ( [ interactejecta ] ) . we therefore predict that these events could be seen as very bright type ii supernovae that disappear suddenly from view . in general , late time turn off would be an observational signature of fallback accretion powered supernovae . fallback accretion could also power very bright type i events . the models considered in this paper reached peak luminosities of @xmath231 , similar to the broad - lined sne ic like sn 1998bw . 
if the accretion efficiency is assumed to be higher than our fiducial case , it is possible for some events to reach luminosities @xmath232 , in which case fallback could power the super - luminous hydrogen poor events such as sn 2005ap @xcite . another effect that may produce super - luminous events like sn 2005ap involves mass loss . some events considered here ( e.g. , z29 ) are brightened considerably by enhanced accretion from material decelerated by a reverse shock forming at the h / he interface . a similar outcome could occur in both type i / ii events where the progenitor has experienced considerable mass loss shortly before explosion . in this case , the reverse shock would be formed when the outgoing shock wave reaches the interface between the progenitor star and the massive wind or ejected shell . the subsequent inward propagation of the reverse shock could lead to order of magnitude increases in the fallback accretion rate at later times . interaction of supernova ejecta with circumstellar shells at radii @xmath233 cm is commonly considered to explain super - luminous supernovae via thermalization of the kinetic energy ( e.g. , * ? ? ? * ) . surprisingly , interaction with circumstellar material at much smaller radii ( @xmath234 cm ) may also lead to a super - luminous event , but by very different means : by enhancing fallback accretion and feeding the central compact object at late times . also of potential relevance to the fallback scenario are dimmer supernovae that decline very rapidly after peak . the short duration of these events makes it difficult to explain them as radioactively powered transients . in the case of sn 2002bj at least , the mass of @xmath227ni inferred from the light curve peak exceeds the total ejecta mass inferred from the light curve duration ( diffusion time ) , seemingly ruling out a radioactively powered event .
in the fallback scenario , short lived transients are possible , especially if the energy injection from accretion cuts off the fallback abruptly ( figure [ lcurveonoff ] ) . the true range of possible light curves powered by fallback accretion is likely much larger than shown in figure [ lptpl ] . those events are limited both by the variety of progenitor models and by our neglect of the fallback accretion physics . the latter could conceivably lead to variations in @xmath175 in either direction : smaller disks and/or more efficient thermalization of the outflow could lead to higher peak luminosities @xmath235 . conversely , lower efficiencies could help explain a wider variety of sub - luminous supernovae ( e.g. , sn 2008ha , * ? ? ? * ) . further modeling is needed to identify the observational signatures of fallback accretion powered supernovae , and to determine how we might distinguish these events from other means of generating unusual light curves . possible signatures include a tail with @xmath236 at late times compared to the diffusion time ( but before the ejecta become completely optically thin ) , somewhat different from those of radioactivity or magnetar spindown . more promisingly , at late times ( @xmath237 days ) it seems likely that the accretion energy will unbind the infalling material ( [ interactejecta ] ) , shutting off accretion and leading to a sudden decrease in luminosity . if instead accretion continues for decades after the explosion , the black hole could emerge as an observable x - ray source @xcite , as has been suggested for sn 1979c @xcite . the conditions for fallback to influence the supernova light curve are apparently quite special , as the scenario requires sufficient angular momentum to form a disk and an evolution that permits fallback to persist long enough to drive energetic outflows at late times . such a confluence of factors may be rare in the universe .
on the other hand , observational surveys show that the rate of peculiar sne , in particular the rate of very luminous ones , is only a small fraction of that of standard core collapse events . it is possible that fallback power plays a role in some of these spectacular events . we thank a. heger for making a large number of pre - supernova stellar models publicly available . jd thanks l. bildsten , b. metzger , c. ott , t. piro , e. quataert , e. ramirez - ruiz , and s. woosley for stimulating discussions related to this work . this work is supported by the director , office of energy research , office of high energy and nuclear physics , divisions of nuclear physics , of the u.s . department of energy under contract no . de - ac02 - 05ch11231 , and by a department of energy office of nuclear physics early career award . , w. d. 1979 , , 230 , l37 . 1982 , , 253 , 785 , f. k. , et al . 2003 , , 591 , 891 , s. , zampieri , l. , & shapiro , s. l. 2000 , , 541 , 860 , m. c. 2012 , , 420 , 2912 , m. c. , & cioffi , d. f. 1989 , , 345 , l21 , r. d. , & begelman , m. c. 1999 , , 303 , l1 bucciantini , n. , quataert , e. , arons , j. , metzger , b. d. , & thompson , t. a. 2007 , mnras , 380 , 1541 , j. i. 2004 , radiation hydrodynamics ( cambridge , uk : cambridge university press ) , e. , et al . 2011 , , 729 , 143 chevalier , r. a. 1982 , apj , 258 , 790 . 1989 , apj , 346 , 847 , r. a. , & irwin , c. m. 2011 , , 729 , l6 colgate , s. 1971 , apj , 163 , 221 , j .- p . , hawley , j. f. , krolik , j. h. , & hirose , s. 2005 , , 620 , 878 , r. , wu , k. , johnston , h. , tzioumis , t. , jonker , p. , spencer , r. , & van der klis , m. 2004 , , 427 , 222 , r. j. , et al . 2009 , , 138 , 376 fryer , c. l. 2009 , apj , 699 , 409 , c. l. , belczynski , k. , wiktorowicz , g. , dominik , m. , kalogera , v. , & holz , d. e. 2012 , , 749 , 91 , c. l. , et al . 2009 , , 707 , 193 , a. 2012 , science , 337 , 927 , t. j. , et al . 1998 , , 395 , 670 , s.
, et al . 2009 , , 690 , 1313 , a. , woosley , s. e. , & spruit , h. c. 2005 , , 626 , 350 , i. v. , & abramowicz , m. a. 2000 , , 130 , 463 kasen , d. , & bildsten , l. 2010 , apj , 717 , 245 kasen , d. , & woosley , s. e. 2009 , apj , 703 , 2205 , m. m. , et al . 2010 , , 723 , l98 kochanek , c. s. , beacom , j. f. , kistler , m. d. , prieto , j. l. , stanek , k. z. , thompson , t. a. , & yksel , h. 2008 , apj , 684 , 1336 kohri , k. , narayan , r. , & piran , t. 2005 , apj , 629 , 341 lindner , c. c. , milosavljevic , m. , shen , r. , & kumar , p. 2011 , eprint arxiv , 1108.1415 macfadyen , a. i. , & woosley , s. e. 1999 , apj , 524 , 262 macfadyen , a. i. , woosley , s. e. , & heger , a. 2001 , apj , 550 , 410 , d. p. , moran , j. m. , zhao , j .- h . , & rao , r. 2007 , , 654 , l57 , c. d. 2003 , , 345 , 575 matzner , c. d. , & mckee , c. f. 1999 , apj , 510 , 379 , p. a. , et al . 2008 , science , 321 , 1185 mckee , c. f. 1974 , apj , 188 , 335 , j. c. 2006 , , 368 , 1561 , j. c. , tchekhovskoy , a. , & blandford , r. d. 2012 , arxiv e - prints michel , f. 1988 , nature , 333 , 644 , a. a. , et al . 2009 , , 690 , 1303 . 2010 , , 404 , 305 milosavljevic , m. , lindner , c. c. , shen , r. , & kumar , p. 2010 , eprint arxiv , 1007.0763 , i. f. , & rodrguez , l. f. 1998 , , 392 , 673 moriya , t. , tominaga , n. , tanaka , m. , nomoto , k. , sauer , d. , mazzali , p. , maeda , k. , & suzuki , t. 2010 , apj , 719 , 1445 , t. j. , blinnikov , s. i. , tominaga , n. , yoshida , n. , tanaka , m. , maeda , k. , & nomoto , k. 2012 , arxiv e - prints , r. , sadowski , a. , penna , r. f. , & kulkarni , a. k. 2012 , arxiv e - prints , r. , & yi , i. 1994 , , 428 , l13 , d. j. , loeb , a. , & jones , c. 2011 , new astronomy , 16 , 187 , u .- l . , matzner , c. d. , & wong , s. 2003 , , 596 , l207 perets , h. b. , et al . 2009 , eprint arxiv , 0906 , 2003 piro , a. l. , & ott , c. d. 2011 , apj , 736 , 108 , d. v. 1993 , , 414 , 712 , d. , et al . 
2010 , science , 327 , 58 , e. 2004 , , 613 , 322 quataert , e. , & kasen , d. 2012 , mnras , 419 , l1 , r. m. , et al . 2011 , , 474 , 487 , a. , et al . 2011 , , 729 , 88 , h. , et al . 2012 , science , 337 , 444 , n. i. , & sunyaev , r. a. 1973 , , 24 , 337 , n. , & mccray , r. 2007 , , 671 , l17 , a. m. , et al . 2008 , , 453 , 469 , h. c. 2002 , , 381 , 923 stone , j. , pringle , j. , & begelman , m. 1999 , mnras , 310 , 1002 woosley , s. e. 1993 , apj , 405 , 273 woosley , s. e. , & heger , a. 2011 , eprint arxiv , 1110.3842 , s. e. , heger , a. , & weaver , t. a. 2002 , reviews of modern physics , 74 , 1015 , s. e. , & weaver , t. a. 1995 , , 101 , 181 zhang , w. , woosley , s. e. , & heger , a. 2008 , apj , 679 , 639 kb10 described a one zone diffusion estimate for bolometric supernova light curves powered by an injection of energy with arbitrary time - dependence , @xmath238 . the argument follows along the lines of @xcite . as the ejecta expand , energy is lost both due to escaping radiation ( @xmath239 ) and adiabatic losses from expansion : for accretion powered light curves , a power law form , @xmath251 with @xmath173 provides an excellent approximate description for the numerical light curves integrated with eq . ( [ lcurve ] ) . the semi - analytic solution for @xmath252 is : where @xmath254 is the lower incomplete gamma function . the incomplete gamma function is complex for negative arguments . since the observed light curve and the integral in eq . ( [ lcurve ] ) are real , the imaginary part in eq . ( [ powerlawlc ] ) vanishes . in the special case of constant energy injection ( @xmath255 ) , the solution is ( cf . 13 of kb10 ) : this light curve estimate assumes a constant opacity . a different limit occurs when the outer portion of the ejecta drops below the ionization temperature and recombines . the opacity drops suddenly in the recombined material , and the effect is that of a recombination wave passing through the ejecta . 
this effect significantly alters the light curve evolution of type ii - p supernovae ( e.g. , * ? ? ? * ; * ? ? ? * ) . during the passage of the recombination wave through the ejecta , the photosphere remains at the ionization temperature , @xmath260 . the luminosity can then be calculated from the time - dependent photospheric radius : where @xmath262 and @xmath263 is the dimensionless position of the photosphere in the expanding ejecta . we can write the equivalent of equation ( [ eq:2 ] ) for the evolution of the internal energy of the ionized region , @xmath264 : we again use the diffusion equation to write @xmath239 in terms of the internal energy , except now using @xmath266 instead of @xmath186 . finally , we equate the photospheric luminosity with that from diffusion in the ionized region , which gives an expression for @xmath175 in terms of @xmath263 . the result is a non - linear first order differential equation for @xmath267 : in the absence of heating ( @xmath258 ) , eq . ( [ eq:4 ] ) is similar to eq . 14 of @xcite , except with slightly different numerical coefficients . in this case , the analytic solution for the luminosity starting at time @xmath269 , such that @xmath270 , is : in general , we calculate the luminosity assuming constant opacity using eq . ( [ lcurve ] ) . then , the approximate one zone photospheric temperature is given by @xmath272 . when this drops below @xmath260 , we numerically integrate eq . ( [ eq:4 ] ) for @xmath267 , and then calculate @xmath273 . the recombination wave can significantly increase the peak luminosity in hydrogen rich progenitors ( see section [ outcomes ] ) . more accurate radiative transfer calculations would likely find smoother light curves than those estimated from this one zone approach .
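The constant-opacity one-zone estimate can be checked numerically. The following is a minimal Python sketch, assuming the standard one-zone (Arnett-like) integral form that eq. ( [ lcurve ] ) takes in kb10, L(t) = exp[-(t/t_d)^2] * integral_0^t (2 t'/t_d^2) L_in(t') exp[(t'/t_d)^2] dt'; the diffusion time, heating normalization, and turn-off time below are illustrative assumptions, not values taken from the models in the text.

```python
import math

def one_zone_lightcurve(L_in, t_d, t_grid):
    """One-zone diffusion light curve for an arbitrary heating rate L_in(t):
    L(t) = exp(-(t/t_d)^2) * integral_0^t (2 t'/t_d^2) L_in(t') exp((t'/t_d)^2) dt',
    evaluated with a simple trapezoidal rule (t and t_d in the same units)."""
    out = []
    integral = 0.0
    t_prev, f_prev = 0.0, 0.0
    for t in t_grid:
        f = (2.0 * t / t_d**2) * L_in(t) * math.exp((t / t_d) ** 2)
        integral += 0.5 * (f + f_prev) * (t - t_prev)
        out.append(math.exp(-((t / t_d) ** 2)) * integral)
        t_prev, f_prev = t, f
    return out

# assumed fiducial numbers (illustrative only): diffusion time 30 days,
# fallback heating L_in proportional to t^(-5/3), normalized to 1e43 erg/s
# at 1 day, and shut off abruptly at t_off to mimic accretion turn-off
t_d, t_off = 30.0, 45.0

def L_fallback(t, t_floor=1.0):
    if t >= t_off:
        return 0.0  # injection turns off when the outflow unbinds the fallback
    return 1e43 * max(t, t_floor) ** (-5.0 / 3.0)

times = [0.5 * i for i in range(1, 181)]  # 0.5 .. 90 days
lc = one_zone_lightcurve(L_fallback, t_d, times)
```

With the injection shut off at t_off, the computed curve rises, peaks, and then drops steeply once the heating ends, illustrating the sudden late-time decline discussed in the text.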
some fraction of the material ejected in a core collapse supernova explosion may remain bound to the compact remnant , and eventually turn around and fall back . we show that the late time ( @xmath0 days ) power associated with the accretion of this `` fallback '' material may significantly affect the optical light curve , in some cases producing super - luminous or otherwise peculiar supernovae . we use spherically symmetric hydrodynamical models to estimate the accretion rate at late times for a range of progenitor masses and radii and explosion energies . the accretion rate onto the proto - neutron star or black hole decreases as @xmath1 at late times , but its normalization can be significantly enhanced at low explosion energies , in very massive stars , or if a strong reverse shock wave forms at the helium / hydrogen interface in the progenitor . if the resulting super - eddington accretion drives an outflow which thermalizes in the outgoing ejecta , the supernova debris will be re - energized at a time when photons can diffuse out efficiently . the resulting light curves are different and more diverse than previous fallback supernova models which ignored the input of accretion power and produced short - lived , dim transients . the possible outcomes when fallback accretion power is significant include super - luminous ( @xmath2 ) type ii events of both short and long durations , as well as luminous type i events from compact stars that may have experienced significant mass loss . accretion power may unbind the remaining infalling material , causing a sudden decrease in the brightness of some long duration type ii events . this scenario may be relevant for explaining some of the recently discovered classes of peculiar and rare supernovae .
_ purely functional programming _ ( pfp ) has a chance of becoming very popular for the simple reason that we now have laptops with four cores and more . the promise of pfp is that because there are no side - effects , no destructive updates , and no shared mutable state , partitioning a program into pieces that run in parallel becomes straightforward . another consequence of the freedom from impure language constructs is that reasoning about program correctness , both formally and informally , becomes much easier in pfp languages than in , say , imperative languages . therefore it is not surprising that pfp is popular within the theorem proving community . for example , the source code of the interactive theorem proving assistant isabelle @xcite is mostly written in a purely functional style . outside of such specialty communities though , pfp clearly has not reached the mainstream yet . a programming paradigm that pervades today's mainstream is dijkstra's _ structured programming _ @xcite ( sp ) . most young programmers do not even know the term structured programming anymore , but still construct their object - oriented programs out of building blocks like ` if`-branches and ` while`-loops . interestingly , the pfp community largely rejects sp because it smells of side - effects , destructive updates , and mutable state , just the things a purely functional programmer wants to avoid . as an example , let us examine the isabelle ( version 2009 - 2 ) source code . discounting blank lines , it consists of about 140000 lines of standard ml @xcite ( sml ) code . yet , only ten of those lines use the ` while ` keyword of sml ! furthermore , five out of those ten lines are part of isabelle's system level code , and a further three lines stem from the author of this paper trying to circumvent missing tail - recursion optimization .
the reason for this sparse use of ` while ` is clear : in order to use ` while ` in sml one must also use reference cells which are the embodiment of the small amount of impurity still left in sml . the easiest way to make pfp more mainstream might be to make sp , which already is part of the mainstream , an integral part of pfp ! this is what this paper is about . our central tool for such a unification of pfp and sp is the notion of _ linear scope_. linear scope makes heavy use of _ shadowing _ , therefore we first look at shadowing and its treatment in other languages that draw on functional programming , like erlang and scala . we then present the syntax of a toy language called _ mini babel-17 _ to prepare a proper playground for the introduction of linear scope . first we concentrate on how linear scope interacts with the sequencing and nesting of statements . from there the extension to conditionals and loops is straightforward . finally we give a formal semantics for mini babel-17 and hence also for linear scope . here is how you could code in sml the function @xmath0 : fn x = > let val x = x * x in x * x end there is no doubt that the above denotes a pure function . the fact that the introduction of the variable ` x ` via ` val x = x * x ` _ shadows _ the previous binding of ` x ` in ` fn x ` might make it look a little bit more unusual than the more common fn x = > let val y = x * x in y * y end , but of course both denotations are equivalent . rewriting both functions in de bruijn notation @xcite would actually yield the exact same closed term . yet it seems that the conception that shadowing is somehow wrong lies at the heart of why pfp and sp do not overlap in the mind of many programmers . an instance where shadowing is forbidden in order to obtain a notion of pure variables is the programming language erlang which features _ single - assignment _ of variables . 
quoting the inventor of erlang @xcite : `` when erlang sees a statement such as x = 1234 , it binds the variable x to the value 1234 . before being bound , x could take any value : it's just an empty hole waiting to be filled . however , once it gets a value , it holds on to it forever . '' clearly in erlang shadowing is a victim of the idea that variables are not just names bound to values , but that _ the variables themselves are the state _ . something similar can be observed in the programming language scala @xcite . scala combines functional and structured programming in an elegant fashion . but when it comes to integrating _ purely _ functional programming and sp , scala does not go all the way : it also forbids shadowing . for example , the following scala function implements @xmath1 , ( x : int ) = > val y = x*x ; val z = y*y ; z*z , but both of the following expressions are illegal in scala : ( x : int ) = > val y = x*x ; val y = y*y ; y*y , ( x : int ) = > val y = x*x ; y = y*y ; y*y .
the last expression can be turned into a legal scala expression by replacing the keyword ` val ` , which introduces immutable variables , with the keyword ` var ` , which introduces mutable variables : ( x : int ) = > var y = x*x ; y = y*y ; y*y . it might seem that after all , shadowing in scala is possible ! but this is not the case . that ` var ` behaves differently than shadowing can easily be checked : ( x : int ) = > var y = x*x val h = ( ) = > y y = y*y h ( ) * y also implements @xmath1 . with shadowing , we would expect above function to implement @xmath2 . _ babel-17 _ @xcite is a new dynamically - typed programming language in the making which is being developed by the author of this paper . one of its main features is that it combines purely functional programming and structured programming , building on the key observation that shadowing is purely functional . for illustration purposes we use a simplified version of a subset of babel-17 , which we call _ mini babel-17 _ , as a proposal of how a purely functional structured programming language could look like . an implementation of mini babel-17 is available at @xcite . a _ block _ in mini babel-17 is a sequence of _ statements _ : @xmath3 @xmath4 @xmath5 @xmath6 @xmath7 several statements within a single line are separated via semicolons . there are seven kinds of statements : @xmath8 @xmath4 @xmath9 @xmath10 @xmath11 @xmath10 @xmath12 @xmath10 @xmath13 @xmath10 @xmath14 @xmath10 @xmath15 @xmath10 @xmath16 @xmath9 @xmath4 val @xmath17 = @xmath18 @xmath19 @xmath4 @xmath17 = @xmath18 @xmath12 @xmath4 yield @xmath18 @xmath13 @xmath4 if @xmath20 then @xmath3 else @xmath3 end @xmath14 @xmath4 while @xmath20 do @xmath3 end @xmath15 @xmath4 for @xmath17 in @xmath20 do @xmath3 end @xmath16 @xmath4 begin @xmath3 end if the last statement of a _ block _ is a _ yield - statement _ , then the ` yield ` keyword may be dropped in that statement . 
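The difference between a mutable `var` and true shadowing can be reproduced in any language with closures. The following Python sketch mirrors the Scala example above (Python is used here only as an executable stand-in: a Python local behaves like Scala's `var`, in that the closure `h` captures the variable itself, not the value it held when `h` was defined; the helper simulating value capture is our own construction).

```python
def pow8_var(x):
    # mutable-variable semantics, as in the Scala `var` version:
    # the closure h captures the variable y itself
    y = x * x
    h = lambda: y
    y = y * y          # h now sees the updated y
    return h() * y     # (x^4) * (x^4) = x^8

def pow6_shadowing(x):
    # shadowing semantics: the "new" y is a different binding,
    # so h still refers to the old value x^2
    y = x * x
    h = (lambda captured: lambda: captured)(y)  # capture the *value* of y
    y = y * y          # this rebinding does not affect h
    return h() * y     # (x^2) * (x^4) = x^6
```

Running both on the same input makes the distinction concrete: `pow8_var(2)` gives 256 while `pow6_shadowing(2)` gives 64, matching the text's claim that the `var` version implements the eighth power where shadowing would give the sixth.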
a _ simple - expression _ is an expression like it can be found in most other functional languages , i.e. it can be an integer , a boolean , an identifier , an anonymous function , function application , or some operation on _ expressions _ like function application , multiplication or comparison : @xmath20 @xmath4 @xmath21 @xmath10 @xmath22 @xmath10 @xmath17 @xmath10 @xmath17 = > @xmath18 @xmath10 @xmath23 @xmath24 @xmath10 @xmath23 * @xmath24 @xmath10 @xmath23 = = @xmath24 @xmath6 an _ expression _ is either a _ simple - expression _ or a statement : @xmath18 @xmath4 @xmath20 @xmath10 @xmath13 @xmath10 @xmath14 @xmath10 @xmath15 @xmath10 @xmath16 let us gain a first intuitive understanding of mini babel-17 before formally introducing its semantics . here is how you could denote @xmath1 in mini babel-17 : x = > begin val y = x*x ; val z = y*y ; z*z end this looks pretty much like the scala denotation of @xmath1 from section [ sec : shadowing ] . but because mini babel-17 is designed so that shadowing of variables is allowed , an equivalent notation is : x = > begin val x = x*x ; val x = x*x ; x*x end the central idea of mini babel-17 is the notion of _ linear scope_. whenever an identifier x is in linear scope , it is allowed to rebind x to a new value , and _ that rebinding will affect all other _ later _ lookups of @xmath25 that happen within its normal lexical scope_. the rebinding is done via a _ val - assign - statement_. the linear scope of a variable is contained in the usual lexical scope of that variable . the linear scope of a variable x starts * in the statement after the _ val - statement _ that defines @xmath25 , or * in the first statement of an anonymous function that binds @xmath25 as a function argument , if the body of that function is a block , or * in the first statement of the block of a _ for - loop _ where x is the identifier bound by that loop . it continues throughout the rest of the block unless a new linear scope for @xmath25 starts . 
it does extend into nested blocks and statements , but not into _ simple - expressions_. the reason for this is that blocks and statements are ordered sequentially , but there is no natural order for the evaluation of the components of a _ simple - expression_. using the linear scope rules of mini babel-17 , the above function can also be encoded as x => begin x = x*x ; x = x*x ; x*x end if there are no nested blocks involved , then linear scope is no big deal . it is just a fancy way of saying that when , in a _ val - statement _ , the variable being defined shadows a previously defined variable , it is often ok to drop the ` val ` keyword , effectively turning the _ val - statement _ into a _ val - assign - statement_. but with nested blocks , linear scope becomes important . compare the following three programs :

left : ` val x = 2 begin val y = x*x val x = y end x+x ` ( evaluates to 4 )

middle : ` val x = 2 begin val y = x*x x = y end x+x ` ( evaluates to 8 )

right : ` val x = 2 begin val y = x*x val x = 0 x = y end x+x ` ( evaluates to 4 )

the left and right programs both evaluate to 4 because the ` begin ` ... ` end ` block is superfluous : none of its statements have any effect in the outer scope . the middle program evaluates to 8 , though , because the rebinding ` x = y ` affects all later lookups in the lexical scope of that x which has been introduced via ` val x = 2 ` , and ` x+x ` certainly is such a later lookup . maybe the rules of linear scope sound confusing at first , but they really are not : just replace in your mind the ` val`s in the above three programs by ` var`s and view them as imperative programs . what value would you assign now to each program ? let us also recode the last scala expression of section [ sec : shadowing ] as a mini babel-17 expression : x => begin val y = x*x val h = dummy => y y = y*y h 0 * y end mini babel-17 is purely functional , therefore the value of h is of course not changed by the rebinding ` y = y*y ` , which affects only _ later _ lookups of y.
thus the above expression implements @xmath2 , not @xmath1 . conditionals and especially loops are the meat of structured programming . with linear scope , they are easily seen also as part of purely functional programming . all we need to do is to apply linear scoping rules to the nested blocks that the _ if_- , _ while- _ and _ for - statements _ consist of . for example , this is how you can encode the subtraction - based euclidean algorithm for two non - negative integers in mini babel-17 : a => b => if a == 0 then b else val a = a while b != 0 do if a > b then a = a - b else b = b - a end end a end note the line ` val a = a ` which on first sight seems to be superfluous . but while the linear scope of ` b ` encompasses the whole function body , the linear scope of ` a ` does not , because linear scope does not extend into _ simple - expressions_. if mini babel-17 had pattern matching , the line ` val a = a ` could be avoided by starting the function definition with [ a , b ] => @xmath6 instead . in this section we define an operational semantics for mini babel-17 by building a mini babel-17 interpreter written in standard ml . first we represent the grammar of mini babel-17 as sml datatypes : datatype block = block of statement list and statement = sval of identifier * expression | sassign of identifier * expression | syield of expression | sif of simple_expression * block * block | swhile of simple_expression * block | sfor of identifier * simple_expression * block | sblock of block and expression = esimple of simple_expression | eblock of statement and simple_expression = eint of int | ebool of bool | eid of identifier | efun of identifier * expression | ebinop of ( value * value -> value ) * expression * expression and identifier = Id of string note that function application , multiplication , comparison and so on are all described via the ` ebinop ` constructor by providing a suitable parameter of type ` value * value -> value ` .
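the subtraction - based euclidean loop transliterates directly into python ( a translation for comparison , not the paper's code ) ; note that no ` val a = a ` trick is needed , since python function parameters are ordinary assignable locals :

```python
def euclid(a, b):
    # subtraction-based Euclidean algorithm for non-negative integers,
    # mirroring the mini babel-17 loop: repeatedly subtract the smaller
    # operand from the larger until one of them reaches zero
    if a == 0:
        return b
    while b != 0:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

print(euclid(35, 21))  # 7
```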
the type ` value ` represents all values that can be the result of evaluating a mini babel-17 program : datatype value = vbool of bool | vint of int | vfun of value -> value | vlist of value list mini babel-17 wants to be both purely functional and structured ; the most important ingredients of a purely functional program are expressions ; the most important ingredients of an sp program are blocks and statements . this dilemma is resolved by treating statements as special expressions . the interpreter defines the following evaluation functions : eval_b : environment -> block -> environment * value list eval_st : environment -> statement -> environment * value list eval_e : environment -> expression -> value eval_se : environment -> simple_expression -> value the evaluation of blocks and statements yields lists of values instead of single values ; the block ` begin yield 1 ; yield 2 ; 3 end ` , for example , evaluates to ` [ 1 , 2 , 3 ] ` . consider the following mini babel-17 program : val x = 0 begin x = 1 ; x end * begin val x = x + 2 ; x end it does not obey the linear scoping rules of mini babel-17 because x is not in linear scope in the _ val - assign - statement _ ` x = 1 ` . in such a situation , the exception illformed is raised during evaluation . furthermore , an exception typeerror is raised when , for example , the condition of an if - statement evaluates to a list instead of a boolean . note by the way that the program val x = 0 begin val x = 1 ; x end * begin val x = x + 2 ; x end is perfectly fine and evaluates to 2 . what does the environment look like ? it is actually split into two parts : one part for those identifiers that have linear scope , and one part for identifiers that don't . the nonlinear part is a mapping from identifiers to values , the linear part a mapping from identifiers to reference cells of values .
both parts can be described by the polymorphic type ` 'a idmap ` : type 'a idmap = ( string * 'a ) list fun lookup [ ] _ = raise illformed | lookup ( ( t , x)::r ) ( Id s ) = if t = s then x else lookup r ( Id s ) fun remove [ ] _ = [ ] | remove ( ( t , x)::r ) ( Id s ) = if t = s then r else remove r ( Id s ) fun insert m ( ( Id s ) , x ) = ( s , x)::(remove m ( Id s ) ) the type of environments is then introduced as follows : type environment = value idmap * ( value ref ) idmap fun deref [ ] = [ ] | deref ( ( s , vr)::m ) = ( ( s , !vr)::(deref m ) ) fun freeze ( nonlinear , linear ) = ( nonlinear@(deref linear ) , [ ] ) fun bind ( nonlinear , linear ) ( id , value ) = ( remove nonlinear id , insert linear ( id , ref value ) ) fun rebind ( env as ( _ , linear ) ) ( id , value ) = ( lookup linear id := value ; env ) note that bind returns a new environment , and rebind returns the same environment with a mutated linear part . the function freeze turns all mutable linear bindings into immutable nonlinear ones .
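the split environment can also be sketched in python ( my own condensed rendition , not the paper's code : dictionaries for the two parts , one - element lists as reference cells , and just enough evaluation machinery to run the middle program of the three - program comparison above ) :

```python
def freeze(env):
    # merge the linear ref cells into an immutable nonlinear mapping
    nonlinear, linear = env
    frozen = dict(nonlinear)
    frozen.update({k: cell[0] for k, cell in linear.items()})
    return (frozen, {})

def bind(env, name, value):
    # a fresh linear binding: returns a *new* environment
    nonlinear, linear = env
    nl = {k: v for k, v in nonlinear.items() if k != name}
    ln = dict(linear)
    ln[name] = [value]        # one-element list as a mutable ref cell
    return (nl, ln)

def rebind(env, name, value):
    # mutate the shared cell in place: all later lookups see the new value
    _, linear = env
    linear[name][0] = value
    return env

def eval_block(env, stmts):
    values = []
    for s in stmts:
        env, vs = eval_stmt(env, s)
        values.extend(vs)
    return env, values

def eval_stmt(env, s):
    kind = s[0]
    if kind == 'val':         # val id = e : a fresh linear binding
        return bind(env, s[1], s[2](freeze(env)[0])), []
    if kind == 'assign':      # id = e : rebind through the shared cell
        return rebind(env, s[1], s[2](freeze(env)[0])), []
    if kind == 'yield':
        return env, [s[1](freeze(env)[0])]
    if kind == 'block':       # nested block: new `val`s stay local, but
        _, vs = eval_block(env, s[1])   # rebinds of outer linear vars leak out
        return env, vs
    raise ValueError(kind)

# middle program of the comparison:  val x = 2
#                                    begin val y = x*x ; x = y end
#                                    x+x
prog = [('val', 'x', lambda e: 2),
        ('block', [('val', 'y', lambda e: e['x'] * e['x']),
                   ('assign', 'x', lambda e: e['y'])]),
        ('yield', lambda e: e['x'] + e['x'])]
_, result = eval_block(({}, {}), prog)
print(result[0])  # prints 8: the rebind in the nested block reaches the outer x
```

replacing the ` assign ` by a second ` val ` reproduces the left program and yields 4 instead , since a fresh cell is created and the outer one is left untouched .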
now we can give the definition of all evaluation functions : fun eval_b env ( block [ ] ) = ( env , [ ] ) | eval_b env ( block ( s::r ) ) = let val ( env , values_s ) = eval_st env s val ( env , values_r ) = eval_b env ( block r ) in ( env , values_s @ values_r ) end and eval_nestedb env b = let val ( _ , values ) = eval_b env b in ( env , values ) end and eval_st env ( sval ( id , e ) ) = let val value = eval_e env e in ( bind env ( id , value ) , [ ] ) end | eval_st env ( sassign ( id , e ) ) = let val value = eval_e env e in ( rebind env ( id , value ) , [ ] ) end | eval_st env ( syield e ) = let val value = eval_e env e in ( env , [ value ] ) end | eval_st env ( sblock b ) = eval_nestedb env b | eval_st env ( sif ( cond , yes , no ) ) = ( case eval_se env cond of vbool true => eval_nestedb env yes | vbool false => eval_nestedb env no | _ => raise typeerror ) | eval_st env ( loop as swhile ( cond , body ) ) = ( case eval_se env cond of vbool true => let val ( _ , values_1 ) = eval_b env body val ( _ , values_2 ) = eval_st env loop in ( env , values_1 @ values_2 ) end | vbool false => ( env , [ ] ) | _ => raise typeerror ) | eval_st env ( sfor ( id , list , body ) ) = ( case eval_se env list of vlist l => eval_for env id body l | _ => raise typeerror ) and eval_for env id body [ ] = ( env , [ ] ) | eval_for env id body ( x::xs ) = let val ( _ , values_1 ) = eval_b ( bind env ( id , x ) ) body val ( _ , values_2 ) = eval_for env id body xs in ( env , values_1@values_2 ) end and eval_e env ( esimple se ) = eval_se env se | eval_e env ( eblock s ) = ( case eval_b env ( block [ s ] ) of ( _ , [ a ] ) => a | ( _ , l ) => vlist l ) and eval_se env se = eval_simple ( freeze env ) se and eval_simple env ( eint i ) = vint i | eval_simple env ( ebool b ) = vbool b | eval_simple env ( ebinop ( f , a , b ) ) = f ( eval_e env a , eval_e env b ) | eval_simple ( nonlinear , _ ) ( eid id ) = lookup nonlinear id | eval_simple env ( efun ( id , body ) ) =
vfun ( fn value => eval_e ( bind env ( id , value ) ) body ) here is the evaluation function that computes the meaning of a mini babel-17 program , i.e. of a block : eval : block -> value fun eval prog = eval_e ( [ ] , [ ] ) ( eblock ( sblock prog ) ) it is straightforward to extract from the above evaluation functions a wellformedness criterion such that if a mini babel-17 program is statically checked to be wellformed according to that criterion , no illformed exception will be raised during the evaluation of the program : val value = vint 0 fun check_b env ( block [ ] ) = env | check_b env ( block ( s::r ) ) = check_b ( check_st env s ) ( block r ) and check_st env ( sval ( id , e ) ) = ( check_e env e ; bind env ( id , value ) ) | check_st env ( sassign ( id , e ) ) = ( check_e env e ; rebind env ( id , value ) ) | check_st env ( syield e ) = ( check_e env e ; env ) | check_st env ( sblock b ) = ( check_b env b ; env ) | check_st env ( sif ( cond , yes , no ) ) = ( check_se env cond ; check_b env yes ; check_b env no ; env ) | check_st env ( loop as swhile ( cond , body ) ) = ( check_se env cond ; check_b env body ; env ) | check_st env ( sfor ( id , list , body ) ) = ( check_se env list ; check_b ( bind env ( id , value ) ) body ; env ) and check_e env ( esimple se ) = check_se env se | check_e env ( eblock s ) = ( check_b env ( block [ s ] ) ; ( ) ) and check_se env se = check_simple ( freeze env ) se and check_simple env ( eint i ) = ( ) | check_simple env ( ebool b ) = ( ) | check_simple env ( ebinop ( f , a , b ) ) = ( check_e env a ; check_e env b ) | check_simple ( nonlinear , _ ) ( eid id ) = ( lookup nonlinear id ; ( ) ) | check_simple env ( efun ( id , body ) ) = check_e ( bind env ( id , value ) ) body fun check prog = check_e ( [ ] , [ ] ) ( eblock ( sblock prog ) ) the function _ check _ terminates because it is basically defined via primitive recursion on the structure of the program .
furthermore , the set of calls to _ lookup _ generated during an execution of _ check prog _ is clearly a superset of the set of calls to _ lookup _ generated during the execution of _ eval prog_. therefore , if _ check prog _ does not raise an exception _ illformed _ , then neither will _ eval prog_. with mini babel-17 , you can freely choose between a programming style that uses loops and a programming style that puts its emphasis on the use of higher - order functionals . if you have an imperative background , you might start out with using loops everywhere , and then migrate slowly to the use of functionals like _ map _ or _ fold _ as your understanding of functional programming increases . but even after your functional programming skills have matured , you might still choose to use loops in appropriate situations . let us for example look at a function that takes a list of integers $ [ a_0 , \ldots , a_{n-1 } ] $ and an integer @xmath25 as arguments and returns the list $ [ q_0 , \ldots , q_{n-1 } ] $ , where $ q_k = \sum_{i=0}^k a_i x^i $ . the implementation in mini babel-17 via a loop is straightforward , efficient and even elegant : m => x => begin val y = 0 val p = 1 for a in m do y = y + a*p p = p * x yield y end end we have already mentioned how scala also combines structured programming with functional programming , but fails to deliver a combination of structured programming and _ purely _ functional programming . actually , it should be possible to conservatively extend scala so that linear scope for variables defined via ` val ` is supported . the work done on monads in the purely functional programming language haskell @xcite has a superficial similarity with the work done in this paper . with monads it is possible to formulate sequences of ( possibly shadowing ) assignments , and with the help of monad transformers even loops can be modeled .
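for comparison , the partial - sums loop above can be written in python essentially verbatim ( a direct transliteration ; the name ` partial_sums ` is my own ) :

```python
def partial_sums(m, x):
    # returns [q_0, ..., q_{n-1}] with q_k = a_0 + a_1*x + ... + a_k*x**k,
    # mirroring the mini babel-17 loop: `yield y` becomes an append
    out = []
    y = 0
    p = 1
    for a in m:
        y = y + a * p   # add the next term a_k * x**k
        p = p * x       # maintain p = x**(k+1) for the following iteration
        out.append(y)
    return out

print(partial_sums([1, 2, 3], 2))  # [1, 5, 17]
```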
but in order to understand and effectively use monads , a solid background in functional programming is useful , if not required ; linear scope , on the other hand , is understood intuitively by programmers with a mostly imperative background , because mini babel-17 programs can look just like imperative programs and do not introduce additional clutter like the need for lifting . actually , in haskell monads are used to limit the influence of mutable state to a confined region of the code that can be recognized by its type ; the work in this paper has the entirely different focus of trying to merge the structured and purely functional programming styles as seamlessly as possible . this work is not directly connected to work done on linear or uniqueness types @xcite . of course one might think about applying uniqueness typing to mini babel-17 , but mini babel-17 itself is dynamically - typed and its values are persistent and can be passed around without any restrictions . the current separation between sp and pfp is an artificial one . there is no good reason anymore why sp should not be used where appropriate for the sequential parts of a purely functional program , except the personal preference of the programmer . the purpose of mini babel-17 is to show the importance of linear scope for unifying sp and pfp . babel-17 incorporates further important features for purely functional and structured programming like mutually recursive functions , pattern matching , exceptions , objects , memoization , concurrency , laziness , and more syntactic sugar .
the idea of functional programming has played a big role in shaping today's landscape of mainstream programming languages . another concept that dominates the current programming style is dijkstra's structured programming . both concepts have been successfully married , for example in the programming language scala . this paper proposes how the same can be achieved for structured programming and _ purely _ functional programming via the notion of _ linear scope_. one advantage of this proposal is that mainstream programmers can reap the benefits of purely functional programming , like easily exploitable parallelism , while using familiar structured programming syntax and without knowing concepts like monads . a second advantage is that professional purely functional programmers can often avoid hard - to - read functional code by using structured programming syntax that is easier to parse mentally .
random walk metropolis ( rwm ) algorithms are widely used generic markov chain monte carlo ( mcmc ) algorithms . the ease with which rwm algorithms can be constructed has no doubt played a pivotal role in their popularity . the efficiency of a rwm algorithm depends fundamentally upon the scaling of the proposal density . choose the variance of the proposal to be too small and the rwm will converge slowly since all its increments are small . conversely , choose the variance of the proposal to be too large and too high a proportion of proposed moves will be rejected . of particular interest is how the scaling of the proposal variance depends upon the dimensionality of the target distribution . the target distribution is the distribution of interest and the mcmc algorithm is constructed such that the stationary distribution of the markov chain is the target distribution . the introduction is structured as follows . we outline known results for continuous independent and identically distributed product densities from @xcite and subsequent work . we highlight the scope and limitations of the results before introducing the discontinuous target densities to be studied in this paper . while the statements of the key results ( theorem [ main ] ) in this paper are similar to those given for continuous target densities , the proofs are markedly different . a discussion of why a new method of proof is required for discontinuous target densities is given . finally , we give an outline of the remainder of the paper . the results of this paper have quite general consequences for the implementation of metropolis algorithms on discontinuous densities ( as are commonly applied in many bayesian statistics problems ) , namely : full- ( high- ) dimensional update rules can be an order of magnitude slower than strategies involving smaller dimensional updates . ( see theorem [ thmprop ] below . 
) for target densities with bounded support , metropolis algorithms can be an order of magnitude slower than algorithms which first transform the target support to @xmath5 for some @xmath0 . in @xcite , a sequence of target densities of the form @xmath6 were considered as @xmath7 , where @xmath8 is twice differentiable and satisfies certain mild moment conditions ; see @xcite , ( a1 ) and ( a2 ) . the following random walk metropolis algorithm was used to obtain a sample @xmath9 from @xmath10 . draw @xmath11 from @xmath12 . for @xmath13 and @xmath14 let @xmath15 be independent and identically distributed ( i.i.d . ) according to @xmath16 and @xmath17 . at time @xmath18 , propose @xmath19 where @xmath20 is the proposal standard deviation to be discussed shortly . set @xmath21 with probability @xmath22 otherwise set @xmath23 . it is straightforward to check that @xmath24 has stationary distribution @xmath12 , and hence , for all @xmath13 , @xmath25 . the key question addressed in @xcite was : starting from the stationary distribution , how should @xmath20 be chosen to optimize the rate at which the rwm algorithm explores the stationary distribution ? since the components of @xmath26 are i.i.d . , it suffices to study the marginal behavior of the first component , @xmath27 . in @xcite , it was shown that if @xmath28 @xmath29 and @xmath30,1}^d$ ] @xmath31 , then @xmath32 where @xmath33 satisfies the langevin sde @xmath34 with @xmath35 and @xmath36 with @xmath37 being the standard normal c.d.f . and @xmath38 $ ] . note that the `` speed measure '' of the diffusion @xmath39 only depends upon @xmath40 through @xmath41 . the diffusion limit for @xmath42 is unsurprising in that for a time interval of length @xmath43 , @xmath44 moves are made each of size @xmath45 . therefore the movements in the first component ( appropriately normalized ) converge to those of a langevin diffusion with the `` most efficient '' asymptotic diffusion having the largest speed measure @xmath46 . 
since the diffusion limit involves speeding up time by a factor of @xmath0 , we say that the mixing of the algorithm is @xmath44 . the optimal value of @xmath47 is @xmath48 , which leads to an average optimal acceptance rate ( aoar ) of 0.234 . this has major practical implications for practitioners , in that , to monitor the ( asymptotic ) efficiency of the rwm algorithm it is sufficient to study the proportion of proposed moves accepted . there are three key assumptions made in @xcite . first , @xmath49 , that is , the algorithm starts in the stationary distribution and @xmath20 is chosen to optimize exploration of the stationary distribution . this assumption has been made in virtually all subsequent optimal scaling work ; see , for example , @xcite and @xcite . the one exception is @xcite , where @xmath11 is started from the mode of @xmath12 with explicit calculations given for a standard multivariate normal distribution . in @xcite , it is shown that @xmath50 is optimal for maximizing the rate of convergence to the stationary distribution . since convergence is shown to occur within @xmath51 iterations , the time taken to explore the stationary distribution dominates the time taken to converge to the stationary distribution , and thus overall it is optimal to choose @xmath52 . it is difficult to prove generic results for @xmath53 . however , the findings of @xcite suggest that even when @xmath54 , it is best to scale the proposal distribution based upon @xmath55 . it is worth noting that in @xcite it was found that for the metropolis adjusted langevin algorithm ( mala ) , the optimal scaling of @xmath20 for @xmath11 started at the mode of a multivariate normal is @xmath56 compared to @xmath57 for @xmath55 . second , @xmath12 is an i.i.d . product density . 
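the optimization behind the 0.234 rule can be checked numerically : with limiting acceptance rate $ a(l) = 2 \phi ( - l \sqrt{i}/2 ) $ , maximizing the diffusion speed $ l^2 a(l) $ by grid search recovers $ \hat{l} \approx 2.38 / \sqrt{i} $ and an acceptance rate near 0.234 ( a numerical illustration of the cited result , not code from the paper ) :

```python
import math

def Phi(x):
    # standard normal c.d.f. via the complementary error function
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def optimal_scaling(I=1.0, step=0.001):
    # grid-search maximization of the diffusion speed h(l) = l^2 * a(l),
    # where a(l) = 2*Phi(-l*sqrt(I)/2) is the limiting acceptance rate
    best_l, best_h = 0.0, -1.0
    n = 1
    while n * step < 10.0:
        l = n * step
        h = l * l * 2.0 * Phi(-l * math.sqrt(I) / 2.0)
        if h > best_h:
            best_l, best_h = l, h
        n += 1
    return best_l, 2.0 * Phi(-best_l * math.sqrt(I) / 2.0)

l_hat, aoar = optimal_scaling()
print(round(l_hat, 2), round(aoar, 3))  # about 2.38 and 0.234
```

note that the optimal acceptance rate is invariant to the roughness constant i , while the optimal scaling $ \hat{l} $ shrinks like $ 1/\sqrt{i} $ .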
this assumption has been relaxed by a number of authors with @xmath50 and an aoar of 0.234 still being the case , for example , independent , scaled product densities ( @xcite and @xcite ) , gibbs random fields @xcite , exchangeable normals @xcite and elliptical densities @xcite . thus the simple rule of thumb of tuning @xmath20 such that one in four proposed moves are accepted holds quite generally . in @xcite and @xcite , examples where the aoar is strictly less than 0.234 are given . these correspond to different orders of magnitude being appropriate for the scaling of the proposed moves in different components . third , the results are asymptotic as @xmath7 . however , simulations have shown that for i.i.d . product densities an acceptance rate of 0.234 is close to optimal for @xmath58 ; see , for example , @xcite . departures from the i.i.d . product density require larger @xmath0 for the asymptotic results to be optimal , but @xmath59 is often seen in practical mcmc problems . in @xcite and @xcite , optimal acceptance rates are obtained for finite @xmath0 for some special cases . with the exceptions of @xcite and @xcite , in the above works @xmath60 is assumed to have a continuous ( and suitably differentiable ) probability density function ( p.d.f . ) . the aim of the current work is to investigate the situation where the target distribution has a discontinuous p.d.f . , and specifically , target distributions confined to the @xmath0-dimensional hypercube @xmath61^d$ ] . that is , we consider target distributions of the form @xmath62 where @xmath63 and @xmath64 is twice differentiable upon @xmath61 $ ] with @xmath65 we then use the following random walk metropolis algorithm to obtain a sample @xmath66 from @xmath12 . draw @xmath11 from @xmath12 . for @xmath13 and @xmath14 let @xmath67 be independent and identically distributed ( i.i.d . ) according to @xmath68 $ ] and @xmath69 . 
at time @xmath18 , propose @xmath70 set @xmath21 with probability @xmath71 otherwise set @xmath23 . in @xcite and @xcite , spherical and elliptical densities are considered which have a very different geometry to the hypercube - restricted densities . therefore different approaches are taken in these papers , with results akin to those obtained for continuous target densities . densities of the form ( [ eq1b ] ) have previously been studied in @xcite , where the expected square jumping distance ( esjd ) has been computed . the esjd is $ \mathrm{esjd } = d \ , { \mathbb{e}}_{\pi_d } [ ( x_{1,1}^d - x_{0,1}^d)^2 ] $ , the mean squared distance between @xmath11 and @xmath73 , where @xmath55 . in @xcite , appendix b , it is shown that for @xmath74 @xmath75 and @xmath76 , $ \mathrm{esjd } \rightarrow \frac{l^2}{3 } \exp ( - \frac{l}{2 } ) $ as @xmath7 . thus asymptotically ( as @xmath7 ) the esjd is maximized by taking @xmath78 , which corresponds to an aoar of @xmath79 ( @xmath80 0.1353 ) . in this paper , we show that @xmath74 and an aoar of @xmath79 hold more generally for target distributions of the form given by ( [ eq1a ] ) and ( [ eq1b ] ) . moreover , we prove a much stronger result than that given in @xcite , in that we prove that $ x_{[d^2 t],1}^d $ converges weakly to an appropriate langevin diffusion @xmath82 with speed measure @xmath83 as @xmath7 , where @xmath84 . this gives a clear indication of how the markov chain explores the stationary distribution . by contrast the esjd only gives a measure of average behavior and does not take account of the possibility of the markov chain becoming `` stuck . '' if @xmath85 is very low , the markov chain started @xmath86 is likely to spend a large number of iterations at @xmath87 before accepting a move away from @xmath87 . note that since @xmath88 involves speeding up time by a factor of @xmath89 , we say that the mixing of the algorithm is @xmath90 .
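the maximizer of the limiting esjd can be found by elementary calculus ( spelling out the computation behind the optimum and acceptance rate quoted above ) :

```latex
\frac{d}{dl}\left[\frac{l^2}{3}\,e^{-l/2}\right]
  = \frac{e^{-l/2}}{3}\left(2l - \frac{l^2}{2}\right)
  = \frac{l\,e^{-l/2}}{6}\,(4 - l),
```

which , for $ l > 0 $ , vanishes only at $ \hat{l } = 4 $ ; the corresponding acceptance rate is $ e^{-\hat{l}/2 } = e^{-2 } \approx 0.1353 $ , matching the aoar quoted above .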
the esjd is easy to compute , and asymptotically , as @xmath7 , the esjd ( appropriately scaled ) converges to @xmath39 . thus in discussing possible extensions of the langevin diffusion limit proved in theorem [ main ] for i.i.d . product densities of the form given in ( [ eq1a ] ) and ( [ eq1b ] ) , we make considerable use of the esjd . however , we highlight the limitations of the esjd in discussing extensions of theorem [ main ] . in most previous work on optimal scaling , the components of @xmath91 are taken to be independent and identically distributed @xmath16 random variables . the reason for choosing @xmath68 for discontinuous target densities is mathematical convenience . the results proved in this paper hold with gaussian rather than uniform proposal distributions , but some elements of the proof are less straightforward . for discussion of the esjd for densities ( [ eq1a ] ) for general @xmath92 subject to @xmath93 being finite , see @xcite , appendix b. while the key result , a langevin diffusion limit for the movement in the first component , is the same as in @xcite , the proof is markedly different . note that , for finite @xmath0 , @xmath42 and @xmath94 are not markov chains , since whether or not a proposed move is accepted depends upon all the components in @xmath12 . in @xcite , it is shown that there exists @xmath95 such that $ { \mathbb{p } } ( \bigcup_{t \leq [ dt ] } \{ \mathbf{x}_t^d \notin f_d \ } ) \rightarrow 0 $ as @xmath7 and $$ \sup_{\mathbf{x}^d \in f_d } \biggl| { \mathbb{e } } [ \alpha ( \mathbf{x}^d , \mathbf{x}^d + \sigma_d \mathbf{z}^d ) ] - 2\phi\biggl ( -\frac{l \sqrt{i}}{2 } \biggr ) \biggr| \leq \varepsilon_d , $$ where @xmath98 as @xmath7 . while ( [ eqrevb7 ] ) is not explicitly stated in @xcite , it is the essence of the requirements of the sets @xmath99 , stating that for large @xmath0 , with high probability over the first @xmath100 iterations , the acceptance probability of the markov chain is approximately constant , being within @xmath101 of @xmath102 . ( note that @xmath103 rather than @xmath0 is used for dimensionality in @xcite .
) thus in the limit as @xmath7 , the effect of the other components on movements in the first component converges to a deterministic acceptance probability @xmath102 . the situation is more complex for @xmath12 of the form given by ( [ eq1a ] ) and ( [ eq1b ] ) , as the acceptance rate in the limit as @xmath7 is inherently stochastic . for example , suppose @xmath12 is the uniform distribution on the @xmath0-dimensional hypercube , so that $ \pi_d ( \mathbf{x}^d ) = 1_{\ { \mathbf{x}^d \in [ 0,1]^d \ } } $ . letting @xmath105 and @xmath106 , this gives $$ { \mathbb{p } } ( \mathbf{x}^d + \sigma_d \mathbf{z}^d \in [ 0,1]^d \mid \mathbf{x}^d ) = \prod_{i \in r_d^l } \biggl ( \frac{1}{2 } + \frac{x_i}{2 \sigma_d } \biggr ) \times \prod_{i \in r_d^u } \biggl ( \frac{1}{2 } + \frac{1-x_i}{2 \sigma_d } \biggr ) . $$ thus the acceptance probability is totally determined by the components at the boundary ( within @xmath20 of 0 or 1 ) . the total number of components in @xmath108 is @xmath109 , which converges in distribution to @xmath110 as @xmath7 . thus the number of components close to the boundary is inherently stochastic . moreover , the location of the components within @xmath111 plays a crucial role in the acceptance probability ; see ( [ eqrevb8 ] ) . therefore there is no hope of replicating directly the method of proof applied in @xcite and subsequently in @xcite and @xcite . we need a homogenization argument which involves looking at @xmath112 over $ [ d^\delta ] $ steps . in particular , we show that the acceptance probability converges very rapidly to its stationary measure , so that over $ [ d^\delta ] $ iterations approximately $ [ d^\delta ] \exp ( - \frac{l f^\ast}{2 } ) $ proposed moves are accepted . by comparison , $ | x_{[d^\delta],1}^d - x_{0,1}^d | \leq [ d^\delta ] \sigma_d $ ; thus the value of an individual component only makes small changes over $ [ d^\delta ] $ iterations .
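the limiting acceptance rate for the uniform hypercube can be checked by direct simulation ( my own experiment , not from the paper ; it assumes sigma_d = l/d and uniform [ -1 , 1 ] proposal increments , which match the factor 1/3 appearing in the esjd limit quoted earlier ) :

```python
import math
import random

def acceptance_rate(d, l, iters, seed=0):
    # RWM for the uniform target on [0,1]^d, started in stationarity:
    # propose x + sigma_d * Z with sigma_d = l/d and Z uniform on [-1,1]^d;
    # for this target a move is accepted exactly when the proposal stays
    # inside the hypercube.
    rng = random.Random(seed)
    sigma = l / d
    x = [rng.random() for _ in range(d)]
    accepted = 0
    for _ in range(iters):
        y = [xi + sigma * rng.uniform(-1.0, 1.0) for xi in x]
        if all(0.0 <= yi <= 1.0 for yi in y):
            x = y
            accepted += 1
    return accepted / iters

rate = acceptance_rate(d=100, l=4.0, iters=20000)
print(rate, math.exp(-2.0))  # the two numbers should be close
```

the observed acceptance rate approaches $ e^{-l/2 } $ because each component violates a boundary with probability $ \sigma_d / 2 $ , giving $ ( 1 - l/(2d) )^d \rightarrow e^{-l/2 } $ .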
that is , we show that there exists @xmath116 such that , for any @xmath117 , $ { \mathbb{p } } ( \bigcup_t \{ \mathbf{x}_t^d \notin \tilde{f}_d \ } ) \rightarrow 0 $ as @xmath7 and , for @xmath119 , $$ \sup_{\mathbf{x}^d \in \tilde{f}_d } \biggl| \frac{1}{[ d^\delta ] } \sum_{t=0}^{[ d^\delta]-1 } { \mathbb{e } } [ \alpha(\mathbf{x}^d_t , \mathbf{x}^d_t + \sigma_d \mathbf{z}_t^d ) | \mathbf{x}_0^d = \mathbf{x}^d ] - \exp \biggl ( - \frac{lf^\ast}{2 } \biggr ) \biggr| \leq \varepsilon_d $$ for some @xmath98 as @xmath7 . for large @xmath0 , with high probability over the first @xmath121 iterations , the markov chain stays in @xmath122 , where the average number of accepted proposed moves in the following $ [ d^\delta ] $ iterations is @xmath123 . the arguments are considerably more involved than in @xcite , where spherically constrained target distributions were studied , due to the very different geometries of the hypercube and spherical constraints applied in this paper and @xcite , respectively . in particular , in @xcite , @xmath28 with an aoar of @xmath124 . by exploiting the homogenization argument it is possible to prove that @xmath94 converges weakly to an appropriate langevin diffusion @xmath125 , given in theorem [ main ] . in section [ secalg ] , theorem [ main ] is presented along with an outline of the proof . also in section [ secalg ] , a description of the pseudo - rwm algorithm is given . the pseudo - rwm algorithm plays a key role in the proof of theorem [ main ] . the pseudo - rwm process moves at each iteration and the moves in the pseudo - rwm process are identical to those of the rwm process , conditioned upon a proposed move in the rwm process being accepted . the proof of theorem [ main ] is long and technical , with the details split into three key sections which are given in the appendix ; see section [ secalg ] for more details . in section [ secextsim ] , two interesting extensions of theorem [ main ] are given .
in particular , theorem [ thmprop ] has major practical implications for the implementation of rwm algorithms by highlighting the detrimental effect of choosing rwm algorithms over metropolis - within - gibbs algorithms . the target densities for which theoretical results can be proved are limited , so a discussion of possible extensions of theorem [ main ] is given . in particular , we discuss general @xmath60 restricted to the hypercube , general discontinuities in @xmath40 and @xmath53 . we begin by defining the pseudo - random walk metropolis ( pseudo - rwm ) process . we will then be in a position to formally state the main theorem , theorem [ main ] . an outline of the proof of theorem [ main ] is given , with the details , which are long and technical , placed in the appendix . for @xmath126 , let @xmath127 let @xmath128 denote the probability of accepting a move in the rwm process given the current state of the process is @xmath87 . then @xmath129 let @xmath130 , the total number of components of @xmath87 in @xmath131 . by taylor's theorem , for all @xmath132 and @xmath133 , @xmath134 with @xmath135 defined in ( [ eq1c ] ) . hence , for all @xmath136^d$ ] , @xmath137^d \ } } \,d \mathbf{z}^d \nonumber\\ & \geq&\int h_d ( \mathbf{z}^d ) \ { 1 \wedge\exp(- dg^\ast \sigma_d ) \ } 1_{\ { \mathbf{x}^d + \sigma_d \mathbf { z}^d \in [ 0,1]^d \ } } \,d \mathbf{z}^d \nonumber\\[-8pt]\\[-8pt ] & = & \exp(-l g^\ast ) \int h_d ( \mathbf{z}^d ) 1_{\ { \mathbf { x}^d + \sigma_d \mathbf{z}^d \in[0,1]^d \ } } \,d \mathbf{z}^d \nonumber\\ & \geq & \exp(-l g^\ast ) \biggl ( \frac{1}{2 } \biggr)^{b_d^l ( \mathbf{x}^d)}.\nonumber\end{aligned}\ ] ] this lower bound for @xmath138 will be used repeatedly . the pseudo - rwm process moves at each iteration , which is the key difference from the rwm process . furthermore , the moves in the pseudo - rwm process are identical to those of the rwm process , conditioned upon a move in the rwm process being accepted , that is , its jump chain .
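the jump - chain relationship can be checked in a toy example . the sketch below is not the paper's @xmath0-dimensional setting : it uses a one - dimensional uniform target on [ 0 , 1 ] with uniform proposal increments , so that a move is accepted exactly when the proposal stays inside , and it verifies that the holding time at a state is geometric with mean one over the acceptance probability .

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.25
x = 0.05   # a state near the boundary, where rejections are common

def alpha(x):
    # exact probability that a proposal x + sigma*Z, Z ~ uniform[-1, 1],
    # lands in [0, 1] (= acceptance probability for the uniform target)
    return (min(x + sigma, 1.0) - max(x - sigma, 0.0)) / (2 * sigma)

def holding_time():
    # number of iterations the chain stays at x before its first move
    t = 1
    while not (0.0 <= x + sigma * rng.uniform(-1.0, 1.0) <= 1.0):
        t += 1
    return t

times = [holding_time() for _ in range(50_000)]

# geometric("success" probability alpha(x)) has mean 1/alpha(x)
print(np.mean(times), 1.0 / alpha(x))
```

reconstructing the rwm path from the pseudo - rwm ( jump ) chain then amounts to repeating each accepted state for an independent geometric number of iterations .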
for @xmath139 , let @xmath140 denote the successive states of the pseudo - rwm process , where @xmath141 . the pseudo - rwm process is a markov process , where for @xmath13 , @xmath142 and given that @xmath143 , @xmath144 has p.d.f . @xmath145 note that @xmath146 for @xmath147 . since @xmath148 , we can couple the two processes to have the same starting value @xmath11 . a continued coupling of the two processes is outlined below . suppose that @xmath149 . then for any @xmath150 , @xmath151 that is , the number of iterations the rwm algorithm stays at @xmath87 before moving follows a geometric distribution with `` success '' probability @xmath128 . therefore for @xmath152 , let @xmath153 denote independent geometric random variables , where for @xmath154 , @xmath155 denotes a geometric random variable with `` success '' probability @xmath156 . for @xmath157 , let @xmath158 and for @xmath159 , let @xmath160 where the sum is zero if vacuous . for @xmath161 , attach @xmath162 to @xmath163 . thus @xmath164 denotes the total number of iterations the rwm process spends at @xmath163 before moving to @xmath165 . hence , the rwm process can be constructed from @xmath166 by setting @xmath167 and for all @xmath150 , @xmath168 . obviously the above process can be reversed by setting @xmath169 equal to the @xmath18th accepted move in the rwm process . for each @xmath126 , the components of @xmath11 are independent and identically distributed . therefore we focus attention on the first component as this is indicative of the behavior of the whole process . for @xmath126 and @xmath13 , let @xmath170,1}^d$ ] and @xmath171,1}^d$ ] . [ main ] fix @xmath172 . for all @xmath126 , let @xmath173 . then , as @xmath7 , @xmath174 in the skorokhod topology on @xmath175 , where @xmath176 satisfies the ( reflected ) langevin sde on @xmath61 $ ] @xmath177 with @xmath178 . note that @xmath179 is standard brownian motion , @xmath180 and @xmath181 . 
here @xmath182 denotes the local time of @xmath125 at @xmath183 @xmath184 and the sde ( [ eq2a1 ] ) corresponds to standard reflection at the boundaries @xmath185 and @xmath186 ( see , e.g. , chapter of @xcite ) . as noted in section [ secint ] , the acceptance probability of the rwm process is inherently random and therefore it is necessary to consider the behavior of the rwm process averaged over @xmath113 $ ] iterations , for @xmath119 . fix @xmath187 and let @xmath188 be a sequence of positive integers satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] . for @xmath161 , let @xmath190}^d$ ] and for @xmath191 , let @xmath192,1}^d$ ] . for all @xmath13 , @xmath193 and @xmath194 - [ d^\delta ] \times[d^2 t/[d^\delta]]| \leq[d^\delta]$ ] . hence , for all @xmath195 , @xmath196 \sigma_d.\ ] ] therefore by @xcite , theorem 4.1 , @xmath197 as @xmath7 , if @xmath198 as @xmath7 . hence we proceed by showing that @xmath199 let @xmath200 be the ( discrete - time ) generator of @xmath201 and let @xmath202 be an arbitrary test function of the first component only . thus @xmath203 } { \mathbb{e } } [ h ( \tilde{\mathbf{x}}_1^d ) - h ( \tilde{\mathbf{x}}_0^d ) | \tilde{\mathbf{x}}_0^d = \mathbf{x}^d].\ ] ] the generator @xmath204 of the ( limiting ) one - dimensional diffusion @xmath125 for an arbitrary test function @xmath202 is given by @xmath205 for all @xmath206 $ ] at least for all @xmath207 , where @xmath208 is defined in ( [ eqmain1a ] ) below . first note that the diffusion defined by ( [ eqmain1 ] ) is regular ; see @xcite , page 366 . 
therefore by @xcite , chapter 8 , corollary 1.2 , it is sufficient to restrict attention to functions @xmath209 ) \cap c^2 ( ( 0,1 ) ) \cap \mathcal{d}^\ast , gh \in\hat{c } ( [ 0,1 ] ) \},\ ] ] where @xmath210 is the set of twice differentiable functions upon @xmath211 , @xmath212 $ ] is the set of bounded continuous functions upon @xmath61 $ ] and @xmath213 is obtained by setting @xmath214 @xmath215 in @xcite , page 367 , ( 1.11 ) and is given by @xmath216 let @xmath217 and @xmath218 . then @xmath219 combined with @xmath220 implies that @xmath221 . it then follows from @xmath222 being bounded on @xmath61 $ ] and @xmath223)$ ] that @xmath224 . these observations will play a key role in appendix [ secgen ] . now ( [ eqn112 ] ) is proved using @xcite , chapter 4 , corollary 8.7 , by showing that there exists a sequence of sets @xmath225 such that for any @xmath117 , @xmath226 } \ { \mathbf{x}_j^d \notin\tilde{f}_d \ } \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}\ ] ] and @xmath227 let the sets @xmath228 and @xmath229 be such that @xmath230 and @xmath231 } \ { \hat{\mathbf{x}}_j^d \notin f_d \ } | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-3 } \biggr\},\ ] ] where @xmath232 , @xmath233 , @xmath234 and @xmath235 are defined below . recall that @xmath236 , the total number of components of @xmath87 in @xmath237 . we term @xmath238 the rejection region , in that , for any component in @xmath238 , there is positive probability of proposing a move outside the hypercube with such moves automatically being rejected . 
let @xmath239}^{[d^\delta ] } \bigl\ { \mathbf{x}^d ; |b_d^{k^{3/4 } } ( \mathbf{x}^d ) - { \mathbb{e } } [ b_d^{k^{3/4 } } ( \mathbf{x}_0^d ) ] | \leq\sqrt{k } \bigr\ } , \\ \label{eqn11f3 } f_d^3 & = & \bigl\ { \mathbf{x}^d ; \sup_{[d^\beta]\leq k_d \leq[d^\delta ] } \sup_{0 \leq r \leq l } | \lambda_d ( \mathbf{x}^d;r;k_d ) - \lambda(r)| \leq d^{-\gamma } \bigr\ } , \\ \label{eqn11f4 } f_d^4 & = & \biggl\ { \mathbf{x}^d ; \biggl| \frac{1}{d } \sum_{j=1}^d g^\prime(x_j)^2 - { \mathbb{e}}_f [ g^\prime(x_1)^2 ] \biggr| < d^{-{1}/{8 } } \biggr\},\end{aligned}\ ] ] where @xmath240 $ ] and @xmath241 . in appendix [ secsets ] , we prove ( [ eqn114 ] ) for the sets @xmath229 given in ( [ emainx1 ] ) . note that ( [ eqn114 ] ) follows immediately from theorem [ lem321 ] , ( [ eqss46 ] ) since @xmath173 . an outline of the roles played by each @xmath242 @xmath243 is given below . for @xmath244 ( @xmath245 ) the total number of components in ( _ close to _ ) the rejection region are controlled . for @xmath246 after @xmath247 iterations the total number and position of the points @xmath248 in @xmath238 are approximately from the stationary distribution of @xmath249 . finally , for @xmath250 , @xmath251 $ ] ; this is the key requirement for the sets @xmath99 given in @xcite , cf . @xcite , page 114 , @xmath252 . the proof of ( [ eqn115 ] ) splits into two parts and exploits the pseudo - rwm process . let @xmath253 ; \frac{1}{[d^\delta ] } \sum_{j=0}^{k-1 } m_j ( j_d ( \hat{\mathbf{x}}_j^d ) ) \leq1 \biggr\ } \big/[d^\delta],\hspace*{-25pt}\ ] ] the proportion of accepted moves in the first @xmath113 $ ] iterations , where the sum is set equal to zero if vacuous . then @xmath254}^d = \hat{\mathbf{x}}_{[p_d d^\delta]}^d$ ] and @xmath255 } { \mathbb{e}}\bigl [ h \bigl(\hat{\mathbf{x}}_{[p_d d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d\bigr].\ ] ] in appendix [ secpd ] , we show that for all @xmath256 , @xmath257 as @xmath7 . 
consequently , it is useful to introduce @xmath258 @xmath259 which is defined for fixed @xmath260 as @xmath261 } { \mathbb{e}}\bigl [ \bigl(h\bigl(\hat{\mathbf{x}}^d_{[\pi d^\delta]}\bigr ) - h ( \hat{\mathbf{x}}^d_0)\bigr ) | \hat{\mathbf{x}}^d_0 = \mathbf{x}^d \bigr ] \nonumber\\ & = & \frac{d^2}{[d^\delta ] } \sum_{j=0}^{[\pi d^\delta-1 ] } { \mathbb{e } } [ h ( { \hat{\mathbf{x}}}_{j+1}^d ) - h ( { \hat{\mathbf{x}}}_j^d ) | { \hat{\mathbf{x}}}_0^d = \mathbf{x}^d ] \\ & = & \frac{1}{[d^\delta ] } \sum_{j=0}^{[\pi d^\delta-1 ] } { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_j^d ) | { \hat{\mathbf{x}}}_0^d = \mathbf{x}^d ] , \nonumber\end{aligned}\ ] ] where @xmath262.\ ] ] finally in appendix [ secgen ] , we prove in lemma [ lem45 ] that @xmath263 the triangle inequality is then utilized to prove ( [ eqn115 ] ) in lemma [ lem45 ] using ( [ eqn117 ] ) and @xmath264 as @xmath7 . it should be noted that in appendix [ secgen ] , we assume that @xmath265 > 0 $ ] , in particular in lemma [ lem40 ] . in appendixes [ secsets ] and [ secpd ] we make no such assumption . however , @xmath265=0 $ ] corresponds to @xmath266 ( uniform distribution ) , and proving lemma [ lem45 ] in this case follows similar but simpler arguments to those given in appendix [ secgen ] . a key difference between the diffusion limits for continuous and discontinuous i.i.d . product densities is the dependence of the speed measure @xmath39 upon @xmath40 . for continuous ( suitably differentiable ) @xmath40 , @xmath39 depends upon @xmath38 $ ] , which is a measure of the `` roughness '' of @xmath40 . for discontinuous densities of the form ( [ eq1b ] ) , @xmath267 depends upon @xmath268 , the ( mean of the ) limit of the density at the boundaries ( discontinuities ) . discussion of the role of the density @xmath40 in the behavior of the rwm algorithm is given in section [ secextsim ] . the most important consequence of theorem [ main ] is the following result . [ mc1 ] let @xmath269 . 
then @xmath270 \rightarrow a(l ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] @xmath39 is maximized by @xmath271 with @xmath272 clearly , if @xmath273 is known , @xmath274 can be calculated explicitly . however , where mcmc is used , @xmath273 will often only be known up to the constant of proportionality . this is where corollary [ mc1 ] has major practical implications , in that , to maximize the speed of the limiting diffusion , and hence the efficiency of the rwm algorithm , it is sufficient to monitor the average acceptance rate and to choose @xmath47 such that the average acceptance rate is approximately @xmath3 . therefore there is no need to explicitly calculate or estimate the constant of proportionality . in this section , we discuss the extent to which the conclusions of theorem [ main ] extend beyond @xmath60 being an i.i.d . product density upon the @xmath0-dimensional hypercube and @xmath275 . first we present two extensions of theorem [ main ] . the second extension , theorem [ thmprop ] , is an important practical result concerning lower - dimensional updating schemes . suppose that @xmath273 is nonzero on the positive half - line . that is , @xmath276 and @xmath277 otherwise . [ thmhalf ] fix @xmath172 . for all @xmath126 , let @xmath173 , given by ( [ eqtri1 ] ) , with @xmath283 bounded on @xmath284 . then , as @xmath7 , @xmath174 in the skorokhod topology on @xmath175 , where @xmath176 satisfies the ( reflected ) langevin sde on @xmath279 @xmath280 with @xmath178 , @xmath281 and @xmath282 . the proof of the theorem is virtually identical to the proof of theorem [ main ] , and so the details are omitted . note that we have assumed that @xmath283 is bounded on @xmath284 . this assumption is almost certainly stronger than necessary , with @xmath283 being lipschitz and/or satisfying certain moment conditions probably sufficient ; cf . @xcite .
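the tuning recipe of corollary [ mc1 ] , namely adjusting the proposal scaling until the observed acceptance rate hits a target value , can be sketched as follows . this is a hedged illustration : the target rate of 0.25 below is a placeholder rather than the optimal value derived in the paper , the robbins - monro style step size is ad hoc , and the uniform hypercube target is simply the most elementary discontinuous example .

```python
import numpy as np

rng = np.random.default_rng(3)

def tune_sigma(logpi, x0, target_rate, n_batches=60, batch=500):
    # crude stochastic-approximation tuning: after each batch, scale the
    # proposal width up or down according to the gap between the empirical
    # acceptance rate and the target rate
    x, sigma = x0.copy(), 0.5
    rate = 0.0
    for _ in range(n_batches):
        acc = 0
        for _ in range(batch):
            y = x + sigma * rng.uniform(-1.0, 1.0, size=x.shape)
            if np.log(rng.uniform()) < logpi(y) - logpi(x):
                x, acc = y, acc + 1
        rate = acc / batch
        sigma *= np.exp(0.5 * (rate - target_rate))
    return sigma, rate

# uniform density on the hypercube: an elementary discontinuous target
def logpi(x):
    return 0.0 if np.all((x >= 0.0) & (x <= 1.0)) else -np.inf

d = 10
sigma, rate = tune_sigma(logpi, np.full(d, 0.5), target_rate=0.25)
print(sigma, rate)  # the final acceptance rate sits near the target
```

the point of the corollary is exactly that this feedback loop needs no knowledge of the normalizing constant of the target .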
theorem [ thmhalf ] is unsurprising with the speed of the diffusion depending upon the number of components close to the discontinuity at 0 . [ mc2 ] let @xmath285 where @xmath40 satisfies ( [ eqtri1 ] ) . then @xmath286 \rightarrow\exp ( -f^\star l/4 ) \equiv a(l ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] @xmath39 is maximized by @xmath287 with @xmath272 therefore the conclusions are identical to corollary [ mc1 ] that in order to maximize the speed of the limiting diffusion it is sufficient to choose @xmath47 such that the average acceptance rate is @xmath3 . the second and more important extension of theorem [ main ] follows on from @xcite . in @xcite , the metropolis - within - gibbs algorithm was considered , where only a proportion @xmath288 @xmath289 of the components are updated at each iteration . for given @xmath126 , at each iteration @xmath290 of the components are chosen uniformly at random and new values for these components are proposed using random walk metropolis with proposal variance @xmath291 . the remaining @xmath292 components remain fixed at their current values . finally , it is assumed that @xmath293 as @xmath7 . the following result assumes that @xmath273 is nonzero on @xmath211 only . the extension to the positive half - line is trivial . [ thmprop ] fix @xmath294 and @xmath172 . for all @xmath126 , let @xmath295 be such that all of its components are distributed according to @xmath296 . then , as @xmath7 , @xmath174 in the skorokhod topology , where @xmath297 and @xmath125 satisfies the ( reflected ) langevin sde on @xmath298 $ ] @xmath299 where @xmath179 is standard brownian motion , @xmath300 and @xmath301 . let @xmath302 denote the average acceptance rate of the rwm algorithm in @xmath0 dimensions where a proportion @xmath303 of the components are updated at each iteration . let @xmath304 we then have the following result which mirrors corollaries [ mc1 ] and [ mc2 ] . [ mc3 ] let @xmath293 as @xmath7 . 
then @xmath305 for fixed @xmath294 , @xmath306 is maximized by @xmath307 and @xmath308 also @xmath309 corollary [ mc3 ] is of fundamental importance from a practical point of view , in that it shows that the optimal speed of the limiting diffusion is inversely proportional to @xmath288 . therefore the optimal action is to choose @xmath288 as close to 0 as possible . furthermore , we have shown that not only is full - dimensional rwm bad for discontinuous target densities but it is the worst algorithm of all the metropolis - within - gibbs rwm algorithms . we now go beyond i.i.d . product densities with a discontinuity at the boundary and @xmath55 . we consider general densities on the unit hypercube , discontinuities not at the boundary and @xmath53 . as mentioned in section [ secint ] , for i.i.d . product densities , the speed measure of the limiting one - dimensional diffusion , @xmath39 , is equal to the limit , as @xmath7 , of the esjd times @xmath0 . therefore we consider the esjd for the above - mentioned extensions as being indicative of the behavior of the limiting langevin diffusion . we also highlight an extra criterion which is likely to be required in moving from an esjd to a langevin diffusion limit . using the proof of theorem [ main ] , it is straightforward to show that @xmath310 \lim_{{d \rightarrow\infty } } { \mathbb{e}}\bigl [ 1 _ { \ { \mathbf{x}_0^d + \sigma_d \mathbf{z}_1^d \in[0,1]^d \ } } \bigr ] \\ & = & \frac{l^2}{3 } \lim_{{d \rightarrow\infty } } { \mathbb{e}}\biggl [ \biggl ( \frac{3}{4 } \biggr)^{b_d^l ( \mathbf{x}_0^d ) } \biggr].\nonumber\end{aligned}\ ] ] the first equality in ( [ eqextd1 ] ) can be proved using lemma [ lem33 ] , ( [ eq33b ] ) , where for @xmath311 , @xmath312 = 1/3 $ ] . the second equality in ( [ eqextd1 ] ) comes from the fact that for @xmath313 , @xmath314 and for a component @xmath315 uniformly distributed on @xmath316 or @xmath317 , @xmath318 ) = 3/4 $ ] . 
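the contrast in corollary [ mc3 ] can be glimpsed in a small simulation . the sketch below is only illustrative : it fixes a common proposal scaling and a uniform hypercube target , and it shows that updating a small proportion of the coordinates per iteration gives a far higher acceptance rate than full - dimensional rwm ; the speed comparison in the corollary proper is between optimally tuned versions of each algorithm .

```python
import numpy as np

rng = np.random.default_rng(4)

def mwg_step(x, sigma, c):
    # metropolis-within-gibbs: propose new values for a uniformly chosen
    # proportion c of the d coordinates, leaving the rest fixed
    d = x.size
    k = max(1, int(round(c * d)))
    idx = rng.choice(d, size=k, replace=False)
    y = x.copy()
    y[idx] += sigma * rng.uniform(-1.0, 1.0, size=k)
    # uniform target: accept iff the updated block stays inside [0, 1]
    if np.all((y[idx] >= 0.0) & (y[idx] <= 1.0)):
        return y, 1
    return x, 0

d, sigma, n = 100, 0.2, 20_000
rates = {}
for c in (1.0, 0.1):                      # full-dimensional rwm vs. c = 0.1
    x = rng.uniform(0.0, 1.0, size=d)     # start in stationarity
    acc = 0
    for _ in range(n):
        x, a = mwg_step(x, sigma, c)
        acc += a
    rates[c] = acc / n

print(rates)  # the c = 0.1 rate is far higher than the full-dimensional rate
```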
that is , the acceptance probability of a proposed move is dominated by whether or not the proposed move lies inside the @xmath0-dimensional unit hypercube . proposed moves inside the hypercube are accepted with probability @xmath319 for any @xmath320 ; see lemma [ lem35 ] . thus it is the number and behavior of the components at the boundary of the hypercube ( the discontinuity ) which determine the behavior of the rwm algorithm . this is also seen in theorems [ thmhalf ] and [ thmprop ] . first , we consider discontinuities not at the boundary . suppose that@xmath285 , where @xmath321 \ } } \exp(g ( x ) ) \qquad(x \in\mathbb{r})\ ] ] for some @xmath322 . further suppose that @xmath323 is continuous ( twice differentiable ) upon @xmath324 $ ] except at a countable number of points , @xmath325 , say , on @xmath326 . set @xmath327 and @xmath328 , with @xmath74 . for @xmath329 , let @xmath330 and @xmath331 , with @xmath332 and @xmath333 , where @xmath334 . then following @xcite , ( 4.23 ) , we can show that @xmath0 times the esjd @xmath335 \rightarrow\frac{l^2}{3 } { \mathbb{e}}\biggl [ 1 \wedge \prod_{j=0}^{k+1 } \biggl ( \frac{f_j^-}{f_j^+ } \biggr)^{y_j^+ - y_j^- } \biggr ] \qquad\mbox{as } { d \rightarrow\infty}.\hspace*{-25pt}\ ] ] thus the optimal scaling of @xmath20 is again of the form @xmath336 and the acceptance or rejection of a proposed move is determined by the components close to the discontinuities . furthermore , it is straightforward to show that for each @xmath337 , @xmath338 as @xmath339 , implying that the optimal choice of @xmath47 lies in @xmath340 . proving a langevin diffusion for the ( normalized ) first component of the rwm algorithm should be possible with appropriate local time terms at the discontinuities in @xmath40 . while ( [ eqem1 ] ) holds regardless of @xmath341 and @xmath342 for a diffusion limit we require that @xmath343 , that is , the density is strictly positive on @xmath326 . 
( if this is not the case , the rwm algorithm is reducible in the limit as @xmath7 . ) extensions to the case where either @xmath344 and/or @xmath345 are straightforward . second , we consider general densities which are zero outside the @xmath0-dimensional hypercube , @xmath346^d \ } } \exp(\mu_d ( \mathbf{x}^d))$ ] , where @xmath347 is assumed to be continuous and twice differentiable . let @xmath348 and assuming that @xmath349 we have that @xmath0 times the esjd satisfies @xmath350 \rightarrow\frac{l^2}{3 } \lim_{{d \rightarrow\infty } } { \mathbb{e}}\biggl [ \biggl ( \frac{3}{4 } \biggr)^{b_d^l ( \mathbf{x}_0^d ) } \biggr ] \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] note that ( [ eqextd2 ] ) is a weak condition and should be straightforward to check using a taylor series expansion of @xmath351 . for i.i.d . product densities , @xmath352 as @xmath7 . more generally , the limiting distribution of @xmath353 will determine the limit of the right - hand side of ( [ eqextd3 ] ) . in particular , so long as there exist @xmath119 and @xmath354 such that @xmath355 , the right - hand side of ( [ eqextd3 ] ) will be nonzero for @xmath172 . it is informative to consider what conditions upon @xmath60 are likely to be necessary for a diffusion limit , whether it be one - dimensional or infinite - dimensional as in @xcite . suppose that as @xmath7 . for a diffusion limit we will require moment conditions on @xmath356 , probably requiring that there exists @xmath357 such that @xmath358 < \infty$ ] . this will be required to control the probability of the rwm algorithm getting `` stuck '' in the corners of the hypercube . this highlights a key difference between studying the esjd and a diffusion limit . for the esjd , we want a positive probability that the total number of components at the boundary of the hypercube is finite in the limit as @xmath7 . 
for the diffusion limit , as seen with the construction of @xmath359 in theorem [ main ] , we require the probability of there being a large number of components @xmath360 at the boundary to be very small @xmath361 . third , suppose that @xmath53 . there are very bad starting points in the `` corners '' of the hypercube . for example , if @xmath362 , @xmath363 , which even for @xmath59 is less than @xmath364 . thus the rwm process is likely to be `` stuck '' at its starting point for a very long period of time . this is rather pathological , and a more interesting question is the situation when @xmath365 , where the components of @xmath366 are i.i.d . in particular , suppose that @xmath367 $ ] , so that @xmath11 is chosen uniformly at random over the hypercube . note that , if @xmath366 is the uniform distribution , @xmath368 with the right - hand side maximized by taking @xmath369 compared with @xmath370 for @xmath275 . we expect to see behavior similar to that in @xcite , in that the optimal @xmath20 ( in terms of the esjd ) will vary as the algorithm converges to the stationary distribution but will be of the form @xmath74 throughout . the rwm algorithm is unlikely to get `` stuck , '' and it is conjectured that for any @xmath117 and @xmath371 , @xmath372 } \ { b_d^l ( \mathbf{x}_t^d ) \geq \gamma\log d \ } \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] simulations with @xmath373 , @xmath374 and @xmath375 suggest that convergence occurs in @xmath90 . for convergence , we monitor the mean of @xmath26 for @xmath376 and the variance of @xmath26 for @xmath377 . the sets @xmath378 consist of the intersection of four sets @xmath379 @xmath380 . for @xmath381 , we define the sets @xmath379 one at a time and discuss the role each plays in the proof of theorem [ main ] . furthermore , we show that in stationarity it is highly unlikely that @xmath26 does not belong to @xmath378 .
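the `` corner '' pathology is easy to quantify in the uniform - target case : from the corner ( 0 , ... , 0 ) every coordinate of the increment must be nonnegative for the proposal to stay inside , so the acceptance probability is ( 1/2 )^d . the sketch below checks this for a small illustrative d ; for large d the probability is astronomically small , so the chain is stuck for a very long time .

```python
import numpy as np

rng = np.random.default_rng(5)

d = 8
sigma = 1.0 / d          # any sigma <= 1 gives the same (1/2)^d answer here
n = 400_000

# proposals from the corner x = (0, ..., 0) stay inside [0,1]^d iff
# every coordinate of the uniform increment on [-sigma, sigma] is nonnegative
z = rng.uniform(-1.0, 1.0, size=(n, d))
stay = np.all(sigma * z >= 0.0, axis=1).mean()

print(stay, 0.5 ** d)  # both are approximately 0.0039
```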
since we rely upon a homogenization argument , it is necessary to go further than the sets @xmath378 to the sets @xmath382 . in particular , if @xmath383 , then it is highly unlikely that any of @xmath384}^d$ ] do not belong to @xmath378 . the above statement is made precise in theorem [ lem321 ] below , where the constructions of @xmath228 and @xmath116 are drawn together . it is possible that all @xmath0 components of @xmath11 are in @xmath238 . however , this is highly unlikely and we show in lemma [ lem31 ] that with high probability , there are at most @xmath385 components in the rejection region . let @xmath386 . [ lem31 ] for any @xmath387 , @xmath388 fix @xmath389 . note that @xmath390 if and only if @xmath391 . however , @xmath392 with @xmath393 fix @xmath394 . by markov s inequality and using independence of the components of @xmath11 , @xmath395 & & \qquad \leq d^\kappa{\mathbb{e } } [ \exp(\rho b_d^l ( \mathbf{x}_0^d))]/ \exp ( \rho\gamma\log d ) \nonumber\\[-1pt ] & & \qquad= d^\kappa{\mathbb{e}}\bigl [ \exp\bigl(\rho1_{\ { x_{0,1}^d \in r_d^l \}}\bigr)\bigr]^d / d^{\rho\gamma } \\[-2pt ] & & \qquad = d^\kappa\biggl ( 1 + ( e^\rho-1 ) \int_0^{l / d } \ { f(x ) + f(1-x)\ } \,dx \biggr)^d \big/ d^{\rho\gamma } \nonumber\\[-2pt ] & & \qquad \leq d^{\kappa- \rho\gamma } \exp\biggl ( ( e^\rho-1)d \int_0^{l / d } \ { f(x ) + f ( 1-x)\ } \,dx \biggr).\nonumber\end{aligned}\ ] ] the lemma follows since ( [ eq31a ] ) implies that the right - hand side of ( [ eq31b ] ) converges to 0 as @xmath7 . for @xmath244 , it follows from ( [ eq21ay ] ) that @xmath396 this is a useful lower bound for the acceptance probability and as a result the random walk metropolis algorithm does not get `` stuck '' at values of @xmath397 . 
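the exponential - markov ( chernoff ) bound used in the proof of lemma [ lem31 ] can be checked numerically for a binomial count of boundary components . the constants below are illustrative , and the success probability 2l/d stands in for the integral of the density over the two boundary strips .

```python
import math

d = 1000
l = 2.0
p = 2 * l / d                    # probability a coordinate lies in a boundary strip
gamma = 3.0
thresh = gamma * math.log(d)     # threshold gamma * log d, about 20.7

# exact tail P(B >= thresh) for B ~ Binomial(d, p)
k0 = math.ceil(thresh)
tail = sum(math.comb(d, k) * p**k * (1 - p) ** (d - k) for k in range(k0, d + 1))

# chernoff bound E[exp(rho * B)] / exp(rho * thresh), minimized over a grid
# of rho > 0; E[exp(rho * B)] = (1 + (exp(rho) - 1) * p)^d for a binomial
best = min(
    (1 + (math.exp(rho) - 1) * p) ** d / math.exp(rho * thresh)
    for rho in (0.1 * j for j in range(1, 41))
)

print(tail, best)  # the bound dominates the exact tail, and both are tiny
```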
to assist with the homogenizing arguments , we define @xmath398 by @xmath399 } \hat{\mathbf{x}}_j^d \notin f_d^1 | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-3 } \biggr\}.\ ] ] that is , by starting in @xmath400 it is highly unlikely that the pseudo - rwm algorithm leaves @xmath232 in @xmath113 $ ] iterations . to study @xmath400 , and later @xmath122 , we require the following lemmas . [ lem31a ] for a random variable @xmath401 , suppose that there exist @xmath402 such that @xmath403 and for all @xmath404 , @xmath405 . then @xmath406 first note that @xmath407\\[-8pt]\\ & = & { \mathbb{p}}(x \in a | x \in d^c , x \in b ) { \mathbb{p}}(x \notin d | x \in b ) . \ ] ] the lemma then follows from ( [ eq31e ] ) using ( [ eq31c ] ) and @xmath408 . [ lem34 ] suppose that a sequence of sets @xmath409 is such that there exists @xmath387 such that @xmath410 fix @xmath411 and let @xmath412 } \ { \hat{\mathbf{x}}_i^d \notin f_d^\star\cap f_d^1 \ } | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-\varepsilon } \biggr\}.\ ] ] then @xmath413 since @xmath414 , @xmath415 } \ { \mathbf{x}_i^d \notin f_d^\star\cap f_d^1 \ } \biggr ) \leq d^{2 + \delta+ \gamma } { \mathbb{p}}(\mathbf{x}_0^d \notin f_d^\ast\cap f_d^1).\ ] ] therefore for all sufficiently large @xmath0 , @xmath416 by bayes's theorem , @xmath417 .
therefore taking @xmath418 } \ { \mathbf{x}_i^d \notin f_d^\ast\cap f_d^1 \}$ ] and @xmath419 , it follows from ( [ eq34e ] ) and ( [ eq34ea ] ) that @xmath420 } \!\ { \mathbf{x}_i^d \notin f_d^\star\cap f_d^1\ } | \mathbf{x}_0^d \in f_d^\star\cap f_d^1 \!\biggr ) \leq\frac{d^{2 + \delta+ \gamma } { \mathbb{p}}(\mathbf{x}_0^d \notin f_d^\star\cap f_d^1 ) } { 1/2}.\hspace*{-40pt}\ ] ] let @xmath421 } \ { \mathbf{x}_i^d \notin f_d^\star\cap f_d^1 \ } | \mathbf{x}_0^d = \mathbf{x}^d \biggr ) \leq d^{-\varepsilon } \biggr\}.\ ] ] it follows from lemmas [ lem31 ] and [ lem31a ] that @xmath422 since @xmath423 as @xmath7 , it follows from ( [ eq34 g ] ) that @xmath424 for @xmath126 and @xmath425 let @xmath426 be independent and identically distributed bernoulli random variables with @xmath427 where @xmath428 . it is straightforward using hoeffding s inequality to show that @xmath429 } \theta_i^d < d^\delta\biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] now @xmath430 and @xmath431 can be constructed upon a common probability space such that if @xmath432 and @xmath433 , @xmath434 . for @xmath435 , consider @xmath436 , if @xmath437 and @xmath438 , a coupling exists such that there exists @xmath439 such that @xmath440 . exploiting the above coupling , @xmath441 } \ { \mathbf{x}_j^d \in f_d^\star\cap f_d^1 \}$ ] and @xmath442 } \theta_i^d \geq d^\delta$ ] together imply that @xmath443 } \ { \hat{\mathbf{x}}_j^d \in f_d^\star\cap f_d^1\}$ ] . thus @xmath444 } \theta_i^d < d^\delta\biggr),\ ] ] and ( [ eq34c ] ) follows from ( [ eq34h ] ) , ( [ eq34j ] ) and ( [ eq34k ] ) . as noted in section [ secalg ] , we follow @xcite by considering the behavior of the random walk metropolis algorithm over steps of size @xmath113 $ ] iterations . 
we find that a single component moves only a small distance in @xmath113 $ ] iterations , while over @xmath113 $ ] iterations the acceptance probability , which is dominated by the number and position of components in @xmath238 , `` forgets '' its starting value . moreover , we show that approximately @xmath445 $ ] of the proposed moves are accepted . however , we need to control the number of components which are _ close to _ the rejection region ( @xmath233 ) and the distribution of the position of the components in the rejection region after @xmath189 $ ] iterations ( @xmath234 ) , where @xmath446 . for any @xmath447 , let @xmath448 | \leq\sqrt{k } \bigr\}\ ] ] and let @xmath449}^{[d^\delta ] } \hat{f}_d^2 ( k).\ ] ] before studying @xmath233 , we state a simple , useful result concerning the central moments of a sequence of binomial random variables . [ lemz32 ] let @xmath450 . suppose that @xmath451 and @xmath452 as @xmath7 ; then for any @xmath453 , @xmath454)^{2m}\bigr]/(dp_d)^m \rightarrow\prod_{j=1}^m ( 2j-1 ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] [ lem32 ] for any @xmath387 and sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq [ d^\delta]$ ] , @xmath455 consequently , for any @xmath387 , @xmath456 as @xmath7 . fix @xmath387 . by stationarity and markov s inequality , for all , @xmath457 | \geq\sqrt{k_d}\bigr ) \nonumber\\[-8pt]\\[-8pt ] & \leq & \frac{d^\kappa}{k_d^m } { \mathbb{e}}\bigl [ \bigl ( b_d^{k_d^{3/4 } } ( \mathbf{x}^d_0 ) - { \mathbb{e } } [ b_d^{k_d^{3/4 } } ( \mathbf{x}^d_0)]\bigr)^{2 m } \bigr].\nonumber\end{aligned}\ ] ] however , @xmath458 , so by lemma [ lemz32 ] for any @xmath453 , for all sufficiently large @xmath0 , @xmath459\bigr)^{2 m } \bigr ] \leq k_m k_d^{3m/4},\ ] ] where @xmath460 . since @xmath461 $ ] , the right - hand side of ( [ eq32a ] ) converges to 0 as @xmath7 by taking @xmath462 , proving ( [ eq32c ] ) . 
note that @xmath463}^{[d^\delta ] } { \mathbb{p}}\bigl(\mathbf{x}_0^d \notin \hat{f}_d^2 ( k)\bigr).\ ] ] the right - hand side of ( [ eq32b ] ) converges to 0 as @xmath7 since ( [ eq32c ] ) holds with @xmath464 replaced by @xmath465 . before considering @xmath234 , the distribution of the position of the components in the rejection region after @xmath189 $ ] iterations , we introduce a simple random walk on the hypercube ( rwh ) . the biggest problem in analyzing the rwm or pseudo - rwm algorithm is the dependence between the components . however , the dependence is weak , and whether or not a proposed move is accepted is dominated by whether the proposed move lies inside or outside the hypercube . therefore we couple the rwm algorithm to the simpler rwh algorithm . for @xmath126 , define the rwh algorithm as follows . let @xmath466 denote the position of the rwh algorithm after @xmath467 iterations . then @xmath468^d$ , \cr \mathbf{w}_k^d , & \quad otherwise.}\ ] ] that is , the rwh algorithm simply accepts all proposed moves which remain inside the hypercube and rejects all proposed moves outside the hypercube . define the pseudo - rwh algorithm in the obvious fashion , with @xmath469 denoting the position of the pseudo - rwh algorithm at iteration @xmath467 . then for @xmath470 , @xmath471 , where @xmath472 $ ] . for our purposes it will suffice to consider the coupling of the pseudo - rwm and pseudo - rwh algorithms over @xmath113 $ ] iterations and to study how the pseudo - rwh algorithm evolves over @xmath113 $ ] iterations . note that the rwh algorithm coincides with the rwm algorithm for a uniform target density over the @xmath0-dimensional cube , so in this case the coupling is exact . the components of the pseudo - rwh algorithm behave independently . for @xmath473 , let @xmath474 and for @xmath475 , let @xmath476 . then @xmath477 is the probability that a proposed move from @xmath87 is accepted in the rwh algorithm .
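the rwh dynamics just described are simple to state in code . the sketch below assumes uniform proposal increments and illustrative values of d and sigma_d ; every proposal that stays inside the hypercube is accepted , so the algorithm coincides with rwm for a uniform target .

```python
import numpy as np

rng = np.random.default_rng(7)

def rwh_step(w, sigma):
    # accept any proposal that stays inside [0,1]^d, reject any that leaves
    prop = w + sigma * rng.uniform(-1.0, 1.0, size=w.size)
    if np.all((prop >= 0.0) & (prop <= 1.0)):
        return prop
    return w

d = 30
sigma = 2.0 / d
w = rng.uniform(0.0, 1.0, size=d)
w0 = w.copy()
for _ in range(5_000):
    w = rwh_step(w, sigma)

print(w.min(), w.max())  # the walk never leaves the hypercube
```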
[ lem33 ] for any @xmath478 and @xmath136^d$ ] , there exists a coupling such that @xmath479 let @xmath480 $ ] ; then we can couple @xmath73 and @xmath481 using @xmath482 and @xmath483 as follows . let @xmath484^d$ , \cr \mathbf{x}^d , & \quad otherwise , } \\ \mathbf{x}_1^d & = & \cases { \mathbf{x}^d + \sigma_d \mathbf{z}_1^d , & \quad if $ \mathbf{x}^d + \sigma_d \mathbf{z}_1^d \in[0,1]^d$\vspace*{2pt}\cr & \quad and $ \displaystyle u \leq1 \wedge \exp\biggl(\sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr)$ , \vspace*{2pt}\cr \mathbf{x}^d , & \quad otherwise.}\end{aligned}\ ] ] therefore , @xmath485 if @xmath486^d$ ] and @xmath487 . thus @xmath488^d,\nonumber\\ & & \hspace*{59pt } u > 1 \wedge\exp\biggl(\sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr ) \biggr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad= d^\alpha{\mathbb{e}}\biggl [ \prod_{j=1}^d 1 _ { \ { 0 < x_j + \sigma_d z_{1,j } < 1 \ } } \nonumber\\ & & \hspace*{73pt}{}\times\biggl\{1 - 1 \wedge\exp\biggl ( \sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr ) \biggr\ } \biggr ] \nonumber\\ & & \qquad\leq d^\alpha{\mathbb{e}}\biggl [ \biggl| \sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr| \biggr],\nonumber\end{aligned}\ ] ] since for all @xmath489 , @xmath490 . by taylor s theorem , for @xmath491 , there exists @xmath492 lying between 0 and @xmath493 such that @xmath494 since @xmath323 is continuously twice differentiable on @xmath211 , there exists @xmath495 such that @xmath496 since the components of @xmath482 are independent , by jensen s inequality , ( [ eq33d ] ) and @xmath497 \leq2 { \mathbb{e}}[x^2 ] + 2 c^2 $ ] , for any random variable @xmath401 and constant @xmath288 , we have that @xmath498 and the lemma is proved . [ lem35 ] fix @xmath499 . 
for any @xmath136^d$ ] , there exists a coupling such that @xmath500 } \{\mathbf{x}_j^d \neq\mathbf{w}_j^d \}| \mathbf{x}_0^d \equiv\mathbf{w}_0^d = \mathbf{x}^d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] moreover , if @xmath501 and @xmath502 , there exists a coupling such that @xmath503 } \{\hat{\mathbf{x}}_j^d \neq \hat{\mathbf{w}}_j^d \ } | \mathbf{x}_0^d \equiv\mathbf{w}_0^d = \mathbf{x}^d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] for @xmath504 and @xmath505 let @xmath506 let @xmath507 $ ] and let @xmath508 . note that the movements of the components of the pseudo - rwh algorithm are independent . the next stage in the proof is to show that , if @xmath509 is started in @xmath234 , then after @xmath247 iterations the pseudo - rwm algorithm has forgotten its starting value in terms of the total number and position of the components in @xmath238 ( the rejection region ) . moreover , the total number and position of the components in @xmath238 after @xmath247 iterations of the pseudo - rwm algorithm are approximately distributed according to the stationary distribution of @xmath510 . before defining and studying @xmath511 , we require the following lemma and associated corollary concerning the distribution of the components in the rejection region after @xmath247 steps . [ lem38a ] let @xmath512 be any sequence of positive integers satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] . for any sequence of @xmath513 such that @xmath514 , @xmath515 also for all @xmath516 , @xmath517 fix @xmath518 and set @xmath519 . to prove ( [ eq38aa ] ) and ( [ eq38ab ] ) we couple the components of @xmath520 to a simple reflected random walk process @xmath521 . set @xmath522 for some @xmath523 . let @xmath524 be i.i.d. according to @xmath525 $ ] . for @xmath526 , set @xmath527 with reflection at the boundaries 0 and 1 so that @xmath528 . for @xmath473 , let @xmath529 .
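The reflected random walk just defined can be simulated with a fold-into-[0,1] map. The sketch below is my own illustration (a centred uniform increment stands in for the paper's increment distribution; names are hypothetical):

```python
import random

def reflect01(x):
    """Fold x into [0, 1] by reflection at the boundaries 0 and 1
    (the period-2 triangular map), so repeated over- and undershoots
    are handled in one step."""
    x = x % 2.0
    return 2.0 - x if x > 1.0 else x

def reflected_walk(x0, steps, half_width, rng):
    """Simple random walk on [0, 1] with reflection at both boundaries."""
    x = x0
    for _ in range(steps):
        x = reflect01(x + rng.uniform(-half_width, half_width))
    return x

assert abs(reflect01(1.3) - 0.7) < 1e-12   # overshoot past 1 is folded back
assert abs(reflect01(-0.2) - 0.2) < 1e-12  # undershoot past 0 is folded back
x = reflected_walk(0.5, 1000, 0.1, random.Random(3))
assert 0.0 <= x <= 1.0
```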
consider @xmath530 with identical arguments applying for the other components of @xmath520 . since @xmath531 as @xmath7 , we assume that @xmath0 is such that @xmath532 . then @xmath533 for @xmath534 we can couple @xmath535 and @xmath530 such that @xmath536 for @xmath537 , if @xmath538 , then set @xmath539 . now if @xmath540 @xmath541 , @xmath542 and @xmath543 can be coupled such that , if @xmath538 , then @xmath544 @xmath545 . furthermore , for @xmath546 @xmath547 , the above coupling can be extended to give , if @xmath548 and @xmath549 , then @xmath550 @xmath551 . since in @xmath247 iterations either process can move at most a distance @xmath552 , ( [ eq38ad ] ) follows from the above coupling . without loss of generality , we assume that @xmath553 [ symmetry arguments apply for @xmath554 ] . by the reflection principle , @xmath555\\[-8pt ] & = & { \mathbb{p}}\biggl ( - 1 < \frac{x}{\sigma_d } + \sum_{i=1}^{k_d } \tilde{z}_i < 1 \biggr).\nonumber\end{aligned}\ ] ] by the berry - esseen theorem , there exists a positive constant , @xmath556 say , such that for all @xmath557 , @xmath558 where @xmath559 denotes the c.d.f. of a standard normal . therefore it follows from ( [ eq38ae ] ) and ( [ eq38af ] ) that there exists a positive constant , @xmath560 say , such that for all @xmath561 , @xmath562 by hoeffding's inequality , for any @xmath563 , @xmath564\\[-8pt ] & = & 2 \exp\bigl(- \varepsilon^2 \sqrt{k_d}/2\bigr).\nonumber\end{aligned}\ ] ] hence for @xmath565 , by taking @xmath566 in ( [ eq38ah ] ) , we have that @xmath567 furthermore , note that for @xmath568 , @xmath569 . then ( [ eq38ab ] ) follows immediately from ( [ eq38ad ] ) and the above bounds for @xmath570 since @xmath571 as @xmath7 .
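The Hoeffding step above is a tail bound for a sum of bounded, mean-zero increments. As a hedged numeric check (toy parameters of my own choosing, with increments uniform on [-c, c]), the two-sided bound reads P(|S_n| >= t) <= 2 exp(-t^2 / (2 n c^2)):

```python
import math
import random

def hoeffding_two_sided(n, t, c):
    """Hoeffding bound for S_n = X_1 + ... + X_n with i.i.d. mean-zero
    X_i in [-c, c]: P(|S_n| >= t) <= 2 exp(-t^2 / (2 n c^2))."""
    return 2.0 * math.exp(-t * t / (2.0 * n * c * c))

rng = random.Random(0)
n, c, t, trials = 400, 1.0, 40.0, 2000
exceed = sum(
    abs(sum(rng.uniform(-c, c) for _ in range(n))) >= t
    for _ in range(trials)
)
empirical = exceed / trials
assert empirical <= hoeffding_two_sided(n, t, c)  # the bound holds (comfortably here)
```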
finally , for @xmath518 , it follows from ( [ eq38ad ] ) , ( [ eq38af ] ) and ( [ eq38ag ] ) that there exists @xmath572 such that @xmath573\\[-8pt ] & \leq & d^{2 \gamma } \biggl\ { k_3 k_d^{3/4 } \biggl ( \frac{k_2}{\sqrt{k_d } } \biggr)^2 + 2 d \exp\biggl(- \frac{\sqrt{k_d}}{8 l^2 } \biggr ) \biggr\}\nonumber\end{aligned}\ ] ] with the right - hand side of ( [ eq38j ] ) converging to 0 as @xmath7 . [ lem37 ] for any @xmath574 , any sequence @xmath575 satisfying and any sequence of positive integers @xmath576 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] , there exists @xmath495 , such that for all @xmath126 , @xmath577 \leq k d^{- ( 1 + \beta m/8)}.\ ] ] fix @xmath574 . note that @xmath578 & \leq & { \mathbb{e } } [ q^d ( x_{0,1}^d;l ; k_d)^m ] \nonumber\\ & = & \int_0 ^ 1 q^d ( x ; l ; k_d)^m f ( x ) \,dx \nonumber\\[-8pt]\\[-8pt ] & = & \int_{r_d^{k_d^{3/4 } } } q^d ( x ; l ; k_d)^m f ( x ) \,dx\nonumber\\ & & { } + \int_{(r_d^{k_d^{3/4 } } ) ^c } q^d ( x ; l ; k_d)^m f ( x ) \,dx.\nonumber\end{aligned}\ ] ] the two terms on the right - hand side of ( [ eq37aa ] ) are bounded using ( [ eq38ag ] ) and ( [ eq38ai ] ) , respectively . thus it follows from the proof of lemma [ lem38a ] that there exist constants @xmath579 such that , for all @xmath126 , @xmath580 & \leq & \int_{r_d^{k_d^{3/4 } } } \biggl ( \frac{k_1}{\sqrt{k_d } } \biggr)^m f ( x ) \,dx\nonumber\\ & & { } + \int_{(r_d^{k_d^{3/4 } } ) ^c } \biggl\ { 2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr ) \biggr\}^m f ( x ) \,dx \nonumber\\ & \leq & { \mathbb{p } } ( x_{0,1}^d \in r_d^{k_d^{3/4 } } ) \biggl ( \frac{k_1}{\sqrt{k_d } } \biggr)^m\\ & & { } + { \mathbb{p } } ( x_{0,1}^d \notin r_d^{k_d^{3/4 } } ) \times2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr ) \nonumber\\ & \leq & k_2 \frac{k_d^{3/4}}{d } k_d^{-m/2 } + 2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr).\nonumber\end{aligned}\ ] ] the corollary follows from ( [ eq37b ] ) since @xmath574 and @xmath581 $ ] . 
we are now in position to define @xmath582 . for any @xmath583 and @xmath584 , let @xmath585 where @xmath586 let @xmath587 \leq k_d \leq[d^\delta ] } \sup_{0 \leq r \leq l } | \lambda_d ( \mathbf{x}^d ; r ; k_d ) - \lambda ( r ) | < d^{-\gamma } \bigr\}.\ ] ] we study @xmath588 as a prelude to analyzing @xmath582 where @xmath589 and @xmath247 are defined in lemma [ lem36 ] below . [ lem36 ] for any sequence @xmath590 satisfying @xmath591 , any sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] and @xmath387 , @xmath592 by the triangle inequality , @xmath593 | > d^{-\gamma}/16\bigr ) \\ & & \qquad\quad { } + d^\kappa{\mathbb{p}}\bigl ( | { \mathbb{e } } [ \lambda_d ( \mathbf{x}_0^d ; r_d ; k_d ) ] - \lambda(r_d)| > d^{-\gamma}/16\bigr).\nonumber\end{aligned}\ ] ] in turn we show that the two terms on the right - hand side of ( [ eq36b ] ) converge to 0 as @xmath7 . by markov s inequality , we have that for any @xmath453 , @xmath594 | > d^{-\gamma}/16\bigr ) \nonumber\\ & & \qquad\leq16^{m } d^{\kappa+m \gamma } { \mathbb{e}}\biggl [ \biggl ( \sum_{j = 1}^d \ { q^d ( x_{0,j};r_d ; k_d ) - { \mathbb{e } } [ q^d ( x_{0,j};r_d ; k_d ) ] \ } \biggr)^m \biggr ] \nonumber\\[-8pt]\\[-8pt ] & & \qquad= 16^m d^{\kappa+m \gamma } \sum_{i_1 = 1}^d \cdots\sum_{i_m = 1}^d { \mathbb{e}}\biggl [ \prod_{j=1}^m \ { q^d ( x_{0,i_j};r_d ; k_d)\nonumber\\ & & \hspace*{167pt } { } - { \mathbb{e } } [ q^d ( x_{0,i_j};r_d ; k_d ) ] \ } \biggr].\nonumber\end{aligned}\ ] ] since the components of @xmath11 are independent and identically distributed , we have for any @xmath595 , there exists @xmath596 and @xmath597 with @xmath598 such that @xmath599 \ } \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad = \prod_{j=1}^j { \mathbb{e}}\bigl [ \ { q^d ( x_{0,1};r_d ; k_d ) - { \mathbb{e } } [ q^d ( x_{0,1};r_d ; k_d ) ] \}^{l_j } \bigr].\nonumber\end{aligned}\ ] ] note that if any @xmath600 , then the right - hand side of ( [ eq36d ] ) is equal to 0 . 
by corollary [ lem37 ] , if @xmath601 , there exists @xmath556 such that the right - hand side of ( [ eq36d ] ) is less than or equal to @xmath602 . furthermore , there exists @xmath560 such that for any and @xmath603 , there are at most @xmath604 configurations of @xmath605 such that for @xmath606 , @xmath607 of the components are the same . therefore there exists @xmath495 such that @xmath608 \ } \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad \leq k d^{-m \beta/8}.\nonumber\end{aligned}\ ] ] taking @xmath609 , it follows from ( [ eq36e ] ) that the right - hand side of ( [ eq36c ] ) converges to 0 as @xmath7 . the lemma follows by showing that for all sufficiently large @xmath0 , @xmath610 - \lambda(r_d)| \leq d^{-\gamma}/16.\ ] ] note that @xmath611 & = & d { \mathbb{e } } [ q^d ( x_{0,1};r;k_d ) ] \nonumber\\ & = & d \int_0^{k_d^{3/4}/d } q^d ( x;r_d ; k_d ) f ( x ) \,dx\nonumber\\[-8pt]\\[-8pt ] & & { } + d \int_{k_d^{3/4}/d}^{1-k_d^{3/4}/d } q^d ( x;r_d ; k_d ) f ( x ) \,dx\nonumber\\ & & { } + d \int_{1-k_d^{3/4}/d}^1 q^d ( x;r_d ; k_d ) f ( x ) \,dx.\nonumber\end{aligned}\ ] ] by ( [ eq38ai ] ) , the second integral on the right - hand side of ( [ eq36 g ] ) is bounded above by @xmath612 as @xmath7 . let @xmath613 . 
then by taylor s theorem , for @xmath614 , @xmath615 thus @xmath616\\[-8pt ] & & \qquad\leq d \times f_\star\frac{k_d^{3/4}}{d } \times \int_0^{k_d^{3/4}/d } q^d ( x;r_d ; k_d ) \,dx.\nonumber\end{aligned}\ ] ] similarly , we have that @xmath617\\[-8pt ] & & \qquad\leq d \times f_\star\frac{k_d^{3/4}}{d } \times \int_{1-k_d^{3/4}/d}^1 q^d ( x;r_d ; k_d ) \,dx.\nonumber\end{aligned}\ ] ] by symmetry , @xmath618 , so @xmath619 - 2 f^\ast d \int_0 ^ 1 q^d ( x;r_d;k_d ) \,dx \biggr| \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\hspace*{-35pt}\ ] ] since @xmath620 , using lemma [ lem38a ] , ( [ eq38ab ] ) , we have that , for all sufficiently large @xmath0 , @xmath621\\[-8pt ] & & \qquad\quad { } + d \int_{\sigma_d}^{1-\sigma_d } \biggl\ { \frac{1 } { \int_0 ^ 1 \omega_d ( y ) \,dy } - 1 \biggr\ } q^d ( x;r_d;k_d ) \,dx \nonumber\\ & & \qquad\leq4 d^{1 + \gamma } \sigma_d d^{-2 \gamma } + d^{1 + \gamma } \int_0 ^ 1 \frac{2 \sigma_d}{\int_0 ^ 1 \omega_d ( y ) \,dy } q^d ( x;r_d;k_d ) \,dx.\nonumber\end{aligned}\ ] ] let @xmath622 be defined as in lemma [ lem38a ] . note that @xmath623 $ ] is the stationary distribution of a reflected random walk on @xmath211 . therefore for any @xmath447 , @xmath624 therefore , it follows from lemma [ lem38a ] , ( [ eq38ad ] ) , that @xmath625 hence the right - hand side of ( [ eq36k ] ) converges to 0 as @xmath7 . note that the stationary distribution of a single component of the pseudo - rwh algorithm has p.d.f . @xmath626 @xmath627 . therefore @xmath628\\[-8pt ] & = & \frac{r_d}{2 } \biggl(1 + \frac{r_d}{2l } \biggr ) \bigg/ \biggl(1 - \frac{l}{2d } \biggr).\nonumber\end{aligned}\ ] ] finally , combining ( [ eq36j ] ) , ( [ eq36k ] ) and ( [ eq36n ] ) , we have that ( [ eq36f ] ) holds and the lemma is proved . [ lema311 ] for any @xmath387 , @xmath629 fix @xmath387 . fix a sequence of positive integers @xmath576 such that @xmath189 \leq k_d \leq[d^\delta]$ ] . fix @xmath630 and let @xmath631 d^{- \theta},l \}$ ] . 
thus the elements of @xmath632 are separated by a distance of at most @xmath633 . for any @xmath634 and @xmath126 , there exist @xmath635 such that @xmath636 with @xmath637 . by the triangle inequality , @xmath638\\[-8pt ] & & \qquad\quad { } + \lambda(\hat{r}_d)- \lambda(\tilde{r}_d ) \nonumber\\ & & \qquad\leq| \lambda_d ( \mathbf{x}_0^d ; \hat{r}_d ; k_d ) - { \mathbb{e } } [ \lambda_d ( \mathbf{x}_0^d ; \hat{r}_d ; k_d ) ] |\nonumber\\ & & \qquad\quad { } + 2 |\lambda_d ( \mathbf{x}_0^d ; \tilde{r}_d ; k_d ) - { \mathbb{e } } [ \lambda_d ( \mathbf{x}_0^d ; \tilde{r}_d ; k_d ) ] | \nonumber\\ & & \qquad\quad { } + 2 |\lambda(\hat{r}_d)- \lambda(\tilde{r}_d)|.\nonumber\end{aligned}\ ] ] by lemma [ lem36 ] , for any sequence @xmath639 satisfying @xmath640 , @xmath641 hence @xmath642 for all sufficiently large @xmath0 , @xmath643 therefore it follows from ( [ eqss4 ] ) , ( [ eqss5 ] ) and ( [ eqss6 ] ) that @xmath644 since ( [ eqss7 ] ) holds for any sequence @xmath188 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] , the lemma follows since @xmath645 \leq k \leq[d^\delta ] } \sup_{0 \leq r \leq l } | \lambda_d ( \mathbf{x}_0^d;r ; k ) - \lambda(r ) | > d^{-\gamma } \bigr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq d^ { \kappa } \sum_{k=[d^\beta]}^{[d^\delta ] } { \mathbb{p}}\bigl ( \sup_{0 \leq r \leq l } | \lambda_d ( \mathbf{x}_0^d;r ; k ) - \lambda(r ) | > d^{-\gamma } \bigr).\nonumber\end{aligned}\ ] ] finally , we consider @xmath646 \biggr| < d^{-{1}/{8 } } \biggr\}.\ ] ] the sets @xmath647 mirror the sets @xmath648 in @xcite and are used when considering @xmath649 and @xmath258 but play no role in analyzing @xmath650 . [ lem320 ] for any @xmath387 , @xmath651 let @xmath652 and fix @xmath387 . 
then by hoeffding s inequality , @xmath653 \biggr| > d^{7/8 } \biggr ) \nonumber\\[-8pt]\\[-8pt ] & \leq & d^\kappa\times2 \exp\biggl ( - \frac{2 d^{7/4}}{d ( g^\ast)^4 } \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] finally we are in position to consider @xmath228 and @xmath654 . recall that , for @xmath126 , @xmath655 and @xmath656 } \hat{\mathbf{x}}_j^d \notin f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-3 } \biggr\}.\ ] ] combining lemmas [ lem31 ] , [ lem32 ] , [ lema311 ] and [ lem320 ] , we have the following theorem . [ lem321 ] for any @xmath387 , @xmath657 hence , by lemma [ lem34 ] , for any @xmath387 , @xmath658 also using the couplings outlined above , we have that @xmath659 } \ { \hat{\mathbf{w}}_j^d \notin f_d \ } |\hat{\mathbf{w}}_0^d \in \tilde{f}_d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] we show that for any sequence @xmath513 such that @xmath660 , @xmath661 the key result is lemma [ lem311 ] which states that after @xmath247 iterations , the configuration of the components in the rejection region @xmath238 resemble the configuration of the points of a poisson point process with rate @xmath586 on the interval @xmath662 $ ] . for any @xmath663 and @xmath664 , let @xmath665 with @xmath666 let @xmath667 where the components of @xmath668 are independent poisson random variables with @xmath669 and @xmath670 [ lem311 ] for any @xmath663 , any sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] and @xmath671 , @xmath672 fix @xmath663 and @xmath671 . let @xmath673 where for @xmath664 , @xmath674 are independent poisson random variables with means @xmath675 the lemma is proved by showing that @xmath676 by @xcite , theorem 1 , @xmath677 by lemma [ lem38a ] , ( [ eq38aa ] ) the right - hand side of ( [ eq311c ] ) converges to 0 as @xmath7 . 
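The Poisson approximation driving lemma [lem311] is of Le Cam type: a sum of many independent rare Bernoulli indicators is close in total variation to a Poisson variable, with the discrepancy vanishing as the success probabilities shrink. A minimal numeric sketch with equal success probabilities p = lam/n (the paper's indicators are merely independent, not identically distributed, but the effect is the same; Le Cam's bound gives TV <= sum of p_i^2 = lam^2 / n):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def tv_binom_vs_poisson(n, lam, kmax=60):
    """Total variation distance between Binomial(n, lam/n) and Poisson(lam),
    summed over k = 0..kmax (the tail beyond is negligible for small lam)."""
    p = lam / n
    return 0.5 * sum(
        abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(kmax + 1)
    )

d_small, d_large = tv_binom_vs_poisson(100, 2.0), tv_binom_vs_poisson(1000, 2.0)
assert d_large < d_small < 2.0 ** 2 / 100  # Le Cam: TV <= lam^2 / n, and shrinks in n
```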
for the second term on the right - hand side of ( [ eq311b ] ) , it suffices to show that @xmath678 ( for discrete random variables convergence in distribution and convergence in total variation distance are equivalent ; see @xcite , page 254 . ) the components of @xmath679 and @xmath668 are independent , and therefore it is sufficient to show that , for all @xmath664 , @xmath680 for all @xmath664 , ( [ eq311d ] ) holds , if @xmath681 therefore the lemma follows from ( [ eq311e ] ) since @xmath189 \leq k_d \leq[d^\delta]$ ] and @xmath682 . [ see ( [ eqn11f3 ] ) for the construction of @xmath582 . ] lemma [ lem311 ] is the key result stating that if the pseudo - rwh process is started from the set @xmath378 , then after @xmath189 $ ] iterations the distribution of the components in the rejection region are approximately given by @xmath668 . we show that studying the pseudo - rwh algorithm over @xmath113 $ ] iterations suffices in analyzing @xmath683 } \sum_{j=0}^{[\pi d^\delta-1 ] } m_j ( j_d ( \hat{\mathbf{x}}_j^d))$ ] . note that @xmath650 satisfies @xmath684).\ ] ] let @xmath685 } \sum_{j=0}^{[\pi d^\delta -1 ] } m_j ( \omega_d ( \hat{\mathbf{w}}_j^d))$ ] . before establishing a coupling between @xmath686 and @xmath687 , we give a simple coupling for geometric random variables . [ lem310 ] suppose that @xmath688 and that @xmath401 and @xmath689 are independent geometric random variables with success probabilities @xmath156 and @xmath690 , respectively , that is , @xmath691 and @xmath692 . let @xmath693 be a bernoulli random variable with @xmath694 and @xmath695 . 
then if @xmath693 , @xmath401 , @xmath689 and @xmath92 are mutually independent , @xmath696 therefore there exists a coupling of @xmath401 and @xmath689 such that @xmath697 [ lem312 ] for any @xmath698 and @xmath256 , there exists a coupling of @xmath686 and @xmath699 such that @xmath700 for @xmath660 , by corollary [ lem35 ] , we have that @xmath701 } \ { \hat{\mathbf{x}}_j^d \neq \hat{\mathbf{w}}_j^d \ } | \hat{\mathbf{x}}_0^d \equiv \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] suppose that for @xmath702 $ ] , @xmath703 . then using lemma [ lem310 ] , ( [ eq310b ] ) , @xmath704 and @xmath705 can be coupled such that @xmath706 since @xmath707 , @xmath708 , the right - hand side of ( [ eq312d ] ) is less than @xmath709 . note that @xmath710 so by lemma [ lem33 ] for any @xmath711 , @xmath712 times the right - hand side of ( [ eq312d ] ) converges to 0 as @xmath7 . taking @xmath713 such that @xmath714 , @xmath715 } { \mathbb{p}}\bigl(m_j ( j_d ( \hat{\mathbf{x}}_j^d ) ) \neq m_j ( \omega_d ( \hat{\mathbf{w}}_j^d))| \hat{\mathbf{w}}_j^d = \hat{\mathbf{x}}_j^d \in f_d^1\bigr)\nonumber\\[-8pt]\\[-8pt ] & & \qquad\rightarrow0 \qquad \mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] the lemma then follows from ( [ eq312b ] ) and ( [ eq312e ] ) . we show that it suffices to study @xmath716 } \sum_{j=0}^{[\pi d^\delta-1 ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1}$ ] . in other words , replace the mean of the geometric random variables @xmath717 , @xmath718}^d ) ) \}$ ] by the mean of the means of the geometric random variables . [ lem313 ] for any @xmath698 and for any sequence of @xmath719 such that , @xmath720 if @xmath721 as @xmath7 . let @xmath722 } \{\hat{\mathbf{w}}_j^d \notin f_d \}$ ] . then for any @xmath723 , @xmath724 as @xmath7 . 
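Lemma [lem310] couples two geometric variables via an auxiliary Bernoulli variable. An alternative standard construction, sketched below with toy probabilities of my own (it is not the paper's exact construction), runs both variables on one shared uniform sequence; with success probabilities p <= p' the pair then agrees with probability exactly p / p'.

```python
import random

def coupled_geometrics(p_small, p_big, rng):
    """Run two geometric variables on a single shared uniform sequence:
    trial i counts as a success at rate p iff U_i < p.  Since every
    success for p_small is also one for p_big, the pair agrees exactly
    when the first U below p_big is also below p_small."""
    assert 0.0 < p_small <= p_big <= 1.0
    x = y = None
    i = 0
    while x is None:
        i += 1
        u = rng.random()
        if y is None and u < p_big:
            y = i
        if u < p_small:
            x = i
    return x, y  # x ~ Geometric(p_small), y ~ Geometric(p_big), x >= y

rng = random.Random(5)
trials = 20_000
agree = sum(x == y for x, y in (coupled_geometrics(0.4, 0.5, rng) for _ in range(trials)))
assert abs(agree / trials - 0.4 / 0.5) < 0.02  # agreement probability ~ p_small / p_big
```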
for any @xmath725 with @xmath726 , the characteristic function of @xmath687 conditional upon @xmath727 and @xmath519 is given by @xmath728 } { \mathbb{e}}\biggl [ \exp\biggl ( \frac{i \tau}{[d^\delta ] } m_j ( \omega_d ( \hat{\mathbf{w}}_j^d ) ) \biggr ) \big| a_d^c , \ { \hat{\mathbf{w}}^d \ } \biggr ] \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr ] \\ & & \qquad= { \mathbb{e}}\biggl [ \prod_{j=0}^{[\pi d^\delta-1 ] } \frac{\exp(i \tau/[d^\delta ] ) \omega_d ( \hat{\mathbf{w}}_j^d)}{1 - ( 1 - \omega_d ( \hat{\mathbf{w}}_j^d ) ) \exp(i \tau/[d^\delta ] ) } \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr].\nonumber\end{aligned}\ ] ] conditional upon @xmath727 , @xmath729 . hence , for all , @xmath730 ) \omega_d ( \hat{\mathbf{w}}_j^d)}{1 - ( 1 - \omega_d ( \hat{\mathbf{w}}_j^d ) ) \exp(i \tau/[d^\delta ] ) } = 1 + \frac{i \tau}{[d^\delta ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } + o \biggl ( \frac{1}{[d^\delta ] } \biggr).\ ] ] thus @xmath731 $ ] has the same limit as @xmath7 ( should one exist ) as @xmath732 } \biggl ( 1 + \frac{i \tau}{[d^\delta ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } \biggr ) \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr],\ ] ] which in turn has the same limit as @xmath7 as @xmath733 } \exp \biggl ( \frac{i \tau}{[d^\delta ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } \biggr ) \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad = { \mathbb{e}}[\exp(i \tau \tilde{t}_d ( \pi ) ) | a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d].\nonumber\end{aligned}\ ] ] the lemma follows since @xmath734 as @xmath7 . we shall show that @xmath735 as @xmath7 using chebyshev s inequality in lemma [ lem318 ] . we require preliminary results concerning@xmath736 , @xmath737 with the key results given in lemma [ lem317 ] . first , however , we introduce useful upper and lower bounds for @xmath738 which allow us to exploit lemma [ lem311 ] and prove uniform integrability @xmath739 . 
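The replacement of each geometric holding time by its conditional mean, as in lemma [lem313], is a law-of-large-numbers effect: the normalised sum of many independent geometric variables concentrates on the corresponding average of their means 1/p. A toy check (the alternating success probabilities are my own stand-in for the chain's acceptance probabilities):

```python
import random

def geometric(p, rng):
    """Number of Bernoulli(p) trials up to and including the first success,
    so that the mean is 1/p."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(0)
n = 20_000
probs = [0.3 if j % 2 == 0 else 0.7 for j in range(n)]  # stand-in acceptance probs
scaled_sum = sum(geometric(p, rng) for p in probs) / n
mean_of_means = sum(1.0 / p for p in probs) / n         # = (1/0.3 + 1/0.7) / 2
assert abs(scaled_sum - mean_of_means) < 0.1            # the two averages agree in the limit
```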
for @xmath663 , @xmath664 and @xmath740 , let @xmath741 with @xmath742 . for @xmath663 and @xmath743 , let @xmath744 then for all @xmath475 , @xmath745 [ lem314 ] for any @xmath453 , any sequence of @xmath513 such that @xmath660 and any sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] , @xmath746 \rightarrow\exp\bigl ( ( 2^m -1 ) \lambda(l ) \bigr ) \qquad\mbox{as } { d \rightarrow\infty}.\hspace*{-20pt}\ ] ] note that @xmath747 . then since the @xmath748 are independent bernoulli random variables , @xmath749 & = & \prod_{j=1}^d { \mathbb{e}}\bigl [ ( 2^m)^{\chi_j ( x_j;l;k_d ) } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \bigr ] \nonumber\\[-8pt]\\[-8pt ] & = & \prod_{j=1}^d \bigl\ { \bigl(1 - q^d ( x_j;l;k_d)\bigr ) + 2^m q^d ( x_j;l;k_d ) \bigr\}.\nonumber\end{aligned}\ ] ] by lemma [ lem38a ] , ( [ eq38aa ] ) , for @xmath660 , @xmath750 as @xmath7 , so the right - hand side of ( [ eq314b ] ) has the same limit as @xmath7 as @xmath751 the lemma follows since for any @xmath660 , @xmath752 as @xmath7 . [ lem315]fix @xmath753 . for any sequence @xmath719 such that @xmath671 , and any sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq [ d^\delta]$ ] , we have that @xmath754 & \rightarrow&{\mathbb{e } } [ \check{\nu}_n ( \mathbf{s}_n)^m ] \qquad\mbox{as } { d \rightarrow\infty } , \\ { \mathbb{e } } [ \hat{\nu}_n ( \tilde{\mathbf{s}}_n^d ( \mathbf{x}^d ; k_d))^m ] & \rightarrow&{\mathbb{e } } [ \hat{\nu}_n^m ( \mathbf{s}_n)^m ] \qquad\mbox{as } { d \rightarrow\infty}.\end{aligned}\ ] ] by @xcite , theorem 29.2 , and lemma [ lem311 ] @xmath755 the lemma follows since ( [ eqnu3 ] ) and lemma [ lem314 ] ensure the uniform integrability of the left - hand sides of ( [ eq315a ] ) and ( [ eq315b ] ) . 
[ lem316 ] for any sequence @xmath756 such that @xmath671 and sequence of positive integers @xmath512 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] , @xmath757 \rightarrow\exp(f^\ast l/2 ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] for any @xmath660 and sequences of positive integers @xmath758 and @xmath512 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] and @xmath759 $ ] , @xmath760 { \stackrel{p}{\longrightarrow}}\exp(f^\ast l/2 ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] an immediate consequence of lemma [ lem315 ] is that @xmath761 , \lim_{{d \rightarrow\infty } } { \mathbb{e } } [ \hat{\nu}_n(\tilde{\mathbf{s}}_n^d ( \mathbf{x}^d ; k_d ) ) ] \rightarrow\exp(f^\ast l/2 ) \qquad\mbox{as } { n \rightarrow\infty},\ ] ] from which ( [ eq316a ] ) follows by ( [ eqnu3 ] ) . by theorem [ lem321 ] , ( [ eqss47 ] ) , @xmath762 as @xmath7 , so ( [ eq316b ] ) follows from ( [ eq316a ] ) . [ lem317 ] for any sequence @xmath756 such that @xmath660 and any sequences of positive integers @xmath763 and @xmath512 satisfying @xmath189 \leq i_d , k_d \leq[d^\delta]$ ] , @xmath764 and @xmath765 using ( [ eqnu3 ] ) , lemma [ lem314 ] and markov s inequality , it is straightforward to show that for any @xmath766 , there exists @xmath495 such that @xmath767 therefore it follows from lemma [ lem316 ] that , for any sequence @xmath513 such that @xmath660 , @xmath768 \nonumber\\[-2pt ] & & \hspace*{47pt}\qquad{}- { \mathbb{e } } [ \omega_d ( \hat{\mathbf{w}}_{j_d + k_d}^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d ] \ } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \\[-2pt ] & & \qquad { \stackrel{p}{\longrightarrow}}0\qquad \mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] the uniform integrability of the left - hand side of ( [ eq317d ] ) follows from ( [ eqnu3 ] ) and lemma [ lem314 ] . hence ( [ eq317a ] ) follows . 
it is straightforward to show that @xmath769 , { \mathbb{e } } [ \hat{\nu}_n ( \mathbf{s}_n)^2 ] \rightarrow \exp(f^\ast l ( 4 \log2 - 3/2))$ ] as @xmath770 . therefore from ( [ eqnu3 ] ) and lemma [ lem314 ] , we have that @xmath771 \rightarrow\exp\bigl(f^\ast l ( 4 \log2 - 3/2)\bigr ) \qquad\mbox{as } { d \rightarrow\infty}.\hspace*{-35pt}\ ] ] then ( [ eq317b ] ) follows immediately . we are now in position to prove lemma [ lem318 ] , which is the final step in proving that for any sequence @xmath756 such that @xmath660 , @xmath772 as @xmath7 . [ lem318 ] for any @xmath698 and any sequence @xmath719 such that , @xmath773 fix a sequence @xmath756 . let @xmath774 } \sum_{j=0}^{[d^\beta-1 ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1}$ ] and let @xmath775 } \sum_{j= [ d^\beta]}^{[\pi d^\delta-1 ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1}$ ] . thus @xmath776 . let @xmath777 } \ { \hat{\mathbf{w}}_j^d \notin f_d^1 \}$ ] . by theorem [ lem321 ] , ( [ eqss47 ] ) , @xmath778 as @xmath7 and conditional upon @xmath727 , @xmath779 d^\gamma}{[d^\delta]}$ ] . hence @xmath780 as @xmath7 . 
by lemma [ lem316 ] , ( [ eq316a ] ) , @xmath781 & = & \frac{1}{[d^\delta ] } \sum_{j=[d^\beta]}^{[\pi d^\delta -1 ] } { \mathbb{e } } [ \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d ] \nonumber\\[-9pt]\\[-9pt ] & \rightarrow & \pi\exp ( f^\ast l /2).\nonumber\vadjust{\goodbreak}\end{aligned}\ ] ] by chebyshev s inequality , for any @xmath563 , @xmath782 \bigr| > \varepsilon| \hat{\mathbf{w}}_0^d = \mathbf{x}^d \bigr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq\frac{1}{\varepsilon^2 [ d^\delta]^2 } \sum_{j=[d^\beta ] } ^{[\pi d^\delta-1 ] } \sum_{l=[d^\beta]}^{[\pi d^\delta-1 ] } \operatorname{cov } \bigl ( \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } , \omega_d ( \hat{\mathbf{w}}_l^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \bigr).\nonumber\end{aligned}\ ] ] since for all @xmath783 , @xmath784\\[-8pt ] & & \qquad\leq \operatorname{var } \bigl ( \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \bigr)^{{1}/{2 } } \operatorname{var } \bigl ( \omega_d ( \hat{\mathbf{w}}_l^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \bigr)^{{1}/{2}},\nonumber\end{aligned}\ ] ] it is straightforward to show , using lemma [ lem317 ] , that the right - hand side of ( [ eq318c ] ) converges to 0 as @xmath7 . thus @xmath785 as @xmath7 and the lemma follows immediately . [ lem319 ] for any sequence @xmath756 such that @xmath660 , @xmath786 for any @xmath698 , by lemmas [ lem312 ] , [ lem313 ] and [ lem318 ] , @xmath787 since @xmath650 satisfies @xmath788)$ ] , for any @xmath563 , @xmath789 for all sufficiently large @xmath0 . the lemma follows , since ( [ eq319b ] ) ensures that the right - hand side of ( [ eq319c ] ) converges to 0 as @xmath7 . from appendix [ secpd ] , we have that for any sequence @xmath513 , such that @xmath660 , latexmath:[$p_d as @xmath7 . 
therefore we proceed by showing that , for any @xmath260 , @xmath791 where @xmath792 } { \mathbb{e } } [ ( h(\hat{\mathbf{x}}^d_{[\pi d^\delta ] } ) - h ( \hat{\mathbf{x}}^d_0 ) ) @xmath793 equation ( [ eqn115 ] ) will then be proved using the triangle inequality . we analyze @xmath794 ) , before using ( [ eqmain6 ] ) to study @xmath795 . however , first we require some definitions and preliminary results . throughout we will utilize the following key facts noted in section [ secalg ] : @xmath796 and that @xmath797 , where @xmath798 and @xmath799 . we follow @xcite and @xcite in noting that , for any function @xmath800 which is a twice differentiable function on @xmath801 , the function @xmath802 is also twice differentiable , except at a countable number of points , with first derivative given lebesgue almost everywhere by the function @xmath803 the second derivative can similarly be obtained but will not be explicitly required for our calculations . for @xmath804 , let @xmath805 denote the probability of accepting a move in the rwm algorithm given that @xmath806 and let @xmath807\\[-8pt ] & & \hspace*{10.3pt}{}\times 1_{\{\sum_{j=2}^d ( g ( x_j + \sigma_d z_{1,j } ) - g ( x_j ) ) < 0 \ } } \prod_{j=2}^d 1 _ { \ { 0 < x_j + \sigma_d z_{1,j } < 1 \ } } \biggr].\nonumber\end{aligned}\ ] ] then for all @xmath87 , using taylor s theorem , @xmath808 therefore for @xmath809 , @xmath810 [ lem40 ] @xmath811 let @xmath812 $ ] and let @xmath813 $ ] , the probability a proposed move stays inside the unit cube given that the first component does not move . the proof of ( [ eq33b ] ) can be adapted to show that , for any @xmath478 , @xmath814 as @xmath7 . therefore since for @xmath671 , @xmath815 , ( [ eqlowerj ] ) , we have that @xmath816 let @xmath817 and let @xmath818 . 
since @xmath819 , we have that @xmath820 then using a taylor series expansion , there exists @xmath495 such that , for all @xmath671 , @xmath821 since @xmath822 are independent , and whether or not a proposed move from @xmath87 stays inside the hypercube depends only upon @xmath823 , @xmath824\\[-8pt ] & & \qquad \leq\tilde{\omega}_d^0 ( \mathbf{x}^d ) \leq\omega_d^0 ( \mathbf{x}^d ) { \mathbb{p}}\bigl(i_d ( \mathbf{x}^d ) < k \log d /d\bigr).\nonumber\end{aligned}\ ] ] for all @xmath671 , @xmath825 $ ] , so @xmath826)\qquad \mbox{as $ { d \rightarrow\infty}$}.\ ] ] therefore it follows that @xmath827 with the lemma following from ( [ eq40b ] ) and ( [ eq40e ] ) by the triangle inequality . [ lem41 ] for @xmath809 and @xmath671 , @xmath828 where @xmath98 as @xmath7 . for @xmath829 , @xmath830 for @xmath126 , fix @xmath671 and suppose that @xmath831 . then @xmath832 \nonumber\\[-4pt]\\[-12pt ] & = & \frac{d^2}{j_d ( \mathbf{x}^d ) } { \mathbb{e}}\biggl [ \bigl(h ( \mathbf{x}^d + \sigma_d \mathbf{z}^d ) - h ( \mathbf{x}^d)\bigr ) \biggl\ { 1 \wedge \frac{\pi_d ( \mathbf{x}^d + \sigma_d \mathbf{z}^d)}{\pi_d ( \mathbf{x}^d ) } \biggr\ } \biggr].\nonumber\end{aligned}\ ] ] the right - hand side of ( [ eq41ex ] ) is familiar in that it is the generator of the rwm - algorithm divided by the acceptance probability ; see , for example , @xcite , page 113 . 
first , note that @xmath833 using ( [ eqn4b ] ) , ( [ eqn4c ] ) and noting that @xmath834 , we have that @xmath835 \nonumber\\ & = & \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \sigma_d { \mathbb{e}}[z_1 ] h^\prime(x_1)\nonumber\\ & & { } + \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \frac{\sigma_d^2}{2 } { \mathbb{e}}[z_1 ^ 2 ] h^ { \prime\prime } ( x_1 ) \nonumber\\ & & { } + \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \frac{\sigma_d^2}{2 } { \mathbb{e}}[z_1 ^ 2 \{h^ { \prime\prime } ( x_1 + \psi_1^d ) - h''(x_1 ) \ } ] \\ & & { } + \frac{d^2 \tilde{j}_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \sigma_d^2 g^\prime(x_1 ) h^\prime(x_1 ) { \mathbb{e}}[z_1 ^ 2]\nonumber\\ & & { } + \frac{d^2 } { j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } o ( \sigma_d^3 ) . \nonumber\end{aligned}\ ] ] the first term on the right - hand side of ( [ eq41eb ] ) is 0 . since @xmath224 , by the continuous mapping theorem , @xmath836 as @xmath7 and then since @xmath837 is bounded the third term on the right - hand side of ( [ eq41eb ] ) converges to 0 as @xmath7 . for @xmath671 , @xmath838 , and so , the right - hand side of ( [ eq41eb ] ) equals @xmath839 where @xmath98 as @xmath7 . thus ( [ eq41fx ] ) is proved . the proof of ( [ eq41fa ] ) follows straightforwardly using taylor series expansions since @xmath840 . 
since @xmath841 , an immediate consequence of lemma [ lem41 ] is that , there exists @xmath842 such that @xmath843 [ lem42 ] for any sequence of positive integers @xmath188 satisfying @xmath189 \leq k_d \leq[d^\delta]$ ] , @xmath844 - \hat{g } h ( x_1 ) \bigr| \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\vspace*{-2pt}\ ] ] fix @xmath576 and note that @xmath845 \nonumber\\ & & \qquad= { \mathbb{p}}({\hat{\mathbf{x}}}_{k_d}^d \in f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ) { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d ] \\ & & \qquad\quad { } + { \mathbb{p}}({\hat{\mathbf{x}}}_{k_d}^d \notin f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ) { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \notin f_d ] .\nonumber\end{aligned}\ ] ] since @xmath846 , @xmath847 . therefore , for all @xmath848^d$ ] , @xmath849 . by ( [ emainx1 ] ) , @xmath850 as @xmath7 . thus the latter term on the right - hand side of ( [ eq42ab ] ) converges to 0 as @xmath7 . now @xmath851 \nonumber\\ & & \qquad= { \mathbb{p}}(\hat{x}_{k_d,1}^d \notin r_d^l | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d ) \nonumber\\ & & \qquad\quad{}\times{\mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l ] \\ & & \qquad\quad { } + { \mathbb{p}}(\hat{x}_{k_d,1}^d \in r_d^l | \hat{\mathbf { x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d ) \nonumber\\ & & \qquad\quad\hspace*{11pt}{}\times{\mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \in r_d^l ] .\nonumber\end{aligned}\ ] ] consider first the latter term on the right - hand side of ( [ eq42ac ] ) . 
by lemma [ lem41 ] , ( [ eq41fa ] ) , @xmath852 \leq\tfrac{3}{2 } l^2 h^\ast_2.\ ] ] note that @xmath853\\[-8pt ] & \leq & \frac{{\mathbb{p}}(\hat{x}_{k_d,1}^d \in r_d^l | \hat{\mathbf{x}}_0^d = \mathbf{x}^d)}{{\mathbb{p}}({\hat{\mathbf{x}}}_{k_d}^d \in f_d |\hat{\mathbf{x}}_0^d = \mathbf{x}^d)}.\nonumber\end{aligned}\ ] ] by ( [ emainx1 ] ) , for @xmath660 , @xmath854 as @xmath7 . use corollary [ lem35 ] and lemma [ lem38a ] to show that @xmath855 as @xmath7 . hence , the right - hand side of ( [ eq42ad ] ) converges to 0 as @xmath7 and consequently the latter term on the right - hand side of ( [ eq42ac ] ) converges to 0 as @xmath7 . it follows from the above arguments that @xmath856 also it follows from ( [ eqn4d ] ) that there exists @xmath495 such that @xmath857 \leq k.\ ] ] therefore , it is straightforward using ( [ eq42ab ] ) , ( [ eq42ac ] ) and the triangle inequality to show that @xmath858 \nonumber\\ & & \qquad\quad\hspace*{-8.6pt}{}- { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l ] \bigr|\\ & & \qquad \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] by lemma [ lem41 ] , ( [ eq41fx ] ) , there exists @xmath859 as @xmath7 , such that @xmath860 \bigr| \nonumber\\[-1pt ] & & \qquad\leq\frac{l^2}{3 } \sup_{0 \leq y \leq1 } | g^\prime(y ) h^\prime(y)| \nonumber\\[-8.5pt]\\[-8.5pt ] & & \qquad\quad{}\times\sup_{\mathbf{x}^d \in\tilde{f}_d } { \mathbb{e}}\biggl [ \biggl| \frac{\tilde{j}_d^0 ( { \hat{\mathbf{x}}}_{k_d}^d)}{j_d^0 ( { \hat{\mathbf{x}}}_{k_d}^d ) } - \frac{1}{2 } \biggr| \big| \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l \biggr ] + \varepsilon_d^1 \nonumber\\[-1pt ] & & \qquad\leq\frac{l^2}{3 } g^\ast h^\ast_1 \sup_{\mathbf{y}^d \in f_d } \biggl| \frac{\tilde{j}_d^0 ( \mathbf{y}^d)}{j_d^0 ( \mathbf{y}^d ) } - \frac{1}{2 } \biggr| + 
\varepsilon_d^1.\nonumber\end{aligned}\ ] ] by lemma [ lem40 ] , the right - hand side of ( [ eq42ah ] ) converges to 0 as @xmath7 . using the triangle inequality , the lemma follows by showing that @xmath861 - \hat{g } h ( x_1 ) \bigr|\nonumber\\[-8.5pt]\\[-8.5pt ] & & \qquad\rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] note that @xmath862 , and so , ( [ eq42ai ] ) follows since @xmath863 is continuous . we are in position to prove ( [ eq41e ] ) . [ lem43 ] for any @xmath260 , @xmath864 since ( [ eq43a ] ) trivially holds for @xmath865 , we assume that @xmath866 . for all sufficiently large @xmath0 , by the triangle inequality , @xmath867 & & \qquad= \biggl| \frac{1}{[d^\delta ] } \sum_{j=0}^{[\pi d^\delta-1 ] } { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \pi\hat{g } h ( x_1 ) \biggr| \nonumber\\[-1pt ] & & \qquad\leq\biggl| \frac{1}{[d^\delta ] } \sum_{j=0}^{[d^\beta ] -1 } { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] \biggr| \\[-1pt ] & & \qquad\quad { } + \frac{1}{[d^\delta ] } \sum_{j=[d^\beta]}^{[\pi d^\delta-1 ] } \bigl| { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \hat{g } h ( x_1 ) \bigr|\nonumber\\[-1pt ] & & \qquad\quad { } + \biggl ( \pi- \frac{[\pi d^\delta ] - [ d^\beta]}{[d^\delta ] } \biggr ) \hat{g } h ( x_1).\nonumber\end{aligned}\ ] ] since @xmath868 \nonumber\\ & & \qquad = { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_j^d \in f_d ] { \mathbb{p}}({\hat{\mathbf{x}}}_j^d \in f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ) \\ & & \qquad\quad { } + { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_j^d \notin f_d ] { \mathbb{p}}({\hat{\mathbf{x}}}_j^d \notin f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d),\nonumber\end{aligned}\ ] ] it is 
straightforward , following a similar argument to the proof of lemma [ lem42 ] , ( [ eq42af ] ) , to show that there exists @xmath869 such that , for all @xmath870 $ ] , @xmath871 \bigr| \leq\tilde{k}.\ ] ] therefore the first term on the right - hand side of ( [ eq43c ] ) is bounded by @xmath189 \tilde{k}/[d^\delta]$ ] . by lemma [ lem42 ] the supremum over @xmath660 of the second term on the right - hand side of ( [ eq43b ] ) converges to 0 as @xmath7 and the lemma follows . [ lem44 ] @xmath872 fix @xmath563 and let @xmath873 \varepsilon , 1\}$ ] . it follows from lemma [ lem43 ] that , for all sufficiently large @xmath0 , @xmath874 consider any @xmath260 . there exists @xmath875 such that @xmath876 . by the triangle inequality , @xmath877 again by the triangle inequality , @xmath878 } \sum_{j = [ \tilde{\pi } d^\delta]}^{[\pi d^\delta-1 ] } \sup_{\mathbf{x}^d \in\tilde{f}_d } \bigl|{\mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] \bigr|.\nonumber\end{aligned}\ ] ] since for all sufficiently large @xmath0 , @xmath879 - [ \tilde{\pi } d^\delta])/[d^\delta ] \leq2 \varepsilon$ ] , it follows from ( [ eq43d ] ) that the right - hand side of ( [ eq44d ] ) is bounded by @xmath880 , where @xmath881 is defined in lemma [ lem43 ] . let @xmath882 . note that since @xmath883 , we have that @xmath884 . therefore it follows from ( [ eq44c ] ) that , for all sufficiently large @xmath0 , @xmath885 since ( [ eq44e ] ) holds for all @xmath260 and @xmath563 , the lemma follows . finally we are in a position to prove ( [ eqn115 ] ) , and hence complete the proof of theorem [ main ] . [ lem45 ] @xmath886 note that @xmath649 is given by ( [ eqn113 ] ) and @xmath887 . 
therefore by the triangle inequality , @xmath888 } { \mathbb{e}}\bigl [ h \bigl(\hat{\mathbf{x}}_{[p_d d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \bigr ] - \exp(- l f^\ast/2 ) \hat{g } h ( x_1 ) \biggr| \nonumber\\ & & \qquad\leq\sup_{\mathbf{x}^d \in\tilde{f}_d } \biggl| { \mathbb{e}}\biggl [ \frac{d^2}{[d^\delta ] } \bigl ( h \bigl(\hat{\mathbf{x}}_{[p_d d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) \bigr ) - p_d \hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ] \biggr| \nonumber\\ & & \qquad\quad { } + \sup_{\mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d \hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp ( - l f^\ast/2 ) \hat{g } h ( x_1 ) \bigr| \nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq\sup_{0 \leq\pi\leq1 } \sup_{\mathbf{x}^d \in \tilde{f}_d } \biggl| { \mathbb{e}}\biggl [ \frac{d^2}{[d^\delta ] } \bigl ( h \bigl(\hat{\mathbf{x}}_{[\pi d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) \bigr ) - \pi\hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ] \biggr| \nonumber\\ & & \qquad\quad{}+ \sup_{\mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp(- l f^\ast/2 ) \bigr| \sup_{0 \leq y \leq1 } |\hat{g } h ( y)| \nonumber\\ & & \qquad\leq\sup_{0 \leq\pi\leq1 } \sup_{\mathbf{x}^d \in \tilde{f}_d } | \hat{g}_d^{\delta , \pi } h ( \mathbf{x}^d ) - \pi \hat{g } h ( x_1 ) | \nonumber\\ & & \qquad\quad { } + \sup _ { \mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp(- l f^\ast/2 ) \bigr| \sup_{0 \leq y \leq1 } |\hat{g } h ( y)|.\nonumber\end{aligned}\ ] ] by corollary [ lem44 ] , the first term on the right - hand side of ( [ eq45ba ] ) converges to 0 as @xmath7 . by theorem [ lem319 ] , for any sequence @xmath756 such that @xmath660 , $p_d \rightarrow\exp(- l f^\ast/2)$ as @xmath7 . 
hence the latter term on the right - hand side of ( [ eq45ba ] ) converges to 0 as @xmath7 , since @xmath883 implies that @xmath889 . we thank the anonymous referees for their helpful comments which have improved the presentation of the paper .
we consider the optimal scaling problem for high - dimensional random walk metropolis ( rwm ) algorithms where the target distribution has a discontinuous probability density function . almost all previous analysis has focused upon continuous target densities . the main result is a weak convergence result as the dimensionality @xmath0 of the target densities converges to @xmath1 . in particular , when the proposal variance is scaled by @xmath2 , the sequence of stochastic processes formed by the first component of each markov chain converges to an appropriate langevin diffusion process . therefore optimizing the efficiency of the rwm algorithm is equivalent to maximizing the speed of the limiting diffusion . this leads to an asymptotic optimal acceptance rate of @xmath3 @xmath4 under quite general conditions . the results have major practical implications for the implementation of rwm algorithms by highlighting the detrimental effect of choosing rwm algorithms over metropolis - within - gibbs algorithms .
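for readers unfamiliar with the algorithm under study , a minimal random walk metropolis sampler is sketched below . the gaussian product target and the smooth - target proposal scale 2.38 / sqrt(d) are illustrative assumptions only ; the discontinuous - density setting analyzed above instead scales the proposal variance by the inverse square of the dimension , with a different optimal acceptance rate .

```python
import numpy as np

def rwm(log_target, x0, sigma, n_steps, rng):
    """random walk metropolis with spherical gaussian proposals of scale sigma."""
    x = np.array(x0, dtype=float)
    lp = log_target(x)
    accepts = 0
    for _ in range(n_steps):
        y = x + sigma * rng.standard_normal(x.shape)
        lq = log_target(y)
        # metropolis accept/reject step
        if rng.uniform() < np.exp(min(0.0, lq - lp)):
            x, lp = y, lq
            accepts += 1
    return x, accepts / n_steps

# illustrative smooth target: product of d iid standard normals (an assumption,
# not the discontinuous density treated in the paper)
d = 50
log_target = lambda x: -0.5 * float(x @ x)
rng = np.random.default_rng(1)
x_end, rate = rwm(log_target, np.zeros(d), 2.38 / np.sqrt(d), 20000, rng)
```

monitoring `rate` while tuning `sigma` is the practical content of optimal - scaling results : one adjusts the proposal scale until the empirical acceptance rate sits near the theoretical optimum .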
dilute suspensions of the same nipa particles used in the tube packings were placed between glass coverslips such that the gap between coverslips was approximately the diameter of the nipa particles , creating a quasi-2d monolayer . videos of these particles diffusing in two dimensions were taken using a 100@xmath3 oil - immersion objective ( n.a . = 1.4 ) at five temperatures from 24 to 28 @xmath2c . particle centers were tracked using standard particle tracking routines [ 1 ] . the two - dimensional pair correlation function @xmath50 of the particle locations was then calculated from the particle tracks . at each temperature , the approximate diameter of the particles was taken to be the first value where @xmath51 , since , to first approximation , @xmath52 , and the effective diameter of a particle is often taken as the value of @xmath53 for which @xmath54 . since previous studies have observed a linear relationship between diameter and temperature for these particles in this temperature range [ 2 ] , we then take a linear fit of these data points to find a functional relationship between effective particle diameter and temperature . for the larger species used in this experiment , we find the relationship @xmath55m@xmath56m@xmath57 , and for the smaller species , we find @xmath58m@xmath59m@xmath57 . for videos of two - dimensional cross sections of particle packings in cylinders , the nearest neighbor particle spacing @xmath60 was calculated from axial particle spacings and theoretical helical packing values . the volume fraction ratios of the tube packings were then calculated as @xmath61 . ) and larger ( red @xmath62 ) nipa particles at different temperatures , with linear fits . we describe the predicted helical packings with a commonly used three - index notation , ( @xmath13 ) [ 3 ] . if we consider any single particle in such an ideal close - packed system of hard spheres , we notice that each particle has six nearest neighbors . 
we can thus think about such a packing as a two - dimensional triangular lattice wrapped around a cylinder . the indices then indicate the unit vectors in the unwrapped triangular lattice connecting any point to itself in the helical structure . alternatively , one notices that the three indices indicate the relative distances of a particle's nearest neighbors along the axis of the cylinder . for example , if any given particle's nearest neighbors are the second , third or fifth closest particles in the axial direction , it would exist in a ( 2,3,5 ) packing . after identifying particle centers in 3d using common particle tracking routines [ 1 ] , the nearest neighbors of each particle in an image are identified as those closer than the far end of the first maximum in the 3d pair correlation function @xmath50 ( see figure 3b ) . after an individual particle in the packing is selected , all other particles are given integer values based on the order of their axial distance from the selected particle in each direction . the integer values of the nearest neighbors are then recorded . this process is repeated for every particle in the structure . this creates histograms of the relative axial order of the neighboring particles . the peaks in these histograms then identify the integers ( @xmath13 ) describing the ideal packing ( see figure 3c ) . highlighted in red , with lorentzian peak fit in blue dashed line . vertical dotted line indicates nearest neighbor cutoff . ( c ) axial and azimuthal positions of tracked particles in a short axial section . selected particle position highlighted in green , with nearest neighbors highlighted in blue . relative axial distance from the selected particle listed to the side . note that the nearest neighbors have relative axial order of 2 , 3 and 5 . ( d ) histogram of relative axial order for 1st , 2nd and 3rd closest nearest neighbors in the axial direction . 
peaks at values of 2 , 3 and 5 indicate that this is a ( 2,3,5 ) packing .
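the neighbour - ranking procedure described above can be sketched as follows ( a minimal sketch ; the axial positions and the nearest - neighbour lists are assumed to come from the tracking and pair - correlation steps already described ) :

```python
import numpy as np
from collections import Counter

def packing_indices(z, neighbors):
    """infer the (l, m, n) helical packing indices from relative axial order.

    z         : axial coordinates of all particles
    neighbors : dict mapping a particle index to its nearest-neighbour indices
    for each particle and each axial direction, rank the other particles by
    axial distance, record the ranks at which its neighbours occur, and take
    the modal ranks of the 1st/2nd/3rd closest neighbours as (l, m, n).
    """
    z = np.asarray(z, dtype=float)
    hists = [Counter(), Counter(), Counter()]
    for i, nbrs in neighbors.items():
        for sign in (+1, -1):                    # treat each direction separately
            dz = sign * (z - z[i])
            side = np.nonzero(dz > 0)[0]         # particles on this side of i
            order = side[np.argsort(dz[side])]   # closest first
            rank = {j: r + 1 for r, j in enumerate(order)}
            ranks = sorted(rank[j] for j in nbrs if j in rank)
            for k, r in enumerate(ranks[:3]):
                hists[k][r] += 1
    return tuple(h.most_common(1)[0][0] for h in hists)
```

for an ideal packing the three histograms are sharply peaked , and the peak locations reproduce the ( @xmath13 ) indices directly .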
the average inter - particle spacing @xmath60 in a 3d track is determined from the 3d pair correlation function . since each structure has on the order of 100 particles , the resolution of said correlation function is fairly poor . thus , we determine @xmath60 by fitting a lorentzian peak , @xmath63 , to the first peak in @xmath50 ( see figure 3b ) . the center of this function , @xmath64 , gives us the value for @xmath60 . the value for structure diameter @xmath14 was taken as twice the average value of the radial positions of the particles in the structure . in one- and two - dimensional systems of particles , there can be no long range positional order ; however , there can be long range orientational order in two dimensions [ 4 ] . this fact is key to the kthny theory [ 5 - 7 ] of continuous melting in two dimensions . we point out here that the same arguments used by nelson and halperin in [ 6 ] , applied to a quasi-_one_-dimensional system , show that there is long range orientational order in such systems at low temperatures . the system we consider is a two - dimensional box which is infinite in the @xmath65 direction and periodic with length @xmath42 in the @xmath66 direction . note that such a system applies to the problem of spherical particles in a cylinder if we assume that at high densities , the outer layer of spheres in the cylinder behaves as if it were disks lying in a two - dimensional strip with periodic boundary conditions ; that is , we `` unwrap '' the spheres onto a plane and neglect the radial ( which become out - of - plane ) fluctuations . this maps @xmath0 defined in the quasi - one - dimensional system approximately onto the @xmath0 that was measured on the surface of the cylinder . we will calculate a few quantities in the low - temperature ( `` solid '' phase ) by using an isotropic lamé elasticity free energy . 
the following expression is appropriate for the elasticity of a triangular lattice with the given geometry : @xmath67\ ] ] here @xmath68 is the displacement from the ( zero - temperature , hexagonal ) ordered state . this free energy assumes that there exist nonzero elastic moduli @xmath69 and @xmath4 , a natural assumption we must make . to calculate the correlation function of orientational order , we first define @xmath70 , where @xmath71 measures the bond angles between nearest neighbor particles ( the factor of 6 arises from the triangular symmetry of the lattice ) . furthermore , we will use the fact that the angle @xmath71 can be shown to be @xmath72 . the fact that the theory is quadratic allows us to take two shortcuts . first , we can evaluate @xmath73 ( where @xmath74 ) as the exponential of an average . second , this average can be evaluated simply since the inverse of the elasticity dynamical matrix is known [ 6,8 ] to be @xmath75 . it may be worthwhile to point out that we can guess the result from `` counting powers of @xmath76 '' . we can see that translational order is destroyed in this system by estimating the scaling of the ( fourier - space ) correlation function of @xmath77 at low @xmath76 . to do this , we integrate @xmath78 in one dimension . but @xmath78 scales as @xmath79 , so the result @xmath80 scales like @xmath81 . this diverges at low @xmath76 , meaning that long - wavelength fluctuations drive positional correlations to zero . however , since @xmath71 is related to the _ derivative _ of @xmath77 , the ( fourier - space ) correlation function of @xmath71 will involve an extra factor of @xmath76 for each @xmath82 . we thus expect the two - point correlation function to scale as @xmath83 , which is finite at low @xmath76 , indicating long range order at infinity . 
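the power - counting argument above can be summarized symbolically ( a heuristic sketch in the notation of the text ; prefactors and numerical factors are suppressed ) :

```latex
% heuristic power counting for the quasi-1d strip (constants suppressed)
\[
\langle |\hat{u}(q_x)|^2 \rangle \sim \frac{k_B T}{\mu L\, q_x^{2}},
\qquad
\int \frac{\mathrm{d}q_x}{q_x^{2}} \to \infty \ \ (q_x \to 0)
\;\Rightarrow\; \text{positional correlations are destroyed},
\]
\[
\theta \sim \partial_x u
\;\Rightarrow\;
\langle |\hat{\theta}(q_x)|^2 \rangle
\sim q_x^{2}\,\langle |\hat{u}(q_x)|^2 \rangle \sim \text{const}
\;\Rightarrow\; \text{orientational correlations remain finite}.
\]
```

the extra factor of @xmath76 carried by each derivative is what cancels the infrared divergence for the orientational field but not for the displacement field .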
we now proceed with a more detailed calculation of the real - space orientational order correlation function : @xmath84\\ \nonumber & = \exp\left[-\frac{9k_bt}{2\pi l}\epsilon_{ij}\epsilon_{mn } \sum_{n =- n}^n\int_{-\lambda}^\lambda dq_x q_iq_m \left(1-e^{i(q_x x+2\pi n y / l)}\right)d_{jn}^{-1 } \right]\\ \nonumber & = \exp\left[-\frac{9k_bt}{2\pi\mu l } \sum_{n =- n}^n\int_{-\lambda}^\lambda dq_x \left(1-e^{i(q_x x+2\pi n y / l)}\right ) \right]\\ \nonumber & = \exp\left[-\frac{9k_bt}{2\pi\mu l } \sum_{n =- n}^n\left(2\lambda - e^{i2\pi n y / l}\frac{e^{i\lambda x}-e^{-i\lambda x}}{ix}\right ) \right]\\ & = \exp\left[-\frac{9k_bt}{2\pi\mu l } \left(2\lambda(2n+1)-\frac{\sin(2\pi(n+1/2)y / l)}{\sin(\pi y / l ) } \frac{2\sin(\lambda x)}{x}\right ) \right]\end{aligned}\ ] ] for large @xmath65 , this approaches the constant @xmath89 , and hence this quasi - one dimensional system has long range orientational order . this result explains our experimentally observed long range correlations in the orientational order parameter at high densities . we emphasize that ( 3 ) should not be fit to experimental or simulation data in the present form for the following reasons . first , the elastic field theory describes the sphere systems at large wavelengths . this approach is fine for the purposes of searching for long - range orientational order , but is inappropriate for the extraction of quantitative correlation functions . another issue is that the precise functional form of ( 3 ) arises from the cutoff scheme chosen . ideally , one would apply a physical theory which describes the higher wavelength modes and how the system responds to them ( in the case of spheres , part of this scheme could be a density functional - like theory ) , which would be more suitable for capturing the behavior of the correlation function over short distances than the hard cutoff that we used . [ 1 ] j. c. crocker and d. g. grier , j. colloid interface sci . * 179 * , 298 ( 1996).a . m. 
alsayed _ et al . _ , science * 309 * , 1207 ( 2005 ) ; y. han _ et al . _ , phys . rev . e * 77 * , 041406 ( 2008 ) ; y. han _ et al . _ , nature ( london ) * 456 * , 898 ( 2008 ) ; z. zhang _ et al . _ , nature ( london ) * 459 * , 230 ( 2009 ) ; p. yunker _ et al . _ , phys . rev . lett . * 103 * , 115701 ( 2009 ) . [ 3 ] r. o. erickson , science * 181 * , 705 ( 1973 ) ; w. f. harris and r. o. erickson , j. theor . biol . * 83 * , 215 ( 1980 ) . [ 4 ] n. d. mermin , phys . rev . * 176 * , 250 ( 1968 ) . [ 5 ] j. m. kosterlitz and d. j. thouless , j. phys . c * 6 * , 1181 ( 1973 ) . [ 6 ] d. r. nelson and b. i. halperin , phys . rev . b * 19 * , 2457 ( 1979 ) . [ 7 ] a. p. young , phys . rev . b * 19 * , 1855 ( 1979 ) . [ 8 ] p. chaikin and t. c. lubensky , _ principles of condensed matter physics _ ( cambridge university press , cambridge , england , 2006 ) .
the phase behavior of helical packings of thermoresponsive microspheres inside glass capillaries is studied as a function of volume fraction . stable packings with long - range orientational order appear to evolve abruptly to disordered states as particle volume fraction is reduced , consistent with recent hard sphere simulations . we quantify this transition using correlations and susceptibilities of the orientational order parameter @xmath0 . the emergence of coexisting metastable packings , as well as coexisting ordered and disordered states , is also observed . these findings support the notion of phase transition - like behavior in quasi-1d systems . the phenomenology of ordered phases and phase transformations in systems with low dimensionality is surprisingly rich . while dense three - dimensional ( 3d ) thermal packings can exhibit long - range order , for example , this trait is absent in purely one - dimensional systems [ 1 ] . complexities arise , however , when considering 3d systems confined primarily to one dimension ( 1d ) . investigation of order and phase behavior in quasi-1d thermal systems , therefore , holds potential to elucidate a variety of novel physical processes that have analogies with polymer folding [ 2 ] , formation of supermolecular fibers in gels [ 3 ] , and emergence of helical nanofilaments in achiral bent - core liquid crystals [ 4 ] . packings of soft colloidal spheres in cylinders provide a useful model experimental system to quantitatively investigate order and phase transformations in quasi-1d . at high densities , spheres in cylindrical confinement are predicted to form helical crystalline structures [ 5,6 ] . evidence for such packings has been found in foams [ 7,8 ] , biological microstructures [ 5 ] , colloids in micro - channels [ 9 ] and fullerenes in nanotubes [ 10 ] . however , research on these systems has been limited to static snapshots and athermal media . 
recent simulations suggest that transitions between different helical ordered states [ 11,12 ] and between quasi-1d ordered and disordered states [ 13 - 15 ] should exist in thermal systems , but such transitions have not been investigated experimentally . in this letter , we explore ordered and disordered structures in a quasi-1d thermal system of soft particles with adjustable volume fraction . in particular , we create helical packings of thermoresponsive colloid particles in glass microcapillaries , we show theoretically that phases with long - range orientational order can exist in quasi-1d , we demonstrate experimentally that such phases with long - range orientational order exist at volume fractions below maximal packing , and we study volume - fraction induced melting of these orientationally ordered phases into liquid phases . the orientational order parameters and susceptibilities that characterize these phases and this crossover are measured and analyzed . coexisting regions of ordered and disordered states and coexisting ordered domains with different pitch and chirality are observed at these crossover points . such coexistence effects suggest the presence of abrupt or discontinuous volume - fraction driven transitions in quasi-1d structures . interestingly , these orientationally ordered phases in quasi-1d share physical features with orientationally ordered phases observed [ 16,17 ] and predicted [ 18 ] in 2d . the experiments employed aqueous suspensions of rhodamine - labeled poly - n - isopropylacrylamide ( nipa ) microgel spheres ( polydispersity @xmath1 ) with diameters which decrease linearly and reversibly with increasing temperature [ 19 ] . the unique thermoresponsive characteristics of nipa microgel particles provide a means to explore the phase behavior [ 17,20 - 21 ] of soft spheres in quasi-1d as a function of volume fraction . 
borosilicate glass tubes ( mcmaster carr ) were heated and stretched to form microcapillaries with inner diameters comparable to nipa microsphere diameter . an aqueous suspension of nipa microspheres was then drawn into the capillaries ; subsequently , the capillary ends were sealed with epoxy and attached to a glass microscope slide . samples were annealed at 28@xmath2c to permit uneven packings to re - arrange at low volume fraction , thereby creating stable high - volume fraction packings when returned to lower temperatures . the samples were imaged under a 100@xmath3 oil - immersion objective ( n.a . = 1.4 ) using spinning - disk confocal microscopy ( qlc-100 , visitech international ) . resulting images depict 75@xmath4 m long segments of densely packed regions which span at least several hundred microns . an objective heater attached to the microscope ( bioptechs ) permitted control of the sample temperature to within @xmath50.1@xmath2c . standard particle tracking routines [ 22 ] were employed to identify particle positions from three - dimensional image stacks and 2d cross - sections . ) , ( 1,3,4 ) ( purple @xmath6 ) , ( 2,3,5 ) ( green @xmath7 ) , ( 1,4,5 ) ( cyan @xmath8 ) , ( 0,5,5 ) ( red @xmath9 ) , ( 3,3,6 ) ( dark green @xmath10 ) . filled symbols indicate structures with observable brownian motion . dashed vertical lines indicate theoretical @xmath11 values for predicted structures . inset : cartoon of axial cross - section of a ( 336 ) structure with @xmath12 . ( b ) confocal fluorescence images of helical nipa packings at high volume fraction . scale bar = 10@xmath4 m .
at high densities , we observe crystalline helical packings with varying pitch and chirality dependent upon particle- and tube - diameter . the observation of large ordered domains is consistent with the tendency for these nearly monodisperse particles to form uniform crystals in 2d [ 17 ] and 3d [ 20 ] . particles in hard - sphere helical packings predicted for cylinders [ 5 ] have six nearest neighbors whose relative order along the tube axis corresponds to a characteristic set of three integers ( @xmath13 ) . this notation for distinct helical crystalline structures is commonly used in phyllotaxy and is similar to the vector used to describe carbon nanotube chirality . we verify such crystalline ordering from analysis of 3d confocal image stacks . all varieties of predicted helical packings in the given range of tube - diameter - to - particle - diameter ratios were observed in 15 samples ( figure 1 ) , with the exception of structure ( 0,4,4 ) . additionally , the ratio @xmath11 , where @xmath14 is twice the average radial distance from the particle centers to the central axis of the tube and @xmath15 is the average nearest neighbor particle separation , fell within experimental error of the predicted maximally packed hard - sphere values . ordered structures were found to exist over a range of volume fractions below maximal packing . when the effective particle diameter @xmath16 [ 23 ] was such that @xmath17 , the particles did not appear to move ( i.e. , motions greater than 0.2 @xmath4 m were not observed during the 10-second scan ) . in such cases , we consider the particles to be packed closer than their effective diameters . 
for @xmath18 , particles fluctuate significantly about their equilibrium positions ; thus , thermal helical crystalline structures exist at volume fractions below close packing . the volume fractions of such samples were then lowered further to determine if , when , and how the packings disorder to isotropic states . two uniformly packed samples were chosen and are presented here for careful analysis : a ( 2,2,4 ) packing of microspheres with diameter 1.71@xmath4 m at 22@xmath2c and a ( 0,6,6 ) packing of microspheres with diameter 1.23@xmath4 m at 22@xmath2c . the sample temperature was increased in steps of 0.2 - 0.7@xmath2c . at each temperature step , after allotting ample time for the sample to reach thermal equilibrium ( at least 5 minutes ) , videos of two - dimensional cross - sections of the packings were taken at 15 - 30 frames per second for approximately 5 minutes . though these two - dimensional videos lose some of the structural information available in three dimensional scans , they provide data at much higher speeds and yield better axial position tracking of particles in view . a local orientational order parameter , @xmath19 , quantifies helical order in these systems . here , @xmath20 is the angle between the axis of the tube and the bond between particles @xmath21 and @xmath22 , and @xmath23 is the number of nearest neighbors of particle @xmath21 . though this order parameter is typically used for two - dimensional planar systems , it is acceptable to use in the analysis of two - dimensional slices of helical packings . helical packings are effectively two - dimensional triangular lattices wrapped into cylinders , and the observed cross - sections of these particular packings exhibit only slight variation from the ideal two - dimensional hexagonal lattice . 
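the local bond - orientational order parameter just defined can be computed from tracked cross - section positions roughly as follows ( a minimal sketch ; the neighbour lists are assumed to come from the tracking step , and bond angles are measured from the tube axis ) :

```python
import numpy as np

def local_psi6(points, neighbors):
    """local bond-orientational order parameter psi6 for each particle.

    points    : (N, 2) array of positions, axis 0 of each row along the tube axis
    neighbors : list of neighbour-index lists, one per particle
    psi6_i = (1/n_i) * sum_j exp(6i * theta_ij), with theta_ij the angle
    between the tube axis and the bond from particle i to neighbour j.
    """
    pts = np.asarray(points, dtype=float)
    psi = np.zeros(len(pts), dtype=complex)
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue                              # undefined without neighbours
        d = pts[nbrs] - pts[i]
        theta = np.arctan2(d[:, 1], d[:, 0])      # bond angle w.r.t. the axis
        psi[i] = np.exp(6j * theta).mean()
    return psi
```

for a particle with six neighbours on a perfect triangular lattice the six phases coincide and @xmath28 has magnitude one ; distortions of the bond angles reduce it toward zero .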
we examined the spatial extent of orientational order along the tube by calculating the orientational spatial correlation function , @xmath24 , where @xmath25 is the axial position of particle @xmath21 . as depicted in figure 2 , the resulting correlation functions decrease quickly at low volume fractions , as expected in a disordered state . however , at high volume fractions , these functions exhibit long - range order within the experiment s field of view . these experimental observations are consistent with an expectation of long - range orientational order . though long - range _ translational _ order is impossible in one dimension at finite temperature [ 1 ] , long - range _ orientational _ order is possible , just as in the much - storied theory of two - dimensional melting [ 24 ] . one can show theoretically that long - range orientational order persists even in this quasi - one - dimensional system by evaluating the orientational correlation function @xmath26(r ) using the isotropic elasticity free energy [ 24,25 ] in an `` unwrapping '' of the particles on the cylinder surface onto a two - dimensional infinitely long strip [ 26 ] . we assume that particles are packed densely enough so that fluctuations along the radial direction of the cylinder may be neglected . as @xmath27 , @xmath26 approaches a constant . thus , finite correlations exist at infinite distance , a hallmark of a phase with long - range order . orientational order arises here through the crystalline axes defined by unwrapping the cylinder , rather than through an explicit additional mode , as in such systems studied in [ 27 ] . to our knowledge , the existence of long - range orientational order has not been characterized in previous studies of packing in cylindrical systems [ 11 - 15 ] . at low volume fractions , one expects long - range orientational correlations to disappear as observed in the experiment . 
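the axial orientational correlation function described above can be estimated from the per - particle order parameters as follows ( a sketch ; the bin edges are an assumption , and the per - particle psi6 values are assumed given by the previous step ) :

```python
import numpy as np

def g6(z, psi, bins):
    """orientational correlation g6(dz) = <psi6_i psi6_j*> binned by axial separation.

    z    : axial positions of the particles
    psi  : complex per-particle psi6 values
    bins : monotone array of bin edges for the axial separation |z_i - z_j|
    """
    z = np.asarray(z, dtype=float)
    psi = np.asarray(psi, dtype=complex)
    i, j = np.triu_indices(len(z), k=1)           # all unordered pairs
    dz = np.abs(z[i] - z[j])
    corr = np.real(psi[i] * np.conj(psi[j]))
    which = np.digitize(dz, bins)
    out = np.full(len(bins) - 1, np.nan)
    for b in range(1, len(bins)):
        m = which == b
        if m.any():
            out[b - 1] = corr[m].mean()
    return out
```

a curve that decays to zero signals short - range order , while a plateau at a finite value over the field of view is the signature of the long - range orientational order discussed in the text .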
However suggestive this combination of theory and experiment may be, we emphasize that further work is required to elucidate whether a generalization of KTHNY theory [24] is appropriate for this system. To quantify the crossover from long-range to short-range order in the experimental system, an average orientational order parameter $\Psi_6 = \frac{1}{N}\left|\sum_l \psi_6^l\right|$ is defined for each frame. The height of the first peak of the one-dimensional axial structure factor $S(k) = N^{-1}\left|\sum_l\sum_m e^{ik|z_l - z_m|}\right|$ is used as a translational order parameter. Here, $z_l$ denotes the axial position of each particle, $N$ is the number of particles in the field of view at a given time, and the peak position $k$ is chosen iteratively for each volume fraction.

[Fig. 3. Top: translational and orientational order parameters for (a) the (2,2,4) and (b) the (0,6,6) samples. Bottom: orientational susceptibilities for (c) the (2,2,4) and (d) the (0,6,6) systems, for sample size L = 75 μm and for the extrapolation L → ∞. Dashed vertical lines mark the crossover volume fractions.]

In Fig. 3(a,b), it is evident that both the average translational and orientational order parameters cross from an ordered state at high volume fraction to a disordered state at low volume fraction, though the change in $\Psi_6$ is significantly sharper than the change in $S(k)$. The orientational crossover complements a recent simulation that finds a similar crossover in hard-sphere packings with decreasing density [14]. To characterize the fluctuations in orientational order, we calculate the orientational susceptibility $\chi_6 = N\left(\langle\Psi_6^2\rangle - \langle\Psi_6\rangle^2\right)$, where $\langle\cdot\rangle$ represents the time average. Statistical effects of the finite size of the system are accounted for by calculating the susceptibility of sub-segments of length $L$ and extrapolating to the limit $L \to \infty$, similar to the calculations in [17]. Plots of $\chi_6$ in Fig. 3(c,d) clearly demonstrate a peak in the orientational susceptibility. The location of this peak coincides with the onset of both orientational and translational order in the system. Translational susceptibilities were also calculated but did not exhibit clear peaks or trends with respect to the order parameters or volume fraction; we do not expect any transition-like behavior from translational susceptibilities because the arguments of [1] against long-range translational order apply in quasi-1D. The existence of a diverging peak in the susceptibility of an order parameter typically indicates a phase transition [28]. However, it is difficult to determine from the given data points whether this peak truly diverges, and whether it would indicate a first-order (asymmetrically diverging) or second-order (symmetrically diverging) phase transition. Upon closer examination of the (0,6,6) sample, we observe coexistence of ordered and disordered domains for volume fractions 0.47-0.40 (see Figure 4(a)). We also observe the appearance of a small domain of dubious order in the (2,2,4) sample at volume fraction 0.68. The appearance of these coexisting domains is consistent with the spatial correlation functions exhibiting neither long-range nor short-range behavior at intermediate volume fractions in Figure 2 (0.40 for the (0,6,6) sample, 0.68 for the (2,2,4) sample). The presence of such solid-liquid coexistence has also been seen in recent simulations of hard spheres in cylinders [15]. In other samples, domains with different helical order often appeared as the volume fraction decreased (Figure 4(b)). The appearance of these domains was difficult to quantify, since domains would grow, shrink, and/or disappear with decreasing volume fraction. The structures observed in coexisting states were those with the most similar predicted linear densities and @xmath11 values, consistent with recent hard-sphere simulations [11,12].
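The two diagnostics in this discussion can be sketched as follows. The structure factor follows the double-sum expression in the text; the susceptibility is written as the time-series variance of the frame-averaged order parameter (the system-size prefactor in $\chi_6 = N(\langle\Psi_6^2\rangle - \langle\Psi_6\rangle^2)$ is omitted here, an illustrative simplification):

```python
import math
import cmath

def structure_factor(zs, k):
    """Axial structure factor as written in the text:
    S(k) = N^{-1} | sum_l sum_m exp(i k |z_l - z_m|) |."""
    n = len(zs)
    s = sum(cmath.exp(1j * k * abs(zl - zm)) for zl in zs for zm in zs)
    return abs(s) / n

def susceptibility(psi_series):
    """Fluctuation measure of a time series of the frame-averaged order
    parameter: <Psi^2> - <Psi>^2 (the variance, without the system-size
    prefactor of the usual fluctuation definition)."""
    n = len(psi_series)
    mean = sum(psi_series) / n
    return sum((p - mean) ** 2 for p in psi_series) / n

# A perfect 1D lattice with spacing a = 1: S(k) peaks at k = 2*pi,
# where every phase factor equals 1 and S(k) = N.
zs = [float(i) for i in range(20)]
print(structure_factor(zs, 2 * math.pi))  # ~20.0
```

Scanning k over a range and taking the first-peak height, frame by frame, then gives the translational order parameter; the variance of the per-frame $\Psi_6$ series gives the susceptibility curve whose peak marks the crossover.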
This coexistence of ordered structures should not be confused with the dislocation-mediated structural transitions theoretically studied in [6] and observed in [8] in athermal helical crystals, especially because of the difference in the observed sequence of coexisting structures.

[Fig. 4. (a) Coexisting ordered and disordered domains in the (0,6,6) sample, with the order parameter averaged in 1.2 μm segments. (b) A (2,3,5) structure with emerging domains of (3,2,5) and (0,4,4) structures. Color represents the phase of $\psi_6$, which characterizes packing orientation; the structures corresponding to each phase are given on the right.]

In summary, we created ordered helical packings of thermoresponsive colloids and observed the presence of long-range order resilient to thermal fluctuations. Sharp crossovers from orientationally ordered to disordered phases with decreasing volume fraction were observed. In addition, we find basic evidence for abrupt volume-fraction-driven structure-to-structure transitions. These findings raise and elucidate fundamental questions on the subject of melting in 1D.

We especially thank Yilong Han for clarifying discussions about experimental analyses, and we also thank Tom Haxton, Yair Shokef, and Peter Yunker for enlightening discussions. This work was supported by MRSEC grant DMR-0520020, NSF grant DMR-080488, and NASA grant NAG-2939.

[1] L. van Hove, Physica 16, 137 (1950).
[2] L. E. Hough et al., Science 325, 456 (2009).
[3] Y. Q. Zhou, C. K. Hall, and M. Karplus, Phys. Rev. Lett. 77, 2822 (1996).
[4] J. F. Douglas, Langmuir 25, 8386 (2009).
[5] R. O. Erickson, Science 181, 705 (1973).
[6] W. F. Harris and R. O. Erickson, J. Theor. Biol. 83, 215 (1980).
[7] N. Pittet, N. Rivier, and D. Weaire, Forma 10, 65 (1995).
[8] P. Boltenhagen and N. Pittet, Europhys. Lett. 41, 571 (1998); N. Pittet et al., Europhys. Lett. 35, 547 (1996); P. Boltenhagen, N. Pittet, and N. Rivier, Europhys. Lett. 43, 690 (1998).
[9] J. H. Moon et al., Langmuir 20, 2033 (2004); F. Li et al., J. Am. Chem. Soc. 127, 3268 (2005); M. Tymczenko et al., Adv. Mater. 20, 2315 (2008).
[10] A. N. Khlobystov et al., Phys. Rev. Lett. 92, 245507 (2004); T. Yamazaki et al., Nanotechnology 19, 045702 (2008).
[11] G. T. Pickett, M. Gross, and H. Okuyama, Phys. Rev. Lett. 85, 3652 (2000).
[12] K. Koga and H. Tanaka, J. Chem. Phys. 124, 131103 (2006).
[13] M. C. Gordillo, B. Martinez-Haya, and J. M. Romero-Enrique, J. Chem. Phys. 125, 144702 (2006).
[14] F. J. Duran-Olivencia and M. C. Gordillo, Phys. Rev. E 79, 061111 (2009).
[15] H. C. Huang, S. K. Kwak, and J. K. Singh, J. Chem. Phys. 130, 164511 (2009).
[16] C. A. Murray and D. H. Van Winkle, Phys. Rev. Lett. 58, 1200 (1987); K. Zahn, R. Lenke, and G. Maret, Phys. Rev. Lett. 82, 2721 (1999).
[17] Y. Han et al., Phys. Rev. E 77, 041406 (2008).
[18] K. J. Strandburg, Rev. Mod. Phys. 60, 161 (1988).
[19] B. R. Saunders and B. Vincent, Adv. Colloid Interface Sci. 80, 1 (1999); R. Pelton, Adv. Colloid Interface Sci. 85, 1 (2000); L. A. Lyon et al., J. Phys. Chem. B 108, 19099 (2004).
[20] A. M. Alsayed et al., Science 309, 1207 (2005).
[21] H. Senff and W. Richtering, J. Chem. Phys. 111, 1705 (1999); J. Wu, B. Zhou, and Z. Hu, Phys. Rev. Lett. 90, 048304 (2003); Y. Han et al., Nature (London) 456, 898 (2008); Z. Zhang et al., Nature (London) 459, 230 (2009); P. Yunker et al., Phys. Rev. Lett. 103, 115701 (2009); J. Brijitta et al., J. Chem. Phys. 131, 074904 (2009).
[22] J. C. Crocker and D. G. Grier, J. Colloid Interface Sci. 179, 298 (1996).
[23] See Supplemental Material for details of particle diameter characterization.
[24] D. R. Nelson and B. I. Halperin, Phys. Rev. B 19, 2457 (1979).
[25] N. D. Mermin, Phys. Rev. 176, 250 (1968).
[26] See Supplemental Material for additional details of this calculation.
[27] Y. Kantor and M. Kardar, Phys. Rev. E 79, 041109 (2009).
[28] K. Binder, Rep. Prog. Phys. 50, 783 (1987).
Creatine kinase (CK) and cardiac troponin are used for the diagnostic evaluation of myocardial damage in patients with acute myocardial infarction (AMI). CK is an established noninvasive measure of myocardial infarct size and severity and has been accepted as a reliable prognostic marker in AMI patients undergoing primary percutaneous coronary intervention (PCI). Although early reperfusion phenomena strongly influence CK release, the peak values of cardiac biomarkers, including CK and the MB isoenzyme of CK (CK-MB), predict the prognosis of AMI patients after PCI. In general, contrast-enhanced magnetic resonance imaging and nuclear cardiology techniques are used to evaluate the association between cardiac biomarkers and myocardial damage. Since 1990, single-photon emission computed tomography (SPECT) with technetium-99m hexakis 2-methoxy-isobutyl-isonitrile (99mTc-sestamibi) has been widely used to assess myocardial damage at rest after the onset of AMI. 99mTc-sestamibi is known to exhibit the phenomenon of reverse redistribution, so-called washout, in patients with AMI after PCI [10-12]. Myocardial scintigraphy with 123I-metaiodobenzylguanidine is a standard method of rendering early and delayed images; nuclide tracer washout is the gold standard for evaluating cardiac function. However, washout of 99mTc-sestamibi is still a matter of debate, and its significance has not been fully elucidated in AMI patients after primary PCI. No previous study has demonstrated the significance of cardiac biomarkers and washout of 99mTc-sestamibi for the evaluation of cardiac damage in AMI patients. Accordingly, the present study was designed to clarify the association between the washout rate (WR) of 99mTc-sestamibi determined from myocardial scintigraphic images and cardiac enzymes in AMI patients after PCI. The study population comprised 56 consecutive patients (mean age 65.8 years) who had suffered their first AMI.
On arrival at the emergency department, AMI was diagnosed by cardiologists on the basis of the symptoms, electrocardiographic changes, echocardiograms, detection of human heart fatty acid-binding protein by immunochromatography, and hematological findings, including CK and CK-MB. In order to determine the actual onset time, cardiologists conducted interviews with the patients and family members. The patients were immediately transferred to the cardiac catheter laboratory for emergent cardiac catheterization. In 26 patients, the culprit lesion was located in the right coronary artery (RCA); in 24, in the left anterior descending coronary artery (LAD); and in 6 patients, in the left circumflex coronary artery (LCX). Thrombus aspiration catheters were used to cross the occluding lesions; follow-up coronary angiography was performed during PCI. Blood samples were collected every 3 h after PCI to determine the peak values of cardiac enzymes. The patients with AMI, who received PCI followed by conventional drug therapy, experienced no worsening of symptoms and required no hospitalization due to AMI-related complications, either before or after the scintigraphic examinations. 99mTc-sestamibi (740 MBq; Fuji Film RI Pharma Co. Ltd., Tokyo, Japan) was injected into the left antecubital vein, and SPECT was thereafter performed twice: initially at 30 min after injection (early 99mTc-sestamibi uptake) and subsequently at 4 h after injection (delayed uptake). Before performing SPECT, anterior and lateral planar images were acquired for 300 s using a gamma camera equipped with a low-/medium-energy general-purpose collimator and a 512×512 matrix. 99mTc-sestamibi images were obtained using a double-headed gamma camera (Symbia E; Siemens-Asahi Medical Technologies Ltd.). Two detectors (2×180°) were used to acquire 64 views for 25 s in 5.6° steps using a 64×64 matrix.
Raw imaging data were reconstructed using Butterworth-filtered (order, 8; cut-off frequency, 0.20 cycles/pixel) back-projection. Transaxial slices were reconstructed and reoriented to represent coronal slices, and horizontal long- and short-axis slices were then produced by axis shift. Standard electrocardiographically gated images were acquired in 64 steps at 19 s per step, using the step acquisition mode within the R-R interval, and divided into 16 frames. Tracer uptake was assessed on non-gated early images created from the sums of all the gated images obtained in standard acquisition mode. Regions of interest (ROIs) were drawn over the entire heart and upper mediastinum depicted in the planar images. The heart-to-mediastinum (H/M) ratio and global WR of 99mTc-sestamibi were calculated from the pixel counts in the ROIs using the following equations: H/M = mean pixel count of cardiac ROI / mean pixel count of mediastinal ROI; and WR (%) = [(mean early cardiac pixel count - mean delayed cardiac pixel count) / mean early cardiac pixel count] × 100. Background and time-decay corrections were not applied to the calculation of WR. Once a SPECT image was acquired and reconstructed from an early image, quantitative gated SPECT (QGS) software (Cedars-Sinai Medical Center, Los Angeles, CA) was used to calculate the ventricular edges and evaluate the left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF). The significance of differences among coronary territories was assessed by one-way analysis of variance (ANOVA). Parameters in the early and delayed phases in the same patient were compared using a paired t test. Linear regression analysis was used to evaluate the significance of the peak values of cardiac enzymes and the values obtained from 99mTc-sestamibi myocardial scintigraphic images.
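The two equations above translate directly into code; this is a minimal sketch, and the numeric counts below are hypothetical, chosen only to land near the study's mean WR:

```python
def h_over_m(cardiac_counts, mediastinal_counts):
    """Heart-to-mediastinum ratio: mean cardiac ROI pixel count divided
    by mean mediastinal ROI pixel count."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cardiac_counts) / mean(mediastinal_counts)

def washout_rate(early_mean, delayed_mean):
    """Global WR (%) = (early - delayed) / early * 100, with no background
    or time-decay correction, following the protocol described above."""
    return (early_mean - delayed_mean) / early_mean * 100.0

# Hypothetical mean cardiac counts: early 300, delayed 124.
print(washout_rate(300.0, 124.0))  # 58.666..., close to the study's mean WR
```

Computing H/M separately for the early and delayed acquisitions and WR from the early and delayed cardiac means yields the three planar-image parameters analyzed in this study.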
All patients received an appropriate PCI; in 51 (91%) patients, aspiration catheters successfully penetrated the occluding lesions, followed by conventional PCI. Thrombolysis in Myocardial Infarction (TIMI) grade 3 flow was observed in all patients after PCI. There were no differences in age, BMI, cardiac enzymes on admission, time from onset to revascularization, or peak cardiac enzymes among the patients with culprit lesions located in different arteries (Table 1). The CK and CK-MB levels on admission were 410.6±1318.0 IU/L and 39.8±198.1 IU/L, and the peak CK and CK-MB levels were 2689.6±1167.4 IU/L (at 15.3±4.6 h) and 274.1±169.4 IU/L (at 13.5±3.9 h), respectively. After administration of 200 mg of acetylsalicylic acid and 300 mg of clopidogrel sulfate as a loading dose, all patients received 100 mg of acetylsalicylic acid and 75 mg of clopidogrel sulfate after PCI as a maintenance dose. Most of the patients were treated with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, a β-blocker, or a statin in order to prevent secondary cardiac events and deterioration of cardiac function. Seven patients who received a statin were not able to continue the administration due to muscle ache. The early and delayed H/M ratios and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/M ratios were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCX culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01).
The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was also found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2). The left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) were 97.9±27.2 mL, 51.9±22.0 mL, and 48.8±10.0%, respectively (Table 2). Figure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and the cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=0.32, p=0.015) and peak CK-MB (r=0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=0.32, p=0.017) and peak CK-MB (r=0.34, p=0.012).
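The r values reported here come from standard linear regression. A minimal pure-Python sketch of the Pearson correlation coefficient underlying such an analysis, on hypothetical WR / peak CK-MB pairs (not the study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical WR (%) and peak CK-MB (IU/L) pairs, positively correlated.
wr = [50.0, 55.0, 58.0, 62.0, 70.0]
ck_mb = [120.0, 180.0, 260.0, 300.0, 480.0]
print(round(pearson_r(wr, ck_mb), 2))  # 0.99
```

In practice, a statistics package would also supply the p-value for each r; the coefficient itself is simply the covariance normalized by the two standard deviations, as above.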
The present study demonstrated several significant aspects of 99mTc-sestamibi planar imaging for the assessment of cardiac damage in AMI patients after primary PCI.
Firstly, the delayed uptake was negatively correlated with the peak values of CK and CK-MB. Secondly, the WR was positively correlated with the peak values of these cardiac enzymes. These results suggest that 99mTc-sestamibi imaging reflects injured myocardium. Since 99mTc-sestamibi WR indicates injured but viable myocardium, 99mTc-sestamibi imaging in the subacute phase of AMI may provide additional clinical information. The association among peak CK, infarct size, and mortality was demonstrated in the 1970s. After the importance of measuring CK in AMI patients was recognized, various studies were conducted to confirm the efficacy of peak CK. One study reported that CK-MB elevation without concomitant CK elevation is associated with a worse prognosis. Although it has been suggested that CK-MB overestimates infarct size after reperfusion, a recent study has reported that peak CK and CK-MB are still related to mortality and infarct size in AMI patients with TIMI grade 3 flow after PCI. According to the guidelines described by Alpert et al., cardiac troponin is considered the sensitive marker of choice and is more sensitive than CK and CK-MB. However, Tzivoni et al. have demonstrated that peak CK and CK-MB are as accurate as peak troponin T in estimating infarct size. Thus, in the present study, we determined peak CK and CK-MB and conducted a serological study for detecting cardiac damage. The myocardial uptake mechanism of 99mTc-sestamibi depends on passive distribution across the plasma and mitochondrial membranes in response to a transmembrane electrochemical gradient; approximately 90% of its activity in vivo is associated with mitochondria. Fundamental studies have reported a close relationship between mitochondrial function and retention of 99mTc-sestamibi in the myocardium.
These studies have demonstrated that loss of mitochondrial metabolic function is related to 99mTc-sestamibi release from the myocardium. Another study reported that 99mTc-sestamibi uptake and retention are inhibited in a cultured chick myocyte model when the mitochondrial membrane potential is depolarized. Under ischemic conditions and during reperfusion, reactive oxygen species produced by endothelial cells induce the release of phagocytes in the myocardium, which leads to mitochondrial dysfunction. Mitochondrial dysfunction may alter the mitochondrial membrane potential and impair myocardial retention of 99mTc-sestamibi. In the present study, the delayed H/M and WR were correlated with the peak cardiac enzyme levels. A 99mTc-sestamibi kinetics study has demonstrated a significant correlation between 99mTc-sestamibi activity after reperfusion and peak CK release in ischemic-reperfused rat heart models, which is consistent with the results of our study. Since peak CK represents the extent of myocardial injury, the 99mTc-sestamibi delayed H/M and WR are probably good markers of ischemia-damaged myocardium. In contrast, the early H/M of 99mTc-sestamibi was uncorrelated with peak CK and peak CK-MB. A previous study reported that the initial uptake of 99mTc-sestamibi predominantly reflects coronary blood flow in a rabbit heart model of coronary occlusion. The early H/M of 99mTc-sestamibi reflects reperfused myocardial perfusion achieved by primary PCI, which explains why the early H/M of 99mTc-sestamibi is uncorrelated with peak CK and CK-MB. In studies conducted on ischemic patients, Takeishi et al. reported that the WR of 99mTc-sestamibi after direct percutaneous transluminal coronary angioplasty was associated with the infarct-related area and preserved left ventricular function. Another study compared the WR of 99mTc-sestamibi with contractile-reserve wall motion evaluated by low-dose dobutamine echocardiography.
They concluded that the enhancement of 99mTc-sestamibi WR was related to the reversible functional abnormalities indicated by the dobutamine-responsive contractile reserve. These results suggest that the WR of 99mTc-sestamibi is associated with ischemia-damaged but viable myocardium. In non-ischemic patients, Kumita et al. reported that the WR of 99mTc-sestamibi, which was related to left ventricular function, was higher in patients with chronic heart failure than in controls. Another group analyzed left ventricular systolic and diastolic function in patients with dilated cardiomyopathy and demonstrated a positive correlation between the WR of 99mTc-sestamibi and the plasma BNP level. They also suggested that the WR of 99mTc-sestamibi might provide prognostic information in chronic heart failure patients, because the incidence of cardiac events was higher in such patients with higher 99mTc-sestamibi WR. These non-ischemic heart disease studies also suggest that the WR of 99mTc-sestamibi might be a reliable marker of myocardial damage. In the present study, we did not evaluate the regional WR of 99mTc-sestamibi in association with the culprit region. AMI patients who did not receive PCI should be included as controls in future studies. In the study with non-ischemic patients, it was reported that the WR of 99mTc-sestamibi might provide prognostic information; however, the prognostic values of H/M and WR in patients with ischemic cardiac disease remain unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as an incremental prognostic indicator. The present study showed that 99mTc-sestamibi planar imaging may be useful for the assessment of cardiac damage in AMI patients. Since the WR of 99mTc-sestamibi (after PCI) is associated with infarcted myocardium (but with preserved left ventricular function), increased WR might predict the improvement of left ventricular wall motion in the chronic phase.
Follow-up studies with larger numbers of patients are needed to confirm the usefulness of 99mTc-sestamibi imaging in AMI patients. These results suggest that the WR determined from 99mTc-sestamibi myocardial scintigraphic images reflects the extent of myocardial damage in AMI patients after PCI. This study also demonstrates the value of acquiring 99mTc-sestamibi myocardial scintigraphic images at two different time points.
Summary
Background: This study was designed to clarify the significance of the washout rate (WR) determined from 99mTc-sestamibi myocardial scintigraphic images and the levels of cardiac enzymes in patients with acute myocardial infarction (AMI) after percutaneous coronary intervention (PCI).
Material/Methods: A total of 56 consecutive patients with AMI (mean age 65.8±8.5 years), who underwent PCI on admission, were included. The cardiac enzyme, the MB isoenzyme of creatine kinase (CK-MB), was measured every 3 h after admission. Two weeks after the onset of AMI, 99mTc-sestamibi myocardial scintigraphy was performed at early (30 min) and delayed (4 h) phases after tracer injection. The heart-to-mediastinum ratio (H/M) and WR were calculated from the planar images.
Results: PCI was performed at 9.4±6.0 h after the onset of AMI. In 26 patients the culprit lesion was located in the right coronary artery, and in 24 patients it was located in the left anterior descending coronary artery. The peak CK-MB was 274.1±169.4 IU/L (at 13.5±3.9 h). The early and delayed H/M ratios and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.8±10.0%, respectively. The delayed H/M was significantly correlated with the peak CK-MB (r=0.37, p=0.005). The WR of 99mTc-sestamibi was also significantly correlated with the peak CK-MB (r=0.34, p=0.012).
Conclusions: These results suggest that the WR determined from 99mTc-sestamibi myocardial scintigraphic images reflects the extent of myocardial damage in AMI patients.
As President Barack Obama kicks off an all-out push for a strike on Syria, Secretary of State John Kerry came under fire on Monday for saying any attack would be “unbelievably small” and for suggesting the Syrian ruler still had a week to give up his chemical weapons to avoid a U.S. assault - a remark Kerry’s spokeswoman later attempted to clarify. The off-key comments came in a joint press conference in London with Britain’s foreign secretary, where Kerry said the strike would be able to harm Assad without putting American troops on the ground and with a “very limited, very targeted, short-term effort.” “That is exactly what we are talking about doing, unbelievably small, limited kind of effort,” Kerry said. The comments, which dismayed even supporters of an attack on Syria, added fuel to the debate as Obama launches a 48-hour, full-court-press media and political blitz to try to sell the plan to a skeptical American public and Congress. On Monday, Obama will sit down with anchors from six networks - PBS, CNN, Fox, ABC, CBS and NBC - for seven- to 10-minute interviews that will air Monday night. On Tuesday, the president will address the nation in a prime-time televised address. The White House was also expected to continue to reach out to members of Congress through high-profile Cabinet members as well as the vice president and the president himself. On Monday afternoon, National Security Adviser Susan Rice was set to hold a public speaking event on Syria at the New America Foundation. But Kerry’s comments Monday caused even some of the president’s strongest backers of military intervention to call the White House’s outreach a disaster. Sen. 
John McCain (R-Ariz.), a supporter of robust military action in Syria and an important target of Obama’s outreach efforts to date, took to Twitter to express his frustration with Kerry’s comments. “Kerry says #Syria strike would be “unbelievably small” - that is unbelievably unhelpful,” McCain tweeted Monday. House Intelligence Committee Chairman Mike Rogers (R-Mich.) strongly favors a strike on Syria, but criticized Kerry’s comments. “I don’t understand what he means by that,” Rogers said on MSNBC’s “Morning Joe” on Monday. “This is part of the problem. That’s a very confusing message — certainly a confusing message to me that he would offer that as somebody who believes this is in our national security interest.” Rogers said the president has done an “awful job” of explaining foreign policy to the American people and is trying to make up for lost time. Bill Kristol, the conservative editor of The Weekly Standard, who also supports the resolution before Congress and who said he has tried to advise the administration behind the scenes on convincing conservatives, said Kerry’s comments about an “unbelievably small” attack have him concerned. “I am worried, though, the administration has done such a bad job of making its case,” Kristol said, also on “Morning Joe” on Monday. “Now we have the secretary of State saying, ‘Well, we went to Congress, it was so important to go to Congress, for an unbelievably small limited strike.’ Even I can see why reasonable people on the Hill … can say, ‘Is that really better than nothing?’” Kristol said. Rep. Adam Kinzinger (R-Ill.), another advocate for intervening in Syria, continued his criticism Monday of how the president has handled the push for intervention, saying he should have been more forceful earlier. “The president’s failed to make his case. 
And I think hopefully over the next 48 hours he does that, but he’s definitely failed to,” thus far, Kinzinger said on “Morning Joe.” Republican strategist Karl Rove called the next two days a last-minute effort by the president after repeated missteps. “Now the president is going to sort of engage in two Hail Marys, a series of interviews tonight with anchors and a speech to the country,” Rove said Monday on Fox News’s “America’s Newsroom.” Asked about Kerry’s comments in the White House briefing Monday afternoon, press secretary Jay Carney said there was no mistake from the secretary. “I don’t think that the phrasing reflects some error,” Carney said. “I think that Secretary Kerry clearly was referring to that in the context of what the United States and the American people have experienced over this past 10 to 12 years, which includes large-scale, long-term … open-ended military engagement with boots on the ground in Afghanistan and Iraq. That is the contrast that Secretary Kerry was making.” ||||| In a surprise move, Russia promised Monday to push its ally Syria to place its chemical weapons under international control and then dismantle them quickly to avert U.S. strikes. Russian Foreign Minister Sergey Lavrov welcomes his Syrian counterpart Walid al-Moallem, left, prior to talks in Moscow on Monday, Sept. 9, 2013. (AP Photo/Ivan Sekretarev) The announcement by Russian Foreign Minister Sergey Lavrov came a few hours after U.S. Secretary of State John Kerry said that Syrian President Bashar Assad could resolve the crisis surrounding the alleged use of chemical weapons by his forces by surrendering control of "every single bit" of his arsenal to the international community by the end of the week. 
Kerry added that he thought Assad "isn't about to do it," but Lavrov, who just wrapped a round of talks in Moscow with his Syrian counterpart Walid al-Moallem, said that Moscow would try to convince the Syrians. "If the establishment of international control over chemical weapons in that country would allow avoiding strikes, we will immediately start working with Damascus," Lavrov said. "We are calling on the Syrian leadership to not only agree on placing chemical weapons storage sites under international control, but also on its subsequent destruction and fully joining the treaty on prohibition of chemical weapons," he said. Lavrov said that he has already handed over the proposal to al-Moallem and expects a "quick, and, hopefully, positive answer." His statement followed media reports alleging that Russian President Vladimir Putin, who discussed Syria with President Barack Obama during the Group of 20 summit in St. Petersburg last week, sought to negotiate a deal that would have Assad hand over control of chemical weapons. Speaking earlier in the day, Lavrov denied that Russia was trying to sponsor any deal "behind the back of the Syrian people." The Russian move comes as Obama, who has blamed Assad for killing hundreds of his own people in a chemical attack last month, is pressing for a limited strike against the Syrian government. The Syrian government has denied launching the attack, insisting along with its ally Russia that the attack was launched by the rebels to drag the U.S. into war. Lavrov and al-Moallem said after their talks that U.N. chemical weapons experts should complete their probe and present their findings to the U.N. Security Council. Al-Moallem said his government was ready to host the U.N. team, and insisted that Syria is ready to use all channels to convince the Americans that it wasn't behind the attack. He added that Syria was ready for "full cooperation with Russia to remove any pretext for aggression." 
Neither minister, however, offered any evidence to back their claim of rebel involvement in the chemical attack. Lavrov said that Russia will continue to promote a peaceful settlement and may try to convene a gathering of all Syrian opposition figures to join in negotiations. He added that a U.S. attack on Syria would deal a fatal blow to peace efforts. Lavrov wouldn't say how Russia could respond to a possible U.S. attack on Syria, saying that "we wouldn't like to proceed from a negative scenario and would primarily take efforts to prevent a military intervention." Putin said that Moscow would keep providing assistance to Syria in case of U.S. attack, but he and other Russian officials have made clear that Russia has no intention of engaging in hostilities.
– Russia made the surprise move today of prodding ally Syria to hand over its chemical weapons, just hours after John Kerry suggested that Bashar al-Assad and Co. could avoid a military strike if they did exactly that within a week. "If the establishment of international control over chemical weapons in that country would allow avoiding strikes, we will immediately start working with Damascus," said Russian Foreign Minister Sergei Lavrov. "We are calling on the Syrian leadership to not only agree on placing chemical weapons storage sites under international control, but also on its subsequent destruction and fully joining the treaty on prohibition of chemical weapons." Lavrov, who met today with his Syrian counterpart, says he is hoping for a "quick, and, hopefully, positive answer" to his proposal, reports the AP. Kerry, speaking after meetings with British Foreign Secretary William Hague, said Assad "could turn over every single bit of his chemical weapons to the international community in the next week—turn it over, all of it, without delay, and allow the full and total accounting." But he's not exactly holding his breath, saying Assad "isn't about to do it." He also said any military strikes against Syria would be "unbelievably small," notes Politico, for which he's now taking more heat from critics of the White House's handling of the situation. Says Bill Kristol: “Now we have the secretary of State saying, ‘Well, we went to Congress, it was so important to go to Congress, for an unbelievably small limited strike.’ Even I can see why reasonable people on the Hill … can say, ‘Is that really better than nothing?’" Click for highlights of Assad's interview with Charlie Rose.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Palestinian and United Nations Anti- Terrorism Act of 2014''. SEC. 2. FINDINGS. Congress makes the following findings: (1) On April 23, 2014, representatives of the Palestinian Liberation Organization and Hamas, a designated terrorist organization, signed an agreement to form a government of national consensus. (2) On June 2, 2014, Palestinian President Mahmoud Abbas announced a unity government as a result of the April 23, 2014, agreement. (3) United States law requires that any Palestinian government that ``includes Hamas as a member'', or over which Hamas exercises ``undue influence'', only receive United States assistance if certain certifications are made to Congress. (4) The President has taken the position that the current Palestinian government does not include members of Hamas or is influenced by Hamas and has thus not made the certifications required under current law. (5) The leadership of the Palestinian Authority has failed to completely denounce and distance itself from Hamas' campaign of terrorism against Israel. (6) President Abbas has refused to dissolve the power- sharing agreement with Hamas even as more than 2,300 rockets have targeted Israel since July 2, 2014. (7) President Abbas and other Palestinian Authority officials have failed to condemn Hamas' extensive use of the Palestinian people as human shields. (8) The Israeli Defense Forces have gone to unprecedented lengths for a modern military to limit civilian casualties. (9) On July 23, 2014, the United Nations Human Rights Council adopted a one-sided resolution criticizing Israel's ongoing military operations in Gaza. (10) The United Nations Human Rights Council has a long history of taking anti-Israel actions while ignoring the widespread and egregious human rights violations of many other countries, including some of its own members. 
(11) On July 16, 2014, officials of the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) discovered 20 rockets in one of the organization's schools in Gaza, before returning the weapons to local Palestinian officials rather than dismantling them. (12) On multiple occasions during the conflict in Gaza, Hamas has used the facilities and the areas surrounding UNRWA locations to store weapons, harbor their fighters, and conduct attacks. SEC. 3. DECLARATION OF POLICY. It shall be the policy of the United States-- (1) to deny United States assistance to any entity or international organization that harbors or collaborates with Hamas, a designated terrorist organization, until Hamas agrees to recognize Israel, renounces violence, disarms, and accepts prior Israeli-Palestinian agreements; (2) to seek a negotiated settlement of this conflict only under the condition that Hamas and any United States-designated terrorist groups are required to entirely disarm; and (3) to continue to provide security assistance to the Government of Israel to assist its efforts to defend its territory and people from rockets, missiles, and other threats. SEC. 4. RESTRICTIONS ON AID TO THE PALESTINIAN AUTHORITY. For purposes of section 620K of the Foreign Assistance Act of 1961 (22 U.S.C. 2378b), any power-sharing government, including the current government, formed in connection with the agreement signed on April 23, 2014, between the Palestinian Liberation Organization and Hamas is considered a ``Hamas-controlled Palestinian Authority''. SEC. 5. REFORM OF UNITED NATIONS HUMAN RIGHTS COUNCIL. 
(a) In General.--Until the Secretary of State submits to the appropriate congressional committees a certification that the requirements described in subsection (b) have been satisfied-- (1) the United States contribution to the regular budget of the United Nations shall be reduced by an amount equal to the percentage of such contribution that the Secretary determines would be allocated by the United Nations to support the United Nations Human Rights Council or any of its Special Procedures; (2) the Secretary shall not make a voluntary contribution to the United Nations Human Rights Council; and (3) the United States shall not run for a seat on the United Nations Human Rights Council. (b) Certification.--The annual certification referred to in subsection (a) is a certification made by the Secretary of State to Congress that the United Nations Human Rights Council's agenda does not include a permanent item related to the State of Israel or the Palestinian territories. (c) Reversion of Funds.--Funds appropriated and available for a United States contribution to the United Nations but withheld from obligation and expenditure pursuant to this section shall immediately revert to the United States Treasury and the United States Government shall not consider them arrears to be repaid to any United Nations entity. SEC. 6. UNITED STATES CONTRIBUTIONS TO THE UNITED NATIONS RELIEF AND WORKS AGENCY FOR PALESTINE REFUGEES IN THE NEAR EAST (UNRWA). Section 301(c) of the Foreign Assistance Act of 1961 (22 U.S.C. 
2221(c)) is amended to read as follows: ``(c) Palestine Refugees; Considerations and Conditions for Furnishing Assistance.-- ``(1) In general.--No contributions by the United States to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) for programs in the West Bank and Gaza, a successor entity or any related entity, or to the regular budget of the United Nations for the support of UNRWA or a successor entity for programs in the West Bank and Gaza, may be provided until the Secretary certifies to the appropriate congressional committees that-- ``(A) no official, employee, consultant, contractor, subcontractor, representative, or affiliate of UNRWA-- ``(i) is a member of Hamas or any United States-designated terrorist group; or ``(ii) has propagated, disseminated, or incited anti-Israel, or anti-Semitic rhetoric or propaganda; ``(B) no UNRWA school, hospital, clinic, other facility, or other infrastructure or resource is being used by Hamas or an affiliated group for operations, planning, training, recruitment, fundraising, indoctrination, communications, sanctuary, storage of weapons or other materials, or any other purposes; ``(C) UNRWA is subject to comprehensive financial audits by an internationally recognized third party independent auditing firm and has implemented an effective system of vetting and oversight to prevent the use, receipt, or diversion of any UNRWA resources by Hamas or any United States-designated terrorist group, or their members; and ``(D) no recipient of UNRWA funds or loans is a member of Hamas or any United States-designated terrorist group. 
``(2) Appropriate congressional committees defined.--In this subsection, the term `appropriate congressional committees' means-- ``(A) the Committees on Foreign Relations, Appropriations, and Homeland Security and Governmental Affairs of the Senate; and ``(B) the Committees on Foreign Affairs, Appropriations, and Oversight and Government Reform of the House of Representatives.''. SEC. 7. ISRAELI SECURITY ASSISTANCE. The equivalent amount of all United States contributions withheld from the Palestinian Authority, the United Nations Human Rights Council, and the United Nations Relief and Works Agency for Palestine Refugees in the Near East under this Act is authorized to be provided to-- (1) the Government of Israel for the Iron Dome missile defense system and other missile defense programs; and (2) underground warfare training and technology and assistance to identify and deter tunneling from Palestinian- controlled territories into Israel.
Palestinian and United Nations Anti-Terrorism Act of 2014 - States that it shall be U.S. policy to: (1) deny U.S. assistance to any entity or international organization that collaborates with Hamas until Hamas agrees to recognize Israel, renounces violence, disarms, and accepts prior Israeli-Palestinian agreements; (2) seek a negotiated settlement only if Hamas and any U.S.-designated terrorist groups are required to disarm entirely; and (3) provide security assistance to Israel. Considers any power-sharing government, including the current government, formed in connection with the April 23, 2014, agreement between the Palestinian Liberation Organization (PLO) and Hamas to be a "Hamas-controlled Palestinian Authority (PA)" and thus subject to specified restrictions under the Foreign Assistance Act of 1961. States that until the Secretary of State certifies to Congress that the United Nations Human Rights Council (UNHRC)'s agenda does not include a permanent item related to Israel or the Palestinian territories: (1) the U.S. contribution to the regular budget of the United Nations (U.N.) shall be reduced by a specified amount, (2) the Secretary shall not make a voluntary contribution to UNHRC, and (3) the United States shall not run for a UNHRC seat. Amends the Foreign Assistance Act of 1961 to prohibit U.S. 
contributions to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) for programs in the West Bank and Gaza until the Secretary certifies to Congress that: no official, employee, consultant, or affiliate of UNRWA is a member of Hamas or any U.S.-designated terrorist group, or has propagated or incited anti-Israel or anti-Semitic rhetoric; no UNRWA facility or resource is being used by Hamas or an affiliated group for any purpose; UNRWA is subject to audits by an internationally recognized third party auditing firm and has implemented an oversight system to prevent the use of UNRWA resources by Hamas or any U.S.-designated terrorist group; and no recipient of UNRWA funds or loans is a member of Hamas or any U.S.-designated terrorist group. Authorizes the equivalent amount of all U.S. contributions withheld from the PA, UNHRC, and UNRWA under this Act to be provided to Israel for Iron Dome and other missile defense systems and for underground warfare training and technology.
the x - ray source grs 1915 + 105 was discovered by the granat observatory in 1992 ( castro - tirado _ et al._1992 ) . it is one of the several galactic objects observed to produce superluminal jets ( mirabel _ et al._1994 ) . as the first galactic superluminal jet source , the so - called " microquasar " , grs 1915 + 105 is a unique and very important astrophysical laboratory for relativistic astrophysics ( mirabel _ et al._1998 , 1999 ) . zhang _ et al._(1997 ) first suggested that the black hole in this system is highly spinning ; subsequent studies on the high frequency qpo phenomena from this source tend to support the spinning black hole model for this source ( see , e.g. , cui _ et al._1998 , remillard _ et al._2002 ) , though the case is not settled yet . the spectral type of the companion has been recently identified as a k - m iii star ( greiner _ et al._2001 ) , thus the source is classified as a low mass x - ray binary . the estimated mass of the black hole ( @xmath5 ) in this system is significantly higher than the black hole masses in most other stellar mass black hole systems ( greiner _ et al._2001 ) . this system thus may have a unique evolutionary history , and this also poses a major challenge to the theories of massive star evolution . grs 1915 + 105 displays rich qpo phenomena . the 328 hz qpo ( remillard _ et al._2002 ) , the highest - frequency qpo in this source , is believed to be associated with the innermost stable radius of the accretion disk . a stable @xmath6 67 hz qpo ( morgan _ et al._1997 ) accompanied by a 40 hz qpo ( strohmayer 2001 ) has also been observed . the 0.5 - 10 hz qpos , observed only during the hard state , are probably linked to the properties of the accretion disk , since their centroid frequency and fractional rms have been reported to correlate with the thermal flux ( markwardt _ et al._1999 , trudolyubov _ et al._1999 , reig _ et al._2000 ) and apparent temperature ( muno _ et al._1999 ) . 
at much lower frequencies ( 0.001 - 0.1 hz ) , the source occasionally shows high - amplitude qpos or " brightness sputters " ( morgan _ et al._1997 ) . these variations probably correspond to the disk instability . the 67 hz qpo shows a marked hard lag , while the low frequency ( well below 1 hz ) qpo shows a complex hard / soft lag structure ( cui _ et al._1999 ) . the 0.5 - 10 hz qpo shows a complex lag behavior ( reig _ et al._2000 ) . the lag between hard and soft photons decreases as the frequency of the qpo increases , changing the sign of the lag for @xmath7 hz . negative lags occur when the power - law spectrum is soft and positive lags occur when the spectrum is hard . the 0.5 - 10 hz qpo disappears when the pca intensity is high ( @xmath8 counts per second ) ( reig _ et al._2000 ) . based on the x - ray color - color diagram , belloni _ et al._(2000 ) found that the complex x - ray variability of grs 1915 + 105 can be reduced to transitions between three basic spectral states , which they called " a " , " b " and " c " . the spectrally soft states " a " and " b " correspond to an observable inner accretion disk with different temperatures : in state " b " the inner disk temperature is higher compared to state " a " . for the spectrally hard state " c " , the inner part of the accretion disk is either missing or just unobservable . the coherence function between the light curves in two different energy bands measures how the photons in the two energy bands are related ( vaughan _ et al._1997 ) . for black hole binaries , the coherence function often appears to be around unity over a wide frequency range ( vaughan _ et al._1997 ; cui _ et al._1997 ; nowak _ et al._1999a ; nowak _ et al._1999b ) , indicating that high energy photons are closely related to low energy photons , or that the low energy photons are the seed photons of the high energy photons . 
reduced coherence was observed from cyg x-1 when the source was in the transition state ( cui _ et al._1997 ) , indicating that the hard x - ray production region was not stable during the transition state . therefore the coherence function provides a useful probe into the physical properties of the hard x - ray production region . the source's x - ray temporal properties change with the radio flux . in the steady hard state , as the radio emission becomes brighter and more optically thick , the rms of the 0.5 - 10 hz qpo decreases , the fourier phase lags in the frequency range of 0.01 - 10 hz change sign , the coherence at low frequencies decreases , and the relative amount of low frequency power in hard photons compared to soft photons decreases ( muno _ et al._2001 ) . klein - wolt _ et al._(2002 ) found a relation between the above - mentioned state " c " and the radio flux . during state " c " a more or less continuous ejection of relativistic particles takes place . the length of the state " c " interval determines the strength of the radio emission . the separation between different episodes of state " c " determines the shape of the radio light curve . in this paper , we make use of the rxte observations of grs 1915 + 105 in soft states to examine its timing properties further . we choose three data sets belonging to three different classes according to belloni _ et al._(2000 ) . in section 3 , we first present the power density spectra ( pds ) and then we consider the coherence . finally we summarize our results and discuss the implications of our results in two different models . the data used in this work are retrieved from the public rxte archive ( see table [ tb.1 ] ) . they belong to the soft states without prominent 0.5 - 10 hz qpos , and belong to the spectral classes @xmath1 , @xmath0 , and @xmath2 respectively ( belloni _ et al._2000 ) . 
because within each class the temporal properties of the source may vary at different times , we first examined the data for each individual observation . we found that the data for the first and third sets show consistent temporal properties within each set , and therefore in the following we combine the pds and coherence of all observations within the first and third sets . for the second set , we found that although the general shape of the pds for all observations is the same , the fine features vary between the three observations . we therefore present the results of each individual observation for the second set separately . the proportional counter array ( pca ) data modes that we used are listed in table [ tb.2 ] . " good time intervals " ( gtis ) were defined when the elevation angle was above @xmath9 , the offset pointing was less than @xmath10 , and the number of pcus turned " on " equaled 5 . the techniques that we used to calculate the coherence for grs 1915 + 105 are discussed in vaughan _ et al._(1997 ) and nowak _ et al._(1999a ) . we applied eq.(8 ) of vaughan _ et al._(1997 ) to our data . we use eq.(1 ) of morgan _ et al._(1997 ) throughout this work to estimate the deadtime - corrected poisson noise level ; all averaged pds we present have had the poisson noise level subtracted . for the first and second data sets , we performed ffts with two segment lengths , 1024 s and 256 s . segments with data gaps of any duration were ignored . in these results , the frequency range of 0.00097 - 0.01 hz is computed from the 1024 s data segments , and the frequency range of 0.0039125 - 64 hz is computed from the 256 s data segments . for the third data set , we only performed ffts with segments of 256 s because of the shorter exposure time . throughout this work we have used a logarithmic frequency binning with a binning factor of 0.1 . we also divided our data into five energy bands . 
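The segment-averaged coherence estimate described above can be sketched as follows. This is a simplified illustration of the Vaughan & Nowak (1997) recipe (non-overlapping segments, per-segment FFTs, averaged cross- and auto-spectra); it deliberately omits the deadtime-corrected Poisson-noise subtraction and the logarithmic rebinning that the analysis in the text applies, so the function name and details are illustrative, not the pipeline actually used.

```python
import numpy as np

def coherence(x, y, seg_len, dt):
    """Coherence between two evenly sampled light curves x and y.

    The series are cut into non-overlapping segments of seg_len bins;
    cross- and auto-spectra are averaged over segments, and the
    coherence is |<C>|^2 / (<P_x><P_y>) at each Fourier frequency
    (noise-free version of eq. 8 of Vaughan & Nowak 1997).
    """
    n_seg = len(x) // seg_len
    nf = seg_len // 2          # positive, non-zero Fourier frequencies
    cross = np.zeros(nf, dtype=complex)
    p_x = np.zeros(nf)
    p_y = np.zeros(nf)
    for i in range(n_seg):
        s = slice(i * seg_len, (i + 1) * seg_len)
        fx = np.fft.rfft(x[s] - np.mean(x[s]))[1:nf + 1]
        fy = np.fft.rfft(y[s] - np.mean(y[s]))[1:nf + 1]
        cross += np.conj(fx) * fy
        p_x += np.abs(fx) ** 2
        p_y += np.abs(fy) ** 2
    freqs = np.fft.rfftfreq(seg_len, d=dt)[1:nf + 1]
    coh = np.abs(cross / n_seg) ** 2 / ((p_x / n_seg) * (p_y / n_seg))
    return freqs, coh

# two fully correlated "energy bands" give coherence of unity everywhere
rng = np.random.default_rng(0)
a = rng.normal(size=4096)
f, c = coherence(a, 2.0 * a + 1.0, seg_len=256, dt=1.0)
print(np.allclose(c, 1.0))  # True
```

For two independent noise series, the same estimator instead fluctuates near 1/M for M averaged segments, which is why segment averaging is needed before the coherence carries any information.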
the ranges of these energy bands are listed in table [ tb.3 ] . in figure [ fig1 ] , we present the pds of all three data sets in five energy bands . from the figure , we find :
1 . the overall shape of the pds for all three sets is power - law - like , superimposed with qpos or broad features at different frequencies , characteristic of black hole binaries in the soft state .
2 . for the first data set , there is a knee at about 2 hz . there may be a broad feature at about 0.02 hz .
3 . for the second data set , the pds for each observation is shown separately . for the second set-1 and second set-3 , there are three qpos at around 0.01 hz , 0.03 hz and 0.07 hz . for the second set-2 , there are two qpos at around 0.01 hz and 0.05 hz . in all these observations , there are two features at about 5 hz and 10 hz . the 5 hz feature increases with energy but decreases remarkably in the highest energy band . however , the 10 hz qpo is very weak in the low energy bands but suddenly enhanced in the highest energy band .
4 . for the third set , there is a qpo at about 0.03 hz .
in figure [ fig2 ] , we present the coherence for all energy bands for every data set . all comparisons shown are relative to band " a " . from the figure , we find :
1 . for the first and second sets the coherence between 0.03 hz and 10 hz becomes weak at higher energies . for the third set , this trend is weaker .
2 . for the first and second sets , the coherence is remarkably close to unity below about 0.02 hz for all energy bands ; whereas for the third set , the break frequency is about 1 hz .
3 . for the second set-1 and second set-3 there is a remarkable dip at about 0.3 hz in the coherence curve ; the dip is deeper at higher energies .
4 . for all data sets , the coherence deviates quickly from unity above about 10 hz .
we have studied the temporal properties of the spectral classes @xmath1 , @xmath0 and @xmath2 in the soft states of the microquasar grs 1915 + 105 . 
we found that in their pds there are several low frequency qpos or broad features between 0.01 and 10 hz . for the observations we have analyzed , the temporal properties for classes @xmath1 and @xmath2 do not show significant variations across different observations . for class @xmath0 , we found significant variations in its temporal properties in different observations . the main results of this work are on the coherence function of the different classes . for all three classes , the coherence function shows a significant drop above 10 hz ; an energy dependent coherence break between 0.01 and 1 hz also exists . in particular for class @xmath0 , there also exists a coherence dip at around 0.03 hz , which corresponds to a dip between two broad peaks on its power - density spectrum . the quick loss of coherence above about 10 hz seen for grs 1915 + 105 in the soft state is similar to that seen in cyg x-1 and gx339 - 4 in the hard state ; this may be due to some nonlinear processes at high frequencies ( nowak _ et al._1999a ) . however the coherence loss between 0.02 hz and 10 hz in grs 1915 + 105 is different from that seen in cyg x-1 and gx339 - 4 . this difference may be caused by the different spectral states when the sources were observed . in gx339 - 4 ( nowak _ et al._1999b ) , there is also a dip in the coherence function . they suggested that there are multiple broad - band processes occurring in the source that are individually coherent but are incoherent relative to each other . these dip frequencies are approximately the frequencies at which the different components overlap . for the second set , we compare the coherence with the pds : there are two features at frequencies below 1 hz ; between these components , there is a dip in coherence ( see figure [ fig3 ] ) ; this is consistent with the above suggestion . 
nowak _ et al._(1999a ) proposed an accretion disk model for black hole binaries in which the corona is inside the inner radius of the accretion disk , and the soft photons from the accretion disk are up - scattered to hard photons in the corona . therefore coherence loss will occur at time scales shorter than the dynamical time scale of the corona . the break frequency in the coherence may then be used to estimate the size of the corona , and thus the inner accretion disk radius . for grs 1915 + 105 , the characteristic frequency of about 0.02 hz implies that the corona's size and the inner disk radius are of the order of 10@xmath3 gm / c@xmath4 if the black hole is about 10@xmath11 . this is clearly unphysical . as discussed in poutanen _ et al._(1999 ) , the x@xmath12-rays may be produced in compact magnetic flares at radii @xmath13 100 gm/@xmath14 from the central black hole . they predicted that the coherence will deviate from unity above a characteristic frequency . this characteristic frequency is then associated with the longest time - scale of an individual flare through f_c ≈ 1/(2π t_flare) . if we take f_c = 0.02 hz , then t_flare ≈ 8 s. in fact the shape of the coherence curve is also remarkably similar to the prediction of their model . we owe tremendously to the anonymous referee , whose deep insights , professionalism and invaluable suggestions have improved the work significantly . we are grateful for interesting discussions with prof . m. wu and dr . this study is supported in part by the special funds for major state basic research projects and by the national natural science foundation of china . snz also acknowledges support from nasa's marshall space flight center and through nasa's long term space astrophysics program . belloni , t. , & hasinger , g. , 1990 , a&a , * 230 * , 103 . belloni , t. , _ et al . _ , 2000 , a&a , * 355 * , 271 . castro - tirado , a. , _ et al . _ , 1994 , apjs , * 92 * , 469 . cui , w. , zhang , s. n. , & chen , w. 
1998 , , 492 , l53 cui , w. , zhang , s. n. , focke , w. , & swank , j. 1997 , , 484 , 383 greiner , j. , _ et al . _ , 2001 , a&a * 373l * , 37 . greiner , j. , morgan , e. h. , & remillard , r. a. , 1996 , apj , * 473 * , 107 . klein - wolt , m. , _ et al . _ , 2002 , , 331 , 745 markwardt , c. b. , swank , j. h. , & taam , r. e. , 1999 , apj , * 513 * , 37 . mirabel , i. f. , & rodrguez , l. f. , 1994 , nature,*371 * , 46 . mirabel , i. f. & rodrguez , l. f. 1999 , , 37 , 409 mirabel , i. f. & rodriguez , l. f. 1998 , , 392 , 673 morgan , e. h. , remillard , r. a. , & greiner , j. , 1997 , apj , * 482 * , 1086 . muno , m. p. , morgan , e. h. , & remillard , r. a. , 1999 , apj , * 527 * , 321 . muno , m. p. , _ et al . _ , 2001 , apj , * 556 * , 515 . nowak , m. a. , _ et al . _ , 1999a , apj , * 510 * , 874 nowak , m. a. , _ et al . _ , 1999b , apj , * 517 * , 355 poutanen , j. & fabian , a. c. , 1999 , mnras , * 306 * , l31 reig , p. , _ et al . _ , 2000 , , 541 , 883 remillard , r. , _ et al . _ , 2002 , procs . of the 4th microquasar workshop , 2002 , eds . durouchoux , fuchs , & rodriguez , astro - ph/0208402 strohmayer , tod e. , 2001 , apj , * 554 * , 169 trudolyubov , s. , churazov , e. , & gilfanov , m. 1999 , astl , * 25 * , 718 vaughan , b. a. & nowak , m. a. 1997,apj , * 474 * , l43 zhang , s. n. , _ et al . _ , 2000 , science , 287 , 1239 zhang , s. n. , cui , w. , & chen , w. 
1997 , , 482 , l155 cccc the first set & class @xmath1 & & + 10408 - 01 - 09 - 00 & 29/05/96 & 12:44 & 5744 + 10408 - 01 - 11 - 00 & 31/05/96 & 11:26 & 10432 + 10408 - 01 - 12 - 00 & 05/06/96 & 11:36 & 10600 + 10408 - 01 - 13 - 00 & 07/06/96 & 09:39 & 10832 + 10408 - 01 - 17 - 01 & 22/06/96 & 17:52 & 3392 + 10408 - 01 - 18 - 00 & 25/06/96 & 06:44 & 3680 + 10408 - 01 - 19 - 00 & 29/06/96 & 19:57 & 2160 + 10408 - 01 - 19 - 01 & 29/06/96 & 13:12 & 3344 + 10408 - 01 - 19 - 02 & 29/06/96 & 16:28 & 3184 + 10408 - 01 - 20 - 00 & 03/07/96 & 08:27 & 3328 + 10408 - 01 - 20 - 01 & 03/07/96 & 11:39 & 2936 + the second set-1 & class @xmath0 & & + 10408 - 01 - 34 - 00 & 16/09/96 & 10:04 & 7920 + the second set-2 & class @xmath0 & & + 10408 - 01 - 35 - 00 & 22/09/96 & 06:30 & 6624 + the second set-3 & class @xmath0 & & + 10408 - 01 - 36 - 00 & 28/09/96 & 00:09 & 5600 + the third set & class @xmath2 & & + 10408 - 01 - 14 - 00 & 12/06/96 & 00:06 & 1312 + 10408 - 01 - 14 - 01 & 12/06/96 & 01:42 & 1072 + 10408 - 01 - 14 - 02 & 12/06/96 & 03:18 & 1072 + 10408 - 01 - 14 - 03 & 12/06/96 & 04:54 & 1072 + 10408 - 01 - 14 - 04 & 12/06/96 & 06:30 & 1408 +
we present results from the analysis of x - ray power density spectra and coherence of grs 1915 + 105 in the soft state . we use three data sets belonging to the @xmath0 , @xmath1 and @xmath2 classes of belloni _ et al._(2000 ) . we find that the power density spectra are complex , with several features between 0.01 and 10 hz . the coherence deviates from unity above a characteristic frequency . we discuss our results in the context of different models . the corona size in the sphere - disk model implied by this break frequency is of the order of 10@xmath3 gm / c@xmath4 , which is unphysical . our results are more consistent with the prediction of the model of a planar corona sustained by magnetic flares , in which the characteristic frequency is associated with the longest time scale of an individual flare , about eight seconds .
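the numerical estimates quoted above ( the roughly 8 s flare time scale and the corona size implied by the 0.02 hz break ) follow from one - line arithmetic . in the sketch below , the 10 solar - mass black hole is an assumption consistent with the discussion in the text ; everything else is physical constants .

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

f_break = 0.02                           # hz, coherence-break frequency
tau = 1.0 / (2.0 * math.pi * f_break)    # longest flare time scale, ~8 s

M = 10.0 * M_sun                         # assumed black-hole mass
r_g = G * M / c**2                       # gravitational radius gm/c^2, ~15 km

size = c * tau                           # light-crossing size of the region, m
print(f"tau = {tau:.1f} s, size = {size / r_g:.1e} gm/c^2")
```

the light - crossing size comes out vastly larger than any plausible corona in units of gm / c@xmath4 , which is the sense in which the sphere - disk estimate is unphysical .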
low energy electron transport across the interface between normal metals and superconductors ( ns ) is provided by the mechanism of andreev reflection @xcite . this mechanism involves conversion of a subgap quasiparticle entering the superconductor from the normal metal into a cooper pair together with simultaneous creation of a hole that goes back into the normal metal . each such act of electron - hole reflection corresponds to transferring twice the electron charge @xmath1 across the ns interface and results , e.g. , in non - zero conductance of the system at subgap energies @xcite . andreev reflection is also responsible for dc josephson effect in superconducting weak links without tunnel barriers . suffering andreev reflections at both @xmath2 interfaces , quasiparticles with energies below the superconducting gap are effectively `` trapped '' inside the junction forming a discrete set of levels which can be tuned by passing the supercurrent across the system @xcite . at the same time , these subgap andreev levels themselves contribute to the supercurrent @xcite thus making the behavior of superconducting point contacts and @xmath3 junctions in many respects different from that of tunnel barriers . note that the above theories remain applicable if one can neglect coulomb effects . in small - size superconducting contacts , however , such effects can be important and should in general be taken into account . a lot is known about interplay between fluctuations and charging effects in superconducting tunnel barriers @xcite . here we examine the properties of superconducting junctions going beyond the tunneling limit . below we will demonstrate that coulomb blockade in such junctions weakens with increasing barrier transmissions and eventually disappears in the limit of fully open superconducting contacts . 
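the subgap andreev levels and their contribution to the supercurrent mentioned above have a simple closed form in the short - junction limit ( beenakker , cited above ) . the sketch below is ours , in units e = hbar = 1 with the gap delta setting the energy scale .

```python
import math

def andreev_level(T, chi, Delta=1.0):
    """subgap andreev bound-state energy of a short junction,
    eps_n(chi) = Delta * sqrt(1 - T_n sin^2(chi/2))."""
    return Delta * math.sqrt(1.0 - T * math.sin(chi / 2.0) ** 2)

def channel_supercurrent(T, chi, Delta=1.0):
    """zero-temperature supercurrent of one channel (units e = hbar = 1):
    i = Delta^2 T sin(chi) / (2 eps), which equals -2 d(eps)/d(chi)."""
    return Delta**2 * T * math.sin(chi) / (2.0 * andreev_level(T, chi, Delta))

# consistency check: the level itself carries the current, i = -2 deps/dchi
T, chi, h = 0.8, 1.1, 1e-6
deps = (andreev_level(T, chi + h) - andreev_level(T, chi - h)) / (2 * h)
assert abs(channel_supercurrent(T, chi) + 2 * deps) < 1e-6
```

the finite - difference check makes explicit the statement that the subgap levels , tunable by the phase difference , themselves contribute to the supercurrent .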
we will also argue that in superconducting systems , similarly to normal contacts @xcite , there exists a direct relation between coulomb effects and current fluctuations . as shown in fig . 1 , we will consider big metallic reservoirs , one of which is superconducting while the other can be either normal or superconducting . these two reservoirs are connected by a rather short normal bridge ( conductor ) with arbitrary transmission distribution @xmath4 of its conducting modes and normal state conductance @xmath5 . both phase and energy relaxation of electrons may occur only in the reservoirs and not inside the conductor , whose length is assumed to be shorter than the dephasing and inelastic relaxation lengths . as usual , the coulomb interaction between electrons in the conductor area is accounted for by some effective capacitance @xmath6 . in order to analyze electron transport in the presence of interactions we will make use of an approach based on the effective action formalism combined with the scattering matrix technique @xcite . this approach can be conveniently generalized to superconducting systems . in fact , the structure of the effective action remains the same in the superconducting case ; one should only replace the normal propagators by @xmath7 matrix green functions which account for superconductivity , as was done , e.g. , in @xcite . following the standard procedure we express the kernel @xmath8 of the evolution operator on the keldysh contour in terms of a path integral over the fermionic fields , which can be integrated out after the standard hubbard - stratonovich decoupling of the interacting term . then the kernel @xmath8 takes the form @xmath9 ) , \label{pathint}\ ] ] where @xmath10 are fluctuating phases defined on the forward and backward parts of the keldysh contour and related to the fluctuating voltages @xmath11 across the conductor as @xmath12 . here and below we set @xmath13 .
the effective action consists of two terms , @xmath14=s_c[\varphi ] + s_t[\varphi ] $ ] , where @xmath15= \frac{c}{2e^2}\int\limits_0^t dt ' ( \dot\varphi_{1}^2-\dot\varphi_{2}^2)\equiv \frac{c}{e^2}\int\limits_0^t dt \dot\varphi^+\dot\varphi^-\label{sc}\end{aligned}\ ] ] describes charging effects and the term @xmath16 $ ] accounts for electron transfer between the normal and superconducting reservoirs . it reads @xcite @xmath17=-\frac{i}{2}\sum_n{\rm tr } \ln \left [ 1+\frac{t_n}{4}\left ( \left\ { \check g_m , \check g_s \right\}-2\right ) \right ] , \label{st}\ ] ] where @xmath18 and @xmath19 are @xmath20 green - keldysh matrices of the m- and s - electrodes , whose product implies a time convolution and whose anticommutator is denoted by the curly brackets . in eq . ( [ sc ] ) we also introduced the `` classical '' and `` quantum '' parts of the phase , respectively @xmath21 and @xmath22 . for later purposes we also express the average current and the current - current correlator via the effective action as @xmath23 } , \label{curr}\\ & & \frac12\langle \hat i\hat i\rangle_+=-e^2\int { \cal d } \varphi_{\pm}\frac{\delta^2}{\delta \varphi_-(t)\delta\varphi_-(t')}e^{is[\varphi ] } , \label{corr}\end{aligned}\ ] ] where @xmath24 . let us introduce the matrix @xmath25=1-t_n/2+(t_n/4)\left\ { \check g_m , \check g_s \right\}|_{\varphi_-=0}$ ] . since the action @xmath26 vanishes for @xmath27 , one has @xmath28 . making use of this property we can identically transform the action ( [ st ] ) into @xmath29 , \label{xx}\ ] ] where @xmath30 . now let us consider the ns and ss interfaces separately . at temperatures and voltages well below the superconducting gap the andreev contribution to the action of the ns system dominates . hence , it suffices to consider the limit of low energies @xmath31 . in this limit we can define the andreev transmissions @xcite @xmath32 and the andreev conductance @xmath33 .
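for reference , the low - energy andreev transmissions and conductance defined above ( hidden in the redacted formulas ) take the standard per - channel form t_a = t^2 / ( 2 - t )^2 , a result usually attributed to beenakker . the helper names , the choice of conductance units , and the inclusion of the andreev fano factor ( defined in the text just below ) are ours .

```python
from math import isclose

def andreev_transmission(T):
    """low-energy andreev transmission of one channel, T_A = T^2 / (2 - T)^2;
    a fully open channel (T = 1) stays perfectly transmitting."""
    return T**2 / (2.0 - T) ** 2

def andreev_conductance(Ts, g0=1.0):
    """andreev conductance in units of g0 (conventionally 4e^2/h, reflecting
    the charge doubling of andreev reflection)."""
    return g0 * sum(andreev_transmission(T) for T in Ts)

def andreev_fano(Ts):
    """andreev fano factor beta_A = sum T_A (1 - T_A) / sum T_A,
    defined in analogy with the normal fano factor."""
    TA = [andreev_transmission(T) for T in Ts]
    return sum(t * (1 - t) for t in TA) / sum(TA)

# tunnel limit: T_A ~ T^2/4, and the noise is poissonian for doubled charge
assert isclose(andreev_transmission(0.01), 0.01**2 / 4, rel_tol=2e-2)
# fully open contact: no partitioning, so beta_A = 0
assert andreev_fano([1.0, 1.0]) == 0.0
```

the two limiting checks mirror the physics discussed in the text : coulomb effects track the partition noise , which vanishes for fully open contacts .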
let us assume that either the dimensionless andreev conductance @xmath34 is large , @xmath35 , or the temperature is sufficiently high ( though still smaller than @xmath36 ) . in either case one can describe the quantum dynamics of the phase variable @xmath37 within the quasiclassical approximation @xcite , which amounts to expanding @xmath26 in powers of the ( small ) `` quantum '' part of the phase @xmath38 . employing the above equations and expanding @xmath26 up to terms @xmath39 , we arrive at the andreev effective action @xcite @xmath40 where @xmath41 } \varphi^-(t')\varphi^-(t '' ) \nonumber\\ & & \times [ \beta_a \cos(2\varphi^+(t')-2\varphi^+(t '' ) ) + 1-\beta_a ] \label{sins}\end{aligned}\ ] ] and @xmath42 is the andreev fano factor , defined in complete analogy with the normal fano factor @xmath43 . we observe that the action @xmath26 takes exactly the same form as that for normal conductors @xcite , derived within the same quasiclassical approximation for the phase variable @xmath44 . in order to see the correspondence between the action of @xcite and that defined in eqs . ( [ finals])-([sins ] ) , one only needs to interchange the normal and andreev conductances as well as the corresponding fano factors @xmath45 and to account for an extra factor of 2 in front of the phase @xmath46 under @xmath47 in eq . ( [ sins ] ) . this extra factor implies doubling of the charge during andreev reflection . turning to superconducting contacts , we assume that the fluctuating phases @xmath48 are sufficiently small and perform a regular expansion of the exact effective action in powers of these phases . then we obtain @xmath49 where @xmath50 is the time - independent phase difference , @xmath51 defines the supercurrent across the system @xcite and @xmath52 , with both kernels @xmath53 and @xmath54 being real functions . the complete expressions for these functions turn out to be somewhat lengthy and for this reason are not presented here .
below we only emphasize some of the properties of @xmath53 and @xmath54 . to begin with , it is straightforward to verify that in the lowest order in barrier transmissions @xmath4 the result ( [ finalss])-([siii ] ) reduces to the standard aes action @xcite for tunnel barriers in the limit of small phase fluctuations . qualitatively new features emerge in higher orders in @xmath4 being directly related to the presence of subgap andreev levels @xmath55 inside the contact . consider , for instance , the kernel @xmath54 . it can be split into three contributions of different physical origin @xmath56 the first of these terms , @xmath57 , represents the subgap contribution due to discrete andreev states . the fourier transform of this term has the form @xmath58 [ \delta\left(\omega-2\epsilon_n(\chi)\right)+ \delta\left(\omega+2\epsilon_n(\chi)\right ) ] \bigg\}. \label{i1 } \end{aligned}\ ] ] it is obvious that this contribution is not contained in the aes action at all . the second term @xmath59 can be interpreted as the interference term between subgap andreev levels and quasiparticle states above the gap . in the low temperature limit @xmath60 the fourier transform of this term @xmath61 differs from zero only at sufficiently high frequencies @xmath62 . at higher temperatures @xmath63 , however , @xmath61 vanishes only for @xmath64 and remains non - zero otherwise . in the limit of small barrier transmissions this term scales as @xmath65 and , hence , is not contained in the aes action either . finally , the third term @xmath66 accounts for the contribution of quasiparticles with energies above the gap . in the high frequency limit @xmath67 or for @xmath68 this term reduces to the standard result for a normal conductor @xmath69 . turning now to the function @xmath53 in eq . ( [ srrr ] ) we note that its fourier transform can be represented as @xmath70 , where both @xmath71 and @xmath72 are real functions . 
the function @xmath71 is even in @xmath73 while @xmath72 is an odd function of @xmath73 , thus implying that the function @xmath53 is real . the functions @xmath53 and @xmath54 are not independent . for instance , the fourier transform @xmath72 is related to @xmath74 by means of the fluctuation - dissipation relation @xmath75 . the two functions @xmath71 and @xmath72 are in turn linked to each other by the causality principle : the function @xmath53 should vanish for @xmath76 . finally we would like to point out that with the aid of the above gaussian effective action one can easily evaluate the phase - phase correlation functions for our problem . combining eqs . ( [ finalss])-([siii ] ) with ( [ sc ] ) one finds ( cf . , e.g. @xcite ) @xmath77 now we will employ the above results in order to describe the effect of electron - electron interactions on transport properties of superconducting contacts . we start from ns systems . in this case in the absence of interactions we set @xmath78 and trivially recover the standard result @xmath79 . for the current fluctuations @xmath80 from eqs . ( [ finals])-([sins ] ) and ( [ corr ] ) analogously to @xcite we obtain @xmath81 this equation fully describes current noise in ns structures at energies well below the superconducting gap . for @xmath82 eq . ( [ sn1 ] ) reduces to the result @xcite while in the diffusive regime the correlator ( [ sn1 ] ) matches with the semiclassical result @xcite . let us now turn on interactions . in this case one should add the charging term ( [ sc ] ) to the action and account for phase fluctuations . proceeding along the same lines as in @xcite , for @xmath35 or max@xmath83 we get @xmath84 . \label{iv}\ ] ] where @xmath85 is the digamma function , @xmath86 and @xmath87 . the last term in eq . ( [ iv ] ) is the interaction correction to the i - v curve which scales with andreev fano factor @xmath88 in exactly the same way as the shot noise . 
thus , we arrive at an important conclusion : _ the interaction correction to the andreev conductance of ns structures is proportional to the shot noise power in such structures_. this fundamental relation between interaction effects and shot noise goes along with that established earlier for normal conductors @xcite , extending it to superconducting systems . in both cases this relation is due to the discrete nature of the charge carriers passing through the conductor . another important observation is that the interaction correction to the andreev conductance defined in eq . ( [ iv ] ) has exactly the same functional form as that for normal conductors , cf . eq . ( 25 ) in @xcite . furthermore , in the special case of diffusive systems we have @xmath89 , @xmath90 , and the only difference between the interaction corrections to the i - v curve in normal and ns systems is the charge doubling in the latter case . as a result , the coulomb dip in the i - v curve of a diffusive ns system at any given @xmath91 is exactly _ 2 times narrower _ than that in the normal case . we believe that this narrowing effect was detected in normal wires attached to superconducting electrodes @xcite , cf . fig . 3c in that paper . let us now turn to the electron - electron interaction correction to the equilibrium josephson current ( [ ichi ] ) . previously such a correction was analyzed for josephson tunnel barriers in the presence of linear ohmic dissipation @xcite . the task at hand is to investigate the interaction correction to the supercurrent in contacts with arbitrary transmission distribution . in order to evaluate the interaction correction it is necessary to go beyond the gaussian effective action ( [ finalss])-([siii ] ) and to evaluate the higher order contribution @xmath92 . it is easy to observe that the interaction correction to the supercurrent is provided by the following non - gaussian terms in the effective action : @xmath93 the function @xmath94 can be written as @xmath95 where @xmath96 .
the function @xmath97 can be expressed in a similar way . adding the non - gaussian terms ( [ 3rd ] ) to the action and employing eq . ( [ curr ] ) we arrive at the following expression for the interaction correction @xmath98 where the phase - phase correlators are defined in eq . ( [ phiphi ] ) . let us consider the first term in the right - hand side of eq . ( [ zs ] ) . it is easy to see that in the limit of low temperatures only frequencies @xmath99 contribute to the integral in eq . ( [ phiphi ] ) for @xmath100 while the contribution from the frequency interval @xmath101 vanishes . furthermore , the leading contribution from the first term in eq . ( [ zs ] ) is picked up logarithmically from the interval @xmath102 where @xmath103 and the function @xmath104 tends to a frequency independent value . after a straightforward but tedious calculation in the interesting frequency range @xmath105 from eq . ( [ st ] ) one finds @xmath106 this high - frequency term involves the factor @xmath107 , i.e. it vanishes for fully open conducting channels . combining eqs . ( [ as ] ) , ( [ yw ] ) with ( [ zs ] ) , we arrive at the expression for the supercurrent @xmath108 in the limit of low temperatures the interaction correction reads @xmath109 where @xmath110 is the dimensionless normal state conductance of the contact . this result is justified as long as the coulomb correction @xmath111 remains much smaller than the non - interacting term @xmath112 ( [ ichi ] ) . typically this condition requires the dimensionless conductance to be large @xmath113 . note that eq . ( [ intcor ] ) was derived only from the first term in eq . ( [ zs ] ) . the second term in this equation involving the function @xmath114 and the correlator @xmath115 can be treated analogously . it turns out to be smaller than that of the first term by the logarithmic factor @xmath116 . 
let us emphasize again an important property of the result ( [ intcor ] ) : the interaction correction contains the factor @xmath107 and , hence , vanishes for fully open barriers . in other words , _ no coulomb blockade of the josephson current is expected in fully transparent superconducting contacts_. the expression for the interaction correction ( [ intcor ] ) can further be specified in the case of diffusive contacts . in the absence of interactions the josephson current in such contacts follows from ( [ ichi ] ) and takes the form corresponding to the zero - temperature limit of a well known kulik - omelyanchuk formula for a short diffusive wire @xmath117 including interactions and averaging ( [ intcor ] ) with the bimodal transmission distribution @xmath118 one finds @xmath119 . \label{intdif}\end{aligned}\ ] ] note that the result ( [ intcor ] ) can formally be reproduced if one substitutes @xmath120 into eq . ( [ ichi ] ) , where @xmath121 and then expands the result to the first order in @xmath122 . interestingly , the same transmission renormalization ( [ tcor ] ) follows from the renormalization group ( rg ) equations @xcite @xmath123 derived for _ normal _ conductors . in order to arrive at eq . ( [ tcor ] ) one should just start the rg flow at @xmath124 and stop it at @xmath125 . thus , the result ( [ intcor ] ) can be interpreted in a very simple manner : coulomb interaction provides high frequency renormalization @xmath126 ( [ tcor ] ) of the barrier transmissions which should be substituted into the classical expression for the supercurrent ( [ ichi ] ) . it should be stressed , however , that the last step would by no means appear obvious without our rigorous derivation since the coulomb correction to the josephson current originates from the term @xmath127 in the effective action which is , of course , totally absent in the normal case . 
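the claim above that the bimodal ( dorokhov ) transmission distribution reproduces the kulik - omelyanchuk supercurrent can be checked numerically . the sketch below is ours , in units e = hbar = delta = 1 with the overall conductance normalization dropped ; the substitution u = sqrt(1 - t ) removes the integrable endpoint singularity of the distribution .

```python
import math

def channel_current(T, chi):
    """zero-temperature supercurrent of a channel with transmission T
    (units e = hbar = Delta = 1): i = T sin(chi) / (2 sqrt(1 - T sin^2(chi/2)))."""
    return T * math.sin(chi) / (2.0 * math.sqrt(1.0 - T * math.sin(chi / 2) ** 2))

def diffusive_current(chi, n=20000):
    """average of channel_current over the bimodal (dorokhov) distribution
    rho(T) = 1/(T sqrt(1-T)), written in the variable u = sqrt(1-T) so the
    integrand is smooth; conductance normalization is dropped."""
    s = math.sin(chi / 2)
    total = 0.0
    for k in range(n):
        u = (k + 0.5) / n    # midpoint rule on u in (0, 1)
        total += math.sin(chi) / math.sqrt(1.0 - s * s + (s * u) ** 2)
    return total / n

def kulik_omelyanchuk(chi):
    """closed form the average should reproduce in the same units and
    normalization: 2 cos(chi/2) artanh(sin(chi/2))."""
    return 2.0 * math.cos(chi / 2) * math.atanh(math.sin(chi / 2))

chi = 1.3
assert abs(diffusive_current(chi) - kulik_omelyanchuk(chi)) < 1e-3
```

the same average , performed with the renormalized transmissions , is how the interaction correction for diffusive contacts is obtained in the text .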
in this paper we derived a general expression for the effective action of superconducting contacts with arbitrary transmissions of the conducting channels . in the case of ns systems we described the interplay between coulomb blockade and andreev reflection and demonstrated a direct relation between shot noise and interaction effects in these structures . the fundamental physical reason behind this relation lies in the discrete nature of the charge carriers ( electrons and cooper pairs ) passing through ns interfaces . our results allow us to explain recent experimental findings @xcite . superconducting contacts with arbitrary channel transmissions show qualitatively new features as compared to the case of josephson tunnel barriers @xcite . the main physical reason for these differences is the presence of subgap andreev bound states inside the system . our results for the interaction correction might explain the rapid change between superconducting and insulating behavior recently observed @xcite in comparatively short metallic wires with resistances close to the quantum resistance unit @xmath128 k@xmath129 in - between two bulk superconductors . it was argued previously @xcite that such a superconductor - to - insulator crossover can be due to coulomb effects . our present results provide further quantitative arguments in favor of this conclusion . * acknowledgment * this work was supported in part by rfbr grant 09 - 02 - 00886 .

andreev a.f . 1964 _ sov . phys . jetp _ * 19 * 1228 .
blonder g.e . , tinkham m. , and klapwijk t.m . 1982 _ phys . rev . b _ * 25 * 4515 .
kulik i.o . 1970 _ sov . phys . jetp _ * 30 * 944 .
ishii c. 1970 _ progr . theor . phys . _ * 44 * 1525 .
beenakker c.w.j . 1991 _ phys . rev . lett . _ * 67 * 3836 .
galaktionov a.v . and zaikin a.d . 2002 _ phys . rev . b _ * 65 * 184507 .
schön g. and zaikin a.d . 1990 _ phys . rep . _ * 198 * 237 .
golubev d.s . and zaikin a.d . 2001 _ phys . rev . lett . _ * 86 * 4887 .
galaktionov a.v . , golubev d.s . , and zaikin a.d . 2003 _ phys . rev . b _ * 68 * 085317 ; _ ibid . _ * 68 * 235333 .
kindermann m. and nazarov yu.v . 2003 _ phys . rev . lett . _ * 91 * 136802 .
golubev d.s . and zaikin a.d . 2004 _ phys . rev . b _ * 69 * 075318 .
bagrets d.a . and nazarov yu.v . 2005 _ phys . rev . lett . _ * 94 * 056801 .
zaikin a.d . 1994 _ physica b _ * 203 * 255 .
snyman i. and nazarov yu.v . 2008 _ phys . rev . b _ * 77 * 165118 .
galaktionov a.v . and zaikin a.d . 2009 _ phys . rev . b _ * 80 * 174527 .
kulik i.o . and omelyanchuk a.n . 1978 _ sov . j. low temp . phys . _ * 4 * 142 .
golubev d.s . and zaikin a.d . 1999 _ phys . rev . b _ * 59 * 9195 .
de jong m.j.m . and beenakker c.w.j . 1994 _ phys . rev . b _ * 49 * 16070 .
nagaev k.e . and büttiker m. 2001 _ phys . rev . b _ * 63 * 081301 .
bollinger a.t . , rogachev a. , and bezryadin a. 2006 _ europhys . lett . _ * 76 * 505 .
bollinger a.t . , dinsmore iii r.c . , rogachev a. , and bezryadin a. 2008 _ phys . rev . lett . _ * 101 * 227003 .
arutyunov k.yu . , golubev d.s . , and zaikin a.d . 2008 _ phys . rep . _ * 464 * 1 .
we derive an effective action for contacts between superconducting terminals with arbitrary transmission distribution of conducting channels . in the case of normal - superconducting ( ns ) contacts we evaluate interaction correction to andreev conductance and demonstrate a close relation between coulomb effects and shot noise in these systems . in the case of superconducting ( ss ) contacts we derive the electron - electron interaction correction to the josephson current . at @xmath0 both corrections are found to vanish for fully transparent ns and ss contacts indicating the absence of coulomb effects in this limit .
a total of 781 consecutive diabetic patients , mean ± sem age 59 ± 0.5 years and diabetes duration 11.8 ± 0.4 years , treated at the diabetes outpatient clinic of the university clinics of vienna from 1 january 2006 to 17 february 2007 , were studied . upon entry into the study , a careful medical history was taken , with special focus on cardiovascular disease , and all patients were asked to complete the minnesota living with heart failure questionnaire and the dyspnoea score chart . blood pressure , heart rate , an electrocardiogram , and a blood sample for the determination of serum cholesterol , triglycerides , creatinine , a1c , blood glucose , and the markers were obtained from each patient , and new york heart association ( nyha ) stage was assessed . the study was conducted in accordance with the declaration of helsinki ii and was approved by the local ethics committee . total cholesterol , ldl cholesterol , hdl cholesterol , triglycerides , blood glucose , a1c , and serum creatinine were determined using standard laboratory procedures . glomerular filtration rate ( gfr ) was calculated by the cockcroft - gault and modification of diet in renal disease formulas , respectively . ct - proavp , ct - proet-1 , mr - proadm , and mr - proanp were determined from edta plasma of all patients at baseline with sandwich immunoluminometric assays ( all from b.r.a.h.m.s . , hennigsdorf , berlin , germany ) , as described before ( 47 ) . based on the short observation period , a composite end point consisting of unplanned hospitalization for cardiovascular disease or death was chosen as the primary end point of this study . unplanned cardiovascular events were defined as follows : hospitalization based on heart failure , myocardial infarction or unstable angina , symptomatic bradycardia , atrial fibrillation , ventricular tachycardia , peripheral and central arterial occlusive disease , transient ischemic attack , or stroke .
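the cockcroft - gault estimate mentioned above has a standard closed form ; the sketch below uses illustrative input values , not data from the study , and omits the mdrd formula , which requires additional fitted coefficients .

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """creatinine clearance (ml/min) by the cockcroft-gault formula:
    (140 - age) * weight / (72 * s_cr), multiplied by 0.85 for women."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

# illustrative values only (age roughly the study's mean of 59 years)
print(round(cockcroft_gault(59, 80, 1.0), 1))  # -> 90.0
```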
mortality data were obtained from the austrian central office of civil registration ( zentrales melderegister ) . if a patient had died , the date of death was recorded . information about hospitalizations for cardiovascular disease was obtained from hospital files by a cardiologist who was unaware of the results at index time . variables are expressed as means ± sem or mean [ 95% ci ] , as stated . the sample size calculation was based on an expected log hazard ratio ( hr ) of 0.5 and an expected event rate of 10% . for α = 0.05 and power > 90% , a sample size of 600 patients was obtained . for group comparisons of continuous variables , a two - tailed student 's t test was used . receiver operating characteristic analysis was performed to evaluate the predictive performance of mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp . a forced - entry model was used to evaluate the role of creatinine and the four peptide markers mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp as independent predictors of reaching the end point ( unplanned hospitalization due to a cardiovascular event and/or death ) . the variables nyha stage , ihd history , age , bmi , systolic blood pressure , a1c , ldl cholesterol , serum triglycerides , and serum creatinine were included in the models , with or without the addition of one of the four peptide markers . again , stepwise logistic regression models were calculated to identify independent variables predicting the reaching of the end point ( unplanned hospitalization due to a cardiovascular event and/or death ) . the p value for entering the stepwise model was prespecified ; the stepwise approach was used to determine the most potent predictors , independent of the number of events , out of the following variables : nyha stage , ihd history , age , bmi , systolic blood pressure , a1c , ldl cholesterol , serum triglycerides , serum creatinine , mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp .
in addition , 500 bootstrap repetitions were done for the cox regression model , repeating the variable selection for each sample using the same entering and exclusion rules . the results were retested in a forced - entry model . the proportional hazards assumption was assessed and satisfied for all variables based on a time - interaction test . to determine independent predictors of serum creatinine , stepwise linear regression was performed . parameters included in the model were age ( years ) , total and ldl cholesterol ( milligrams per deciliter ) , a1c ( percent ) , systolic and diastolic blood pressure rr ( millimeters of mercury ) , heart rate ( per minute ) , bmi ( weight in kilograms divided by the square of height in meters ) , ct - proavp ( picomoles per liter ) , ct - proet-1 ( picomoles per liter ) , mr - proadm ( nanomoles per liter ) , and mr - proanp ( picomoles per liter ) . statistical software spss for windows ( release 15.0 ; spss , chicago , il ) was used for analysis .
mortality data were obtained from the austrian central office of civil registration ( zentrales melderegister ) . if a patient had died , the date of death was recorded . information about hospitalizations for cardiovascular disease was obtained from hospital files by a cardiologist , unaware of the results at index time . variables are expressed as means sem or mean [ 95% ci ] , as stated . sample size calculation was based on an expected log hazard ratio ( hr ) of 0.5 and an expected event rate of 10% . for = 0.05 and power > 90% , a sample size of 600 patients was obtained . for group comparisons of continuous variables , a two - tailed student 's t test was used . receiver operating characteristic analysis was performed to evaluate the predictive performance of mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp . a forced - entry model was used to evaluate the role of creatinine and the four peptide markers mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp as independent predictors of reaching the end point ( unplanned hospitalization due to cardiovascular event and/or death ) . variables , nyha stage , ihd history , age , bmi , systolic blood pressure , a1c , ldl cholesterol , serum triglycerides , and serum creatinine were included in the models , with or without the addition of either one of the four peptide markers . again stepwise logistic regression models were calculated to identify independent variables to predict the reaching of the end point ( unplanned hospitalization due to cardiovascular event and/or death ) . the p value for entering the stepwise model the stepwise approach was used to determine the most potent predictors independent of the number of events out of the variables as follows : nyha stage , ihd history , age , bmi , systolic blood pressure , a1c , ldl cholesterol , serum triglycerides , serum creatinine , mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp . 
an overview of the demographic and metabolic parameters of the study group is given in table 1 . of the entire study population , 54 patients reached the composite end point during the observation period of up to 22 months ( 15 ± 6.9 months , median ± sd ) . they were significantly different from their event - free counterparts regarding age ( p < 0.0001 ) , serum creatinine ( p = 0.002 ) , and nyha classification ( p < 0.0001 ) , but not a1c , total and ldl cholesterol , systolic and diastolic blood pressure , or dyspnea score . in contrast to a1c , glucose , and serum lipids , mr - proadm ( 0.93 ± 0.07 vs. 0.62 ± 0.01 nmol / l , p < 0.0001 ) , mr - proanp ( 220.0 ± 25.6 vs. 86.8 ± 2.6 pmol / l , p < 0.0001 ) , ct - proet ( 197.6 ± 7.0 vs. 72.6 ± 0.9 pmol / l , p = 0.001 ) , and ct - proavp ( 16.0 ± 1.8 vs. 9.5 ± 0.5 pmol / l , p = 0.001 ) were all significantly higher in patients reaching the composite end point than in those who did not .
with increasing nyha stage , the serum levels of the four peptides also increased significantly ( p trend < 0.001 for all correlations , as assessed by spearman 's rank test ) . in addition , age , serum triglycerides , a1c , systolic rr , serum creatinine , and gfr ( calculated by both the cockcroft - gault and modification of diet in renal disease formulas ) were significantly associated with nyha stage ( p < 0.001 ) . in contrast , there was no significant correlation with serum cholesterol , ldl cholesterol , diastolic blood pressure , plasma glucose , or bmi . receiver operating characteristic analysis was used to evaluate the ability of the four marker peptides mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp to predict the primary end point . areas under the curve ( aucs ) of these models were 0.802 ± 0.034 ( mr - proanp ) , 0.698 ± 0.04 ( mr - proadm ) , 0.690 ± 0.038 ( ct - proavp ) , and 0.652 ± 0.048 ( ct - proet ) ( p < 0.001 for all four markers ) . using a cutoff value of 75 pmol / l , mr - proanp in this sample showed a sensitivity of 0.833 , a specificity of 0.576 , a positive predictive value ( ppv ) of 0.132 , and a negative predictive value ( npv ) of 0.978 . the corresponding values for the three other markers are as follows : mr - proadm ( cutoff 0.5 nmol / l ) : sensitivity 0.796 , specificity 0.395 , ppv 0.099 , and npv 0.960 ; ct - proavp ( cutoff 5 pmol / l ) : sensitivity 0.833 , specificity 0.406 , ppv 0.099 , and npv 0.972 ; and ct - proet ( cutoff 60 pmol / l ) : sensitivity 0.722 , specificity 0.325 , ppv 0.078 , and npv 0.942 . cox regression forced - entry analysis was used to identify independent predictors of reaching the composite end point . creatinine was an independent predictor of cardiovascular events but became insignificant upon addition of any of the four markers .
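the sensitivity , specificity , and predictive values above are linked through the event prevalence ( here 54 of 781 patients , roughly 6.9% ) via bayes ' rule . the python sketch below recomputes ppv and npv for the mr - proanp cutoff from the reported sensitivity and specificity ; it lands close to , though not exactly on , the published 0.132 and 0.978 , because the underlying patient counts behind the reported values are rounded .

```python
# ppv / npv from sensitivity , specificity , and prevalence ( bayes ' rule ) .
# inputs are the values reported for mr - proanp at the 75 pmol / l cutoff .
def predictive_values(sens, spec, prevalence):
    tp = sens * prevalence            # true positives ( as a fraction )
    fp = (1 - spec) * (1 - prevalence)  # false positives
    tn = spec * (1 - prevalence)      # true negatives
    fn = (1 - sens) * prevalence      # false negatives
    return tp / (tp + fp), tn / (tn + fn)  # ( ppv , npv )

ppv, npv = predictive_values(sens=0.833, spec=0.576, prevalence=54 / 781)
print(round(ppv, 3), round(npv, 3))  # 0.127 0.979
```

this dependence on prevalence is why the npv is so high here : with only ~7% of patients reaching the end point , even a moderately specific test rules out events well , while the ppv stays low .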
mr - proadm and mr - proanp , but not ct - proet or ct - proavp , were significant predictors of an event ( table 2 ) ( for a direct comparison of the five models , see supplementary tables a1 – a5 , available in an online appendix at http://care.diabetesjournals.org/cgi/content/full/dc08-2168/dc1 ) . in stepwise cox regression models , all four hormones remained significant markers of outcome ( data not shown ) . table 2 compares the association of known risk factors and the four markers with cardiovascular events in logistic regression models using the following parameters ( forced entry ) : nyha , age , serum creatinine , ldl cholesterol , serum triglycerides , a1c , systolic rr , and bmi ( model 1 ) and the same parameters plus ct - proavp ( model 2 ) , ct - proet ( model 3 ) , mr - proadm ( model 4 ) , or mr - proanp ( model 5 ) , respectively . when stepwise regression analysis was performed including all four hormones besides all risk parameters ( as mentioned in statistical analysis ) , only mr - proanp , together with nyha stage and ihd history , remained a significant predictor of cardiovascular event occurrence ; a 1-sd increment of mr - proanp was associated with a 1.564-fold risk ( 95% ci 1.360 – 1.798 , p < 0.001 ) ( supplementary table a6 , available in an online appendix ) . this result also held true if a forced - entry model was calculated ( data not shown ) . bootstrap testing supported the importance and robustness of the model ( supplementary table a7 ) . significant correlations were seen between serum creatinine ( or gfr , as calculated by the cockcroft - gault or mdrd formulas , respectively ) and the plasma markers , as determined by linear regression analysis ( r : mr - proadm 0.401 , mr - proanp 0.341 , ct - proet 0.319 , and ct - proavp 0.443 ) . the plasma markers also correlated significantly with each other ( not shown ) .
in a stepwise linear regression analysis ( table 3 ) , in which all hormones as well as classic risk factors were included , all four plasma markers remained independent predictors of serum creatinine . the adjusted r2 of this model for the association of variables with serum creatinine was 0.611 . variables not included in the model were age , total and ldl cholesterol , triglycerides , a1c , fasting glucose , bmi , and diastolic rr .
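the adjusted r2 reported for the creatinine model penalizes the ordinary r2 for the number of retained predictors , which matters in stepwise modeling where variables are added greedily . the snippet below shows the standard adjustment formula with illustrative inputs ( the paper does not report its unadjusted r2 or the exact predictor count , so r2 = 0.65 and p = 5 are assumptions ; only n = 781 comes from the study ) .

```python
# standard adjusted r - squared : 1 - ( 1 - r2 ) * ( n - 1 ) / ( n - p - 1 ) ,
# where n = observations and p = number of predictors in the model .
# r2 = 0.65 and p = 5 are illustrative ; only n = 781 comes from the paper .
def adjusted_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(adjusted_r2(0.65, n=781, p=5), 4))  # 0.6477
```

with n this large the penalty is mild ; for small samples or many predictors the adjusted value drops well below the raw r2 , which is why it is the preferred summary for stepwise - selected models .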
the main findings of this study are as follows : 1 ) in diabetic patients , progression to cardiovascular events is associated with the plasma levels of mr - proadm , ct - proavp , ct - proet-1 , and mr - proanp ; 2 ) mr - proanp and mr - proadm are significant independent predictors of cardiovascular events in this patient group ; and 3 ) all four markers are , independently of each other , predictors of serum creatinine ( and of gfr ) . plasma levels of all four markers were significantly higher in patients reaching the composite end point and significantly higher in patients with increasing severity of cardiac symptoms at baseline , as assessed by nyha stage . all four markers were useful as predictors of future cardiovascular events , with mr - proanp being the strongest . in this sample , the two markers mr - proadm and mr - proanp proved to be stronger predictors of an event than traditional risk markers . ct - proavp and ct - proet only remained significant in a stepwise but not in a forced - entry cox regression model . of note , all four parameters showed a very good negative predictive quality , a property that would be very valuable were these parameters to be used in a clinical setting . the information that a patient currently has a lower risk for the occurrence of a cardiovascular event over the next year helps clinicians to better target aggressive management and close monitoring to those who really have a higher risk . nh2 - terminal pro - brain natriuretic peptide ( nt - probnp ) , another natriuretic peptide marker , also showed a high npv in a population comparable to the sample presented here . again , nt - probnp was superior to traditional risk factors in the prediction of cardiovascular events ( 21 ) . mr - proanp , the strongest predictor in this sample , has been studied in comparison with nt - probnp as a survival predictor in the setting of chronic heart failure ( 12 ) and acute cardiac failure and has been found to be noninferior to this established parameter .
it has been shown repeatedly that diabetic nephropathy is a main risk factor for future cardiovascular events ( 22 ) . in confirmation of these observations , in this sample serum creatinine was significantly higher and glomerular filtration rate was significantly lower in those reaching the composite end point over the short time frame of a little more than 1 year . in addition , baseline serum creatinine was strongly associated with all four serum markers of vascular function , and all four remained independent predictors of serum creatinine in the stepwise linear regression model . although creatinine was a highly significant predictor of outcome in a model including traditional risk markers , it became insignificant if any one of the four markers was added . both adrenomedullin and endothelin may be produced by renal endothelial cells and have been studied in the context of nephropathy and renal failure ( 23 ) ; in this sample of diabetic patients , the stable serum markers mr - proadm and ct - proet-1 also correlated with gfr and serum creatinine . in another sample of type 2 diabetic patients , mr - proadm was increased in the presence of nephropathy and was related to insulin resistance ( 15 ) . the markers mr - proanp and ct - proavp have been studied in the context of sepsis and myocardial infarction ( 24 ) and have been used as outcome parameters regarding morbidity and mortality . these parameters , to our knowledge , have not been studied in patients with diabetes and in the context of diabetic nephropathy . however , ct - proavp ( copeptin ) has been shown to correlate negatively with gfr in chronic heart failure ( 10 ) , and mr - proanp was negatively correlated with serum creatinine in chronic heart failure ( 13 ) . thus , the close relationship of these parameters , both with mr - proadm and ct - proet as well as with serum creatinine , is noteworthy .
it is not known how the precursor peptides of the vasoactive peptides are cleared from the circulation , and decreased renal clearance is at least a partial explanation for the correlation of all four peptides with kidney function . alternatively or in addition , the increase in the plasma levels of the four peptides could also be explained by increased production due to endothelial stress . atrial natriuretic peptide , from whose precursor mr - proanp is derived , promotes natriuresis ; this natriuretic property is thought to be beneficial , and the fact that patients with heart failure do not have increased natriuresis but instead fluid retention and edema is thought to be due to ( renal ) resistance to the effect . thus , the upregulation of natriuretic peptides in heart failure is thought to be physiological and cardio- and renoprotective . the higher levels of these peptides in patients progressing to an event would therefore represent an increased compensating effort of the body . interestingly , in this study sample all four markers were independent predictors of serum creatinine in the multivariate linear regression analysis . this finding underscores the complex regulation of kidney function , with apparently each of the different peptide hormones ( vasopressin , adrenomedullin , endothelin , and atrial natriuretic peptide ) playing a distinct role in the diabetic kidney . in the study sample presented here , the stable markers of all four peptides correlated strongly with each other ( data not shown ) , despite their different origins , and all of them were higher in patients with a future event than in those who remained event free . to our knowledge , this is the first report of the four markers mr - proadm , ct - proet-1 , mr - proanp , and ct - proavp in a large sample of patients with diabetes .
the data presented here describe a close relationship between renal function parameters and the four serum markers , and a relationship of mr - proanp in particular , and to a lesser extent the other three markers , with the occurrence of cardiovascular events over a time frame of < 1 year in a cohort of diabetic patients . thus , these markers ( or , rather , the active peptides for which they are surrogate markers ) could be factors linking renal function to cardiovascular events .
objective : the increased cardiovascular risk in diabetes has been linked to endothelial and renal dysfunction . the aim of this study was to investigate the role of stable fragments of the precursors of adrenomedullin , endothelin-1 , vasopressin , and atrial natriuretic peptide in progression of cardiovascular disease in patients with diabetes . research design and methods : this was a prospective , observational study design with a composite end point ( death or unexpected admission to hospital due to a cardiovascular event ) in 781 patients with type 2 diabetes ( 54 events , median duration of observation 15 months ) . the four stable precursor peptides midregional proadrenomedullin ( mr - proadm ) , midregional proatrial natriuretic peptide ( mr - proanp ) , cooh - terminal proendothelin-1 ( ct - proet-1 ) , and cooh - terminal provasopressin or copeptin ( ct - proavp ) were determined at baseline , and their association with renal function and cardiovascular events was studied using stepwise linear and cox regression analysis and receiver operating characteristic analysis , respectively . results : mr - proadm , ct - proet-1 , ct - proavp , and mr - proanp were all elevated in patients with future cardiovascular events and independently correlated with serum creatinine . mr - proadm and mr - proanp were significant predictors of a future cardiovascular event , with mr - proanp being the stronger ( area under the curve 0.802 ± 0.034 , sensitivity 0.833 , specificity 0.576 , positive predictive value 0.132 , and negative predictive value 0.978 with a cutoff value of 75 pmol / l ) . conclusions : the four serum markers of vasoactive and natriuretic peptides are related to both kidney function and cardiovascular events , thus linking two major complications of diabetes , diabetic nephropathy and cardiovascular disease .
SECTION 1. DEFINITION. In this Act, the term ``Administrator'' means the Administrator of the Federal Emergency Management Agency. SEC. 2. MAINTAINING RISK PREMIUM RATES FOR PROPERTIES PURCHASED AFTER THE DATE OF ENACTMENT OF THE BIGGERT-WATERS FLOOD INSURANCE REFORM ACT OF 2012. (a) In General.--Section 1307(g) of the National Flood Insurance Act of 1968 (42 U.S.C. 4014(g)) is amended-- (1) by striking paragraph (2); and (2) by redesignating paragraphs (3) and (4) as paragraphs (2) and (3), respectively. (b) Effective Date.--Subsection (a) shall take effect as if enacted on the date of enactment of the Biggert-Waters Flood Insurance Reform Act of 2012 (Public Law 112-141; 126 Stat. 916). SEC. 3. DELAY IN FLOOD INSURANCE RATE CHANGES. (a) In General.--Any change in risk premium rates for flood insurance under the National Flood Insurance Program under the amendments made by sections 100205 and 100207 of the Biggert-Waters Flood Insurance Reform Act of 2012 (Public Law 112-141; 126 Stat. 917) to sections 1307 and 1308 of the National Flood Insurance Act of 1968 (42 U.S.C. 4014 and 4015) shall not take effect until-- (1) the date that is 180 days after the date on which the Administrator submits the report on affordability under section 100236(c) of the Biggert-Waters Flood Insurance Reform Act of 2012; or (2) if the Administrator determines that the report on affordability required under section 100236(c) of the Biggert- Waters Flood Insurance Reform Act of 2012 cannot be submitted by the date specified under such section 100236(c), the date that is 180 days after the date on which the Administrator submits the information under section 100236(e)(2) of such Act, as added by section 6 of this Act. (b) Effective Date.--Subsection (a) shall take effect as if enacted as part of the Biggert-Waters Flood Insurance Reform Act of 2012. SEC. 4. STUDIES OF VOLUNTARY COMMUNITY-BASED FLOOD INSURANCE OPTIONS. 
(a) Study.-- (1) Study required.--The Administrator shall conduct a study to assess options, methods, and strategies for making available voluntary community-based flood insurance policies through the National Flood Insurance Program. (2) Considerations.--The study conducted under paragraph (1) shall-- (A) take into consideration and analyze how voluntary community-based flood insurance policies-- (i) would affect communities having varying economic bases, geographic locations, flood hazard characteristics or classifications, and flood management approaches; and (ii) could satisfy the applicable requirements under section 102 of the Flood Disaster Protection Act of 1973 (42 U.S.C. 4012a); and (B) evaluate the advisability of making available voluntary community-based flood insurance policies to communities, subdivisions of communities, and areas of residual risk. (3) Consultation.--In conducting the study required under paragraph (1), the Administrator may consult with the Comptroller General of the United States, as the Administrator determines is appropriate. (b) Report by the Administrator.-- (1) Report required.--Not later than 18 months after the date of enactment of this Act, the Administrator shall submit to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives a report that contains the results and conclusions of the study conducted under subsection (a). (2) Contents.--The report submitted under paragraph (1) shall include recommendations for-- (A) the best manner to incorporate voluntary community-based flood insurance policies into the National Flood Insurance Program; and (B) a strategy to implement voluntary community- based flood insurance policies that would encourage communities to undertake flood mitigation activities, including the construction, reconstruction, or improvement of levees, dams, or other flood control structures. 
(c) Report by Comptroller General.--Not later than 6 months after the date on which the Administrator submits the report required under subsection (b), the Comptroller General of the United States shall-- (1) review the report submitted by the Administrator; and (2) submit to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives a report that contains-- (A) an analysis of the report submitted by the Administrator; (B) any comments or recommendations of the Comptroller General relating to the report submitted by the Administrator; and (C) any other recommendations of the Comptroller General relating to community-based flood insurance policies. SEC. 5. AMENDMENTS TO NATIONAL FLOOD INSURANCE ACT OF 1968. (a) Adequate Progress on Construction of Flood Protection Systems.--Section 1307(e) of the National Flood Insurance Act of 1968 (42 U.S.C. 4014(e)) is amended by inserting after the second sentence the following: ``Notwithstanding any other provision of law, in determining whether a community has made adequate progress on the construction, reconstruction, or improvement of a flood protection system, the Administrator shall not consider the level of Federal funding of or participation in the construction, reconstruction, or improvement.''. (b) Communities Restoring Disaccredited Flood Protection Systems.-- Section 1307(f) of the National Flood Insurance Act of 1968 (42 U.S.C. 4014(f)) is amended in the first sentence by striking ``no longer does so.'' and inserting the following: ``no longer does so, and shall apply without regard to the level of Federal funding of or participation in the construction, reconstruction, or improvement of the flood protection system.''. SEC. 6. AFFORDABILITY STUDY. Section 100236 of the Biggert-Waters Flood Insurance Reform Act of 2012 (Public Law 112-141; 126 Stat. 
957) is amended-- (1) in subsection (c), by striking ``Not'' and inserting the following: ``Subject to subsection (e), not''; (2) in subsection (d)-- (A) by striking ``Notwithstanding'' and inserting the following: ``(1) National flood insurance fund.--Notwithstanding''; and (B) by adding at the end the following: ``(2) Other funding sources.--To carry out this section, in addition to the amount made available under paragraph (1), the Administrator may use any other amounts that are available to the Administrator.''; and (3) by adding at the end the following: ``(e) Alternative.--If the Administrator determines that the report required under subsection (c) cannot be submitted by the date specified under subsection (c)-- ``(1) the Administrator shall notify, not later than 60 days after the date of enactment of this subsection, the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives of an alternative method of gathering the information required under this section; ``(2) the Administrator shall submit, not later than 180 days after the Administrator submits the notification required under paragraph (1), to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives the information gathered using the alternative method described in paragraph (1); and ``(3) upon the submission of information required under paragraph (2), the requirement under subsection (c) shall be deemed satisfied.''. SEC. 7. FACILITIES IN COASTAL HIGH HAZARD AREAS. (a) Definitions.--In this section-- (1) the term ``coastal high hazard area'' has the same meaning as in section 9.4 of title 44, Code of Federal Regulations, or any successor thereto; (2) the term ``eligible entity'' means an entity that receives a contribution under section 406 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 
5172); (3) the term ``essential to a community's recovery'' means, with respect to a structure or facility, that the structure or facility is associated with the basic functions of a local government, including public health and safety, education, law enforcement, fire protection, and other critical government operations; and (4) the term ``major disaster'' means a major disaster declared by the President under section 401 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5170). (b) Regulations.-- (1) Substantial improvements.--Notwithstanding section 9.4 of title 44, Code of Federal Regulations, an action relating to a structure or facility located in a coastal high hazard area for which an eligible entity received a contribution under section 406 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5172) shall be deemed to be a ``substantial improvement'' for purposes of part 9 of title 44, Code of Federal Regulations, if-- (A) the action involves the replacement of a structure or facility that-- (i) was located in the coastal high hazard area before the incident that caused the structure or facility to be totally destroyed; and (ii) is essential to a community's recovery from a major disaster; (B) there is no practicable alternative to locating a replacement structure or facility in the coastal high hazard area; (C) the replacement structure or facility conforms to the most recent Flood Resistant Design and Construction standard issued by the American Society of Civil Engineers, or any more stringent standard approved by the Administrator; and (D) the eligible entity develops evacuation and emergency response procedures to reduce the risk of loss of human life and operational disruption from a flood. 
(2) Relocation.-- (A) Relocation required.--The amendments under paragraph (1) shall provide that if the Administrator determines that there is a practicable alternative to the original site of a structure or facility described in paragraph (1) that is outside the coastal high hazard area and that provides better protection against the flood hazard or other hazards associated with coastal high hazard areas, the replacement structure or facility shall be relocated to the alternative site. (B) Relocation.--If a replacement structure or facility is relocated under subparagraph (A), the original site for the destroyed structure or facility shall be deed restricted in conformance with part 80 of title 44, Code of Federal Regulations. (C) No relocation.--If a replacement structure or facility is rebuilt at the same location, the eligible entity shall set aside an alternative parcel of land in the coastal high hazard area of equal or greater size, to be deed restricted in conformance with part 80 of title 44, Code of Federal Regulations, that the Administrator determines-- (i) provides better protection against floods; or (ii) promotes the restoration of natural and beneficial functions of coastal floodplains, including protection to endangered species, critical habitat, wetlands, or coastal uses. (3) Applicability.--This section shall apply with respect to any major disaster declared on or after the date of enactment of this Act.
Amends the National Flood Insurance Act of 1968 to repeal the prohibition against provision of flood insurance by the Administrator of the Federal Emergency Management Agency (FEMA) to prospective insureds at rates less than standard estimates for property purchased after enactment of the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters). (Thus allows risk premium rates lower than standard rates for certain property purchased after Biggert-Waters.) Delays the effective date of any flood insurance rate changes until 180 days after FEMA submits: (1) a certain report on methods to establish an affordability framework for the National Flood Insurance Program (NFIP), or (2) notice to the congressional committees concerned of an alternative method of gathering information for such report if the report cannot be submitted by its due date. Directs FEMA to study options, methods, and implementing strategies for making available voluntary community-based flood insurance policies through NFIP. Prohibits FEMA, when determining whether a community has made adequate progress on the construction, reconstruction, or improvement of a flood protection system, from considering the level of federal funding or participation. Deems an action for the repair, restoration, and replacement of a totally destroyed structure or facility located in a coastal high hazard area for which an eligible entity received a contribution under the Robert T. Stafford Disaster Relief and Emergency Assistance Act to be a "substantial improvement" for which grant funds may be used, if specified conditions are met. Requires a replacement structure or facility to be relocated to an alternative site if FEMA determines that a practicable alternative located outside the coastal high hazard area exists and provides better protection against hazards associated with coastal high hazard areas. 
Prescribes deed restrictions dedicating and maintaining property in perpetuity as open space for the conservation of natural floodplain functions: the original site, if the replacement structure or facility is relocated; or an alternative parcel of land in the coastal high hazard area, if the replacement is rebuilt at the same location.
Spontaneous spinal epidural hematoma (SSEH) represents 0.3-0.9% of spinal epidural space-occupying lesions. The clinical course is characteristic: without preceding trauma, patients experience an acute onset of local discomfort or pain, sometimes with radicular paresthesia. Within hours, signs of spinal cord compression appear, presenting as progressive paraplegia and loss of sensory function. Although SSEH is rare, it is important to recognize these signs, since early diagnosis and prompt surgical evacuation provide the maximum chance of functional recovery.

A 51-yr-old man with a history of myocardial infarction was treated with percutaneous transluminal coronary angioplasty. After discharge, he was followed in the department of internal medicine for three months and was taking 200 mg/day of aspirin. The patient had felt discomfort in the midback area and mild lower extremity weakness for 11 days before being admitted to the emergency room (ER). He ignored these symptoms, which did not interfere greatly with his daily activity, and on the 11th day after symptom onset he swam as usual in a pool for about 1 hr. After swimming, he felt worsening back pain and aggravated lower extremity weakness, so he visited the ER and was examined by a physician there. The physician noticed rapidly progressive paraplegia and a loss of leg sensory function and consulted the department of neurology. The neurologist, suspecting an acute spinal cord disorder, ordered high-dose steroid injection and serial neurologic examination, judging the patient's condition to be medical rather than surgical. The patient's neurologic status showed complete paraplegia of the lower extremities, T10-T12 dermatome hypesthesia, anesthesia below the L1 dermatome, no bladder function, absent patellar tendon reflexes (-/-), and absent ankle jerks (-/-).
His coagulation profile showed PT 0.85 (reference 0.9-1.10 INR) and aPTT 40.2 (reference 29.8-41.8 sec). The neurologist ordered MRI, which showed an epidural hematoma with spinal cord compression from T8 to L2 (Fig. 1, 2). Although it was already 28 hr after symptom onset, we performed bilateral decompressive laminectomy from T8 to L2 immediately. A large posterolateral hematoma and old blood clot were found within the epidural space (Fig.). Postoperatively, his neurologic status showed complete paraplegia, anesthesia below L1, no bladder function, and no deep tendon reflexes.

A 39-yr-old man was diagnosed with deep vein thrombosis (left common iliac, external iliac, superficial femoral, and common femoral veins) in the department of general surgery and was treated with catheter-directed urokinase infusion (1,100,000 units/14 hr). The following day, he underwent aspiration thrombectomy and stenting in the department of diagnostic radiology, followed by further catheter-directed urokinase infusion (1,000,000 units/6 hr). That evening, at 6:00 p.m., he felt lower back discomfort and mild pain, and the physician ordered an analgesic. Because the pain soon disappeared, the physician did not pursue it further. MRI scan showed an epidural hematoma with spinal cord compression from T11 to L2 (Fig. 4, 5). The patient's neurologic status showed complete motor loss except for trace right iliopsoas muscle power, L1-2 dermatome hypesthesia, anesthesia below the L3 dermatome, absent perianal sensation, absent patellar tendon reflexes (-/-), and absent ankle jerks (-/-). His coagulation profile showed PT 1.3 (reference 0.9-1.10 INR) and aPTT 34.8 (reference 28-40 sec). Postoperative MRI showed an expanded dural sac and cord, but no neurologic improvement was noted after surgery.
On follow-up at 12 months, his neurologic status showed complete motor loss except for trace power in the right iliopsoas, quadriceps, and tibialis anterior muscles, with no sensory recovery, no bladder function, and no deep tendon reflexes.

Approximately 350 cases of SSEH have been reported in the medical literature, and the annual incidence has been estimated at 0.1 per 100,000. Predisposing factors such as inherited coagulopathy, spinal vascular malformation, anticoagulant therapy, therapeutic thrombolysis, and epidural analgesia have been implicated, along with hypertension and antiplatelet therapy (1, 2).
The clinical course is characteristic: without preceding trauma, patients experience an acute onset of local discomfort or pain, sometimes with radicular paresthesia. Within hours, signs of spinal cord compression appear, and paraplegia and loss of sensory function progress. Despite this characteristic syndrome, the differential diagnosis of SSEH includes spinal abscess, tumor, ischemia, transverse myelitis, and acute vertebral disc disease (3). Because the results of operative decompression of the spinal cord depend mainly on the duration of symptoms, time lost on diagnostic measures can negatively influence the functional outcome. Currently, MRI is the first-choice diagnostic method for spinal emergencies because it allows rapid evaluation of large parts of the vertebral column and the spinal cord. The prognosis of neurologic deficits depends predominantly on the time interval between the onset of symptoms and surgical decompression (2-6). In our patients, diagnosis was delayed, so the preoperative neurologic status deteriorated and, unfortunately, no improvement of symptoms followed. Neither patient nor doctor should take early but mild signs lightly, since doing so can delay the diagnosis, as in our two cases. In SSEH, small signs such as minor pain and slight discomfort appear first, followed by rapid neurologic deterioration; these small signs should therefore be weighed prudently, and MRI should be carried out immediately to reach a quick and accurate diagnosis before the patient's neurologic state worsens. The prognosis appears to be related to the severity of the preoperative neurologic deficit, and early operation for rapid decompression of the spinal cord is crucial. Lawton et al. (7) confirmed the relationship between neurologic recovery, the timing of surgery, and the preoperative neurologic status.

They showed that functional recovery correlates inversely with the duration of spinal cord compression and with the degree of severity. Our patients' symptoms evolved rapidly, and by the time of surgical decompression, more than 24 hr after onset, they were completely paraplegic. There is evidence to suggest that functional recovery might have been better if the preoperative deficit had been less severe. Surgery performed more than 12 hr after symptom onset is unlikely to be successful (6), and immediate drainage is recommended by most surgeons. In our patients, we believe the unsuccessful result was due to the complete preoperative paraplegia caused by delayed diagnosis. The time interval between symptom onset and surgery is determined by how long it takes the patient to recognize symptoms and enter the medical system, and by how long it takes the physicians to recognize the clinical signs and obtain radiological studies. Although physicians can do nothing about the patient's response time, they can accelerate the diagnostic evaluation, thereby minimizing the surgical delay. The outcome depends mainly on the time from symptom onset to operation, so early and accurate diagnosis, through careful history taking and immediate MRI evaluation, is necessary. The initial symptoms may be only mild back pain and discomfort; these should not be ignored, and close observation should be maintained to support an early and accurate diagnosis.
We present two patients who had acute paraplegia with sensory loss due to spontaneous spinal epidural hematoma (SSEH). One had myocardial infarction and the other deep vein thrombosis; the former was treated with antiplatelet agents and the latter with a thrombolytic agent. We compared the neurologic status of each patient between the preoperative and postoperative state. Postoperatively, neither showed improvement of neurologic symptoms, and at 12 months of follow-up one showed no neurologic improvement while the other showed only an insignificant improvement of lower extremity muscle power (trace knee extensor/ankle dorsiflexor). We attribute this poor outcome to delayed operation, performed more than 24 hr after symptom onset. The outcome in SSEH is essentially determined by the time from symptom onset to operation; therefore, early and precise diagnosis, through careful history taking and MRI evaluation, is necessary.
The triceps tendon usually ruptures in a fall on an outstretched hand with the elbow in incomplete extension, with or without a concomitant blow to the posterior aspect of the elbow.[1] Acute rupture of the triceps following trauma usually occurs at the osteotendinous junction, whereas rupture at the myotendinous junction occurs less often.[2] Predisposing factors include local steroid injection, olecranon bursitis, and hyperparathyroidism.[3] Careful examination of the active range of motion of the elbow determines the character of the tear, whether partial or complete.[4] Initial diagnosis may be difficult because a palpable defect is not always present and pain may limit motion. When the diagnosis is missed, prolonged disability of the extensor mechanism of the elbow follows. MRI also plays a vital role in diagnosing this condition and determining its character.[5] Here, we report a chronic triceps insufficiency managed with extensor carpi radialis longus and palmaris longus tendon grafts.

A 25-year-old male carpenter presented one year after a significant fall in which he sustained multiple injuries. The major disabling injury was a fracture of both bones of the left forearm, treated with open reduction and internal fixation. The patient also reported pain and swelling in the posterior aspect of the right elbow following the initial trauma. Soon after the swelling subsided, he noticed a depression in the posterior aspect of the elbow just above the olecranon, for which he never sought any intervention. When he returned to his work as a carpenter after his other fractures healed, he found his activity limited: he was unable to hammer and lacked elbow extension. Clinical examination revealed a depression just above the olecranon at the osteotendinous junction of the triceps.

He could perform all activities involving flexion, pronation, and supination, but extension depended on gravity alone. Radiographs revealed avulsion of a fleck of bone from the olecranon that had migrated proximally, consistent with the chronicity of the condition [Figure 1: plain X-ray of the elbow (anteroposterior and lateral views) showing avulsion of a fleck of bone from the olecranon (white arrow)]. Surgery was planned to fill the defect in the triceps and to reinforce it to the olecranon. Exposure of the tendon through a midline posterior approach showed a gap of 7 cm [Figure 2a]. The extensor carpi radialis longus tendon was released from its insertion through a small horizontal incision over the base of the second metacarpal. The palmaris longus tendon was released at the level of the flexor crease of the wrist and also pulled out through a proximal incision. [Figure 2: (a) intraoperative picture showing the insufficient triceps; (b) anchoring the graft to the olecranon; (c) post-Pulvertaft weaving of the graft to the triceps.] After harvest, a double-stranded graft measuring 15 cm in length and 6 mm in width was made and sutured together. The double-stranded graft was passed through the tunnel with equal length on both sides. The grafts on either side were brought together proximally, making four strands, which were sutured as a single unit [Figure 2b]. The proximally retracted triceps was anchored to the quadruple strand of the graft by the Pulvertaft weave technique [Figure 2c]. A full range of motion of the elbow was checked, and an above-elbow plaster slab was applied with the elbow in 15° of flexion. The slab was maintained for 6 weeks to allow adequate tendon-to-bone healing. The patient then regained a full range of motion of the elbow [Figure 3].
On assessment of function using the Mayo elbow performance score, the patient obtained the maximum score of 100, reflecting full functional status.[6] [Figure: clinical photograph showing postoperative range of motion.] The study was reviewed by the appropriate ethics committee and was therefore performed in accordance with the ethical standards laid down in an appropriate version of the 1964 Declaration of Helsinki. The tear usually occurs when an eccentric load is applied to a contracting triceps, most commonly during sports.[7,8] The best management is to avoid misdiagnosis and to treat the condition at the earliest opportunity.[9] Acute ruptures are successfully managed with nonabsorbable sutures passed through drill holes in the olecranon and anchored to the triceps.[10] A chronic rupture is difficult to reattach because of the retracted muscle belly.[9] The literature describes various methods for correcting large triceps insufficiencies, including a turn-down flap of the triceps, an anconeus rotation flap, an Achilles tendon allograft, and a hamstring tendon autograft.[11,12] The use of extensor carpi radialis longus and palmaris longus to compensate for an insufficient triceps has not been reported previously. The extensor carpi radialis longus muscle has been used in various corrective hand surgery procedures and is a suitable muscle for correction of finger clawing.[13] Similarly, the palmaris longus tendon is used in various procedures without significant functional impairment of the donor site.[14] The easy availability of these tendons under regional anesthesia from the same limb, without significant functional impairment of the donor site, led to their choice for our procedure. Tendon-to-bone healing is considered more secure, successful, and functionally adaptive, and hence was preferred in this case.[15-17] The diameter of the tendon was fitted to the size of the tunnel to prevent wear and tear of the tendon.

The limb was immobilized until adequate tendon-to-bone healing was achieved. The patient achieved a complete range of motion after 10 weeks with adequate physiotherapy. At the end of rehabilitation, the patient had almost full strength of extension. Functionally, as graded by the Mayo elbow scoring system, the patient had the maximum score and full function of the affected elbow.[6]
Chronic triceps insufficiency, causing prolonged disability, results from a missed diagnosis of an acute rupture. We report a 25-year-old male with a history of a significant fall in which he sustained multiple injuries. Since then, he had been unable to extend his right elbow, for which he sought intervention after a year. The diagnosis of triceps rupture was made clinicoradiologically, and surgery was planned. Intraoperative findings revealed a deficient triceps with a fleck of bone avulsed from the olecranon. An ipsilateral double tendon graft comprising extensor carpi radialis longus and palmaris longus was anchored to the triceps and secured to the olecranon. Six-month follow-up revealed complete active extension of the elbow and full function at the donor site.
Polarization is one of the clearest signatures of synchrotron radiation, if this is produced by electrons gyrating in a magnetic field that is at least in part ordered. For this reason, polarization measurements can provide a crucial test of the synchrotron shock model @xcite, the leading scenario for the production of the burst and, in particular, the afterglow photons. Attempts to measure the degree of linear polarization yielded only an upper limit (@xmath0 for GRB 990123 @xcite), until the observations of the afterglow of the burst of May 10, 1999. A small but significant amount of polarization was detected (@xmath1 @xcite) @xmath2 hours after the BATSE trigger and confirmed in a subsequent observation two hours later @xcite. Even if synchrotron radiation can naturally account for the presence of linearly polarized light in a GRB afterglow, a significant degree of anisotropy in the magnetic field configuration or in the fireball geometry is required. If, in fact, the synchrotron emission is produced in a fully symmetrical set-up, all the polarization components average out, giving a net unpolarized flux. The presence of partially ordered magnetic fields (in causally disconnected domains) has been discussed by Gruzinov & Waxman @xcite; however, their model overpredicts, in its simplest formulation, the observed amount of polarization. Here we discuss a different possibility, in which the asymmetry is provided by a collimated fireball observed off axis, while the magnetic field is tangled in the plane perpendicular to the velocity vector of the fireball expansion. Indeed, the smooth break in the lightcurve of GRB 990510 @xcite has been interpreted as due to a collimated fireball observed slightly off axis. GRB 990510 was detected by BATSE on board the Compton Gamma Ray Observatory and by the _BeppoSAX_ Gamma Ray Burst Monitor and Wide Field Camera on 1999 May 10.36743 UT @xcite.

Its fluence (2.5@xmath3 erg @xmath4 above 20 keV) was relatively high @xcite. Follow-up optical observations started @xmath5 hr later and revealed an @xmath6 @xcite optical transient (OT). The OT initially showed a fairly slow flux decay @xmath7 @xcite, which gradually steepened; Vreeswijk et al. @xcite detected Fe II and Mg II absorption lines in the optical spectrum of the afterglow. This provides a lower limit of @xmath8 on the redshift, and a @xmath9-ray energy of @xmath10 erg in the case of isotropic emission. We observed the OT associated with GRB 990510 @xmath2 hours after the gamma-ray trigger at the ESO VLT-Antu (UT1) in polarimetric mode, performing four 10-minute exposures in the R band at four angles (@xmath11, @xmath12, @xmath13, and @xmath14) of the retarder plate @xcite. The average magnitude of the OT in the four exposures was @xmath15. Relative photometry with respect to all the stars in the field was performed, and each pair of simultaneous measurements at orthogonal angles was used to compute the points in Fig. [fig1] (left panel) (see @xcite for details). The parameter @xmath16 is related to the degree of linear polarization @xmath17 and to the position angle of the electric field vector @xmath18 by @xmath19. @xmath17 and @xmath18 are evaluated by fitting a cosine curve to the observed values of @xmath16. The derived linear polarization of the OT of GRB 990510 is @xmath20% (1@xmath21 error), at a position angle of @xmath22. Fig. [fig1] (left panel) shows the data points and the best-fit @xmath23 curve. The statistical significance of this measurement is very high. A potential problem is represented by "spurious" polarization introduced by dust grains interposed along the line of sight, which may be preferentially aligned in one direction.
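The cosine fit described above can be sketched numerically. The retarder-plate angles (0°, 45°, 90°, 135°) and the synthetic signal below are illustrative stand-ins for the elided values in the text, not the actual VLT data; the procedure is the standard linear least-squares extraction of the polarization degree P and position angle θ from an S(ψ) = P cos 2(ψ − θ) modulation:

```python
import numpy as np

def fit_polarization(psi, s):
    """Least-squares fit of s(psi) = P * cos(2 * (psi - theta)).

    Expanding the cosine gives s = q*cos(2*psi) + u*sin(2*psi), which is
    linear in (q, u) = (P cos 2theta, P sin 2theta); then
    P = hypot(q, u) and theta = 0.5 * atan2(u, q), modulo 180 degrees.
    """
    A = np.column_stack([np.cos(2 * psi), np.sin(2 * psi)])
    (q, u), *_ = np.linalg.lstsq(A, s, rcond=None)
    P = np.hypot(q, u)
    theta = (0.5 * np.arctan2(u, q)) % np.pi  # position angle in [0, pi)
    return P, theta

# Synthetic example: P = 1.7%, position angle 101 degrees (numbers chosen
# to mimic the GRB 990510 measurement, not the real data points).
psi = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0]))
P_true, theta_true = 0.017, np.deg2rad(101.0)
s = P_true * np.cos(2 * (psi - theta_true))
P_fit, theta_fit = fit_polarization(psi, s)
```

Because S is linear in (q, u), the fit needs no nonlinear optimization, and the position angle is recovered only modulo 180°, as expected for linear polarization.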
The normalization of the OT measurements to the stars in the field already corrects for the average interstellar polarization of these stars, even if this does not necessarily account for all the effects of the Galactic ISM along the line of sight to the OT (e.g., the ISM could be more distant than the stars and thus not induce any polarization of their light). To check this possibility, we plot in Fig. [fig1] (right panel) the degree of polarization vs. the instrumental position angle for each star and for the OT. It is apparent that, while the position angles of all stars are consistent with being the same (within 10 degrees), the OT clearly stands out. The polarization position angle of stars close to the OT differs by @xmath24 from the position angle of the OT. This is contrary to what one would expect if the polarization of the OT were due to the Galactic ISM. Polarization induced by absorption in the host galaxy can be constrained to be @xmath25, owing to the lack of any absorption in the optical filters in addition to the local value (see @xcite for more details). We therefore conclude that the OT, even if contaminated by interstellar polarization, must be intrinsically polarized to give the observed orientation. We consider a slab of magnetized plasma in which the configuration of the magnetic field is completely tangled if the slab is observed face on, while it has some degree of alignment if the slab is observed edge on. Such a field can be produced by compression in one direction of a volume of 3D tangled magnetic field @xcite or by the Weibel instability @xcite. If the slab is observed edge on, the radiation is therefore polarized at a level @xmath26 which depends on the degree of order of the field in the plane. If the emitting slab moves in the direction normal to its plane with a bulk Lorentz factor @xmath27, we have to take into account the relativistic aberration of photons.

This effect causes photons emitted at @xmath28 in the (primed) comoving frame @xmath29 to be observed at @xmath30 (see also @xcite). We assume that the fireball is collimated into a cone of semi-aperture angle @xmath31, and that the line of sight makes an angle @xmath32 with the jet axis. As long as @xmath33, the observer receives photons from a circle of semi-aperture angle @xmath34 around @xmath32. Consider the edge of this circle: radiation coming from each sector is highly polarized, with the electric field oscillating in the radial direction (see @xcite for more details). As long as we observe the entire circle, the configuration is symmetrical, making the total polarization vanish. However, if the observer does not see part of the circle, some net polarization survives in the observed radiation. This happens if a beamed fireball is observed off axis, when @xmath35. At the beginning of the afterglow, when @xmath27 is large, the observer sees only a small fraction of the fireball and no polarization is observed. At later times, when @xmath27 becomes smaller than @xmath36, the observer sees only part of the circle centered on @xmath32: there is then an asymmetry, and a corresponding net polarized flux. To understand why the polarization angle in this configuration is horizontal, consider that the part of the circle which is not observed would have contributed to the polarization in the vertical direction. At later times, as the fireball slows down even more, a larger area becomes visible. When @xmath37, the dominant contribution to the flux comes from the upper regions of the fireball, which are vertically polarized. The change of the position angle happens when the contributions from horizontal and vertical polarization are equal, resulting in a vanishing net polarization. At still later times, when @xmath38, light aberration vanishes, the observed magnetic field is completely tangled, and the polarization disappears.
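The relativistic aberration invoked above follows the standard transformation cos θ = (cos θ′ + β)/(1 + β cos θ′), a textbook result used here because the original expressions are elided. A minimal numerical check shows that a photon emitted sideways in the comoving frame reaches the observer within roughly 1/Γ of the velocity vector, which is why an observer initially sees only a small patch of the fireball:

```python
import math

def observed_angle(theta_comoving, gamma):
    """Observer-frame angle of a photon emitted at theta_comoving (radians)
    in a frame moving with bulk Lorentz factor gamma, from the standard
    aberration formula cos(theta) = (cos(theta') + beta) / (1 + beta cos(theta'))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    cos_t = (math.cos(theta_comoving) + beta) / (1.0 + beta * math.cos(theta_comoving))
    return math.acos(cos_t)

gamma = 100.0
# A photon emitted at 90 degrees in the comoving frame is observed at
# approximately 1/gamma: the emission is beamed into a narrow forward cone.
theta_obs = observed_angle(math.pi / 2.0, gamma)
```

The same formula shows the beaming cone widening as Γ drops during the afterglow, which is what gradually exposes the edge of the jet and produces the polarization evolution described in the text.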
Figure [fig2] shows the result of the numerical integration of the appropriate equations (see @xcite for the detailed discussion). As derived in the qualitative discussion above, the lightcurve of the polarized fraction shows two maxima, with the position angle rotated by 90@xmath39 between them. It is interesting to note the link with the lightcurve. The upper panel of Fig. [fig2] shows the lightcurve of the total flux divided by the same lightcurve under the assumption of spherical geometry. As expected, the lightcurve of the beamed fireball shows a break with respect to the spherical one. A larger off-axis ratio produces a more gentle break in the lightcurve and is associated with a larger value of the polarized fraction. The behaviour of the total flux and of the polarization lightcurves allows us to constrain the off-axis ratio @xmath40, but is insensitive to the absolute value of the beaming angle @xmath41. Therefore, even if we could densely sample the polarization lightcurve, the beaming angle could be derived only by assuming a density for the interstellar medium, i.e. a relation between the observed time and the braking law of the fireball. On the other hand, the detection of a @xmath42 rotation of the polarization angle of the afterglow would be the clearest sign of beaming of the fireball, especially if associated with a smooth break in the lightcurve. Polarimetric follow-up of afterglows is hence a powerful tool to investigate the geometry of fireballs.

Axelrod T., Mould J., and Schmidt B., GCN 315 (1999)
Covino S. et al., A&A, 348, L1 (1999)
Dadina M. et al., IAUC 7160 (1999)
Ghisellini G. and Lazzati D., MNRAS, 309, L7 (1999)
Gruzinov A. and Waxman E., ApJ, 511, 852 (1999)
Hjorth J. et al., Science, 283, 2037 (1999)
Israel G. L. et al., A&A, 348, L5 (1999)
Kippen R. M., GCN 322 (1999)
Laing R. A., MNRAS, 193, 439 (1980)
Medvedev M. V. and Loeb A., ApJ submitted (astro-ph/9904363)
Mészáros P. and Rees M. J., ApJ, 476, 232 (1997)
Vreeswijk P. M. et al., GCN 324 (1999)
Wijers R. A. M. J. et al., ApJ, 523, L33 (1999)
We present the recent discovery of linear polarization in the optical afterglow of GRB 990510. Effects that could introduce spurious polarization are discussed, showing that they do not apply to the case of GRB 990510, which is therefore intrinsically polarized. We show that this observation constrains the emission mechanism of the afterglow radiation, the geometry of the fireball, and the degree of order of the magnetic field. We then present theoretical interpretations of this observation, with particular emphasis on the possibility of observing polarization in beamed fireballs.
The experimental realization of annular trapping potentials @xcite has recently led to the observation of persistent superfluid flow in a multiply connected geometry @xcite. The question of the stability of these superfluid currents is a complex matter and depends on the nature of the dynamical excitations available to the system. The current consensus is that stability is limited by the penetration of vortices through the edge of the superfluid, with a concomitant change in the phase of the superfluid order parameter. Several theoretical studies support this scenario @xcite. However, underlying these dynamical instabilities is the inherent metastability of the superfluid system. As emphasized by Bloch @xcite, this metastability is revealed through the energy of the superfluid as a function of its angular momentum, its so-called yrast spectrum @xcite. In this paper, we investigate the yrast spectrum of a two-component Bose gas in the ring geometry. Specifically, we have in mind the situation in which the atoms are confined to a torus where the transverse confinement is so tight that the system is effectively one-dimensional. When the two species have equal masses @xmath0, it can be shown quite generally @xcite that the yrast spectrum takes the form

E_0(L) = \frac{L^2}{2MR^2} + e_0(L),

where @xmath1 is the radius of the ring, @xmath2 is the total angular momentum and @xmath3 is the total mass of the system. Here, @xmath4 is the total number of atoms of type @xmath5 and @xmath6. The function @xmath7 has inversion symmetry and possesses the periodicity property @xmath8. The above properties of the yrast spectrum are independent of the detailed nature of the inter-particle interactions @xcite. For contact interactions, the interactions can be characterised by the dimensionless parameters @xmath9, where the subscripts @xmath10 and @xmath11 take on the values @xmath5 and @xmath6. (A detailed definition of these interaction parameters is given in Sec. II.)

However, as shown in several previous studies @xcite, the _mean-field_ yrast spectrum of the two-component system, with the added restriction that all interaction strengths have a common value @xmath12, exhibits two additional properties. First, the part of the spectrum in the fundamental range @xmath13, where @xmath14, is not in general an analytic function of the (dimensionless) angular momentum per particle. In particular, the derivative of the spectrum is found to exhibit discontinuities at @xmath15, where @xmath16 is the minority concentration and @xmath17. The number of discontinuities @xmath18 depends on the two relevant parameters of the model, namely @xmath12 and @xmath19. More specifically, it was established that @xmath18 derivative discontinuities occur when the coordinate @xmath20 lies within a region bounded by the two critical curves @xmath21 and @xmath22 in the @xmath12-@xmath19 plane @xcite. These curves are illustrated by the solid lines in Fig. [gammaxb] for @xmath23. Importantly, the point of non-analyticity, @xmath24, is associated with the condensate wavefunctions having a plane-wave form, @xmath25, where @xmath26. As a result, the relevant critical @xmath27 curves in Fig. [gammaxb] can also be viewed as defining the regions within which plane-wave yrast states emerge. For values of @xmath28 other than these special values, the yrast state is in general a soliton state. Now, because of the periodicity and inversion symmetry of @xmath7, a derivative discontinuity at @xmath29 implies discontinuities at @xmath30 as well, where @xmath31 is any non-zero integer. The yrast states at these angular momenta are @xmath32. The second important property of the yrast spectrum concerns a subset of such non-analytic points, namely those at @xmath33.
it can be shown @xcite that the yrast spectrum has local minima at these angular momenta ( and only these among all possible @xmath30 ) for any integer @xmath18 provided @xmath12 exceeds a critical interaction strength . the resulting expression provides another set of critical @xmath21 curves , which are indicated by the dashed lines in fig . [ gammaxb ] . since a local minimum at @xmath34 necessarily implies that the corresponding state @xmath35 is already an yrast state , @xmath36 is thus a sufficient , but not necessary , condition for the existence of plane - wave yrast states . this is reflected in the fact that the dashed curves in fig . [ gammaxb ] are displaced to the right of the solid curves ; with increasing @xmath12 for a fixed @xmath19 , one first crosses a solid curve at which point some plane - wave state becomes an yrast state , and then the dashed curve beyond which this plane - wave state is a local minimum . the existence of an energy minimum is of particular significance since , as argued by bloch @xcite , it implies the possibility of persistent superfluid flow . thus , the condition @xmath36 can be taken as the stability condition for persistent currents at the angular momentum @xmath37 . [ fig . [ gammaxb ] caption : ... and @xmath22 , the plane - wave states @xmath38 , with @xmath39 are yrast states . the critical curve for @xmath40 , @xmath41 , is not shown . the dashed lines are critical curves defining the regions in which the @xmath42 state supports persistent currents . ] the above conclusions , pertaining to a system with symmetrical interaction strengths , were reached with the aid of analytic soliton solutions to the coupled gross - pitaevskii equations , from which the full yrast spectrum could be determined @xcite . these conclusions were confirmed by smyrnakis _ et al . _ @xcite using an alternative approach . 
since the symmetrical model is rather special , it is unclear whether the aforementioned properties of the mean - field yrast spectrum remain valid for the asymmetrical model in which the interparticle interactions take on different values . this is the main question to be addressed in this paper . to answer it we adopt the strategy , motivated by the symmetrical model , of examining the mean - field energy functional in the vicinity of plane - wave states . even though analytic soliton solutions are not known for the asymmetrical model , we are able to use a perturbative analysis to determine the general behaviour of the energy functional near plane - wave states and obtain critical conditions analogous to those displayed in fig . [ gammaxb ] . since all of the results for the symmetrical model are recovered by this approach , we are confident that the conditions we derive do in fact determine the stability of persistent currents in the asymmetrical model . the rest of the paper is organized as follows . in sec . [ stability ] , we derive inequalities involving the system parameters which establish whether a given plane - wave state is a local minimum of the gross - pitaevskii energy functional . such a state has a specific angular momentum @xmath28 . we then argue that the lowest - energy plane - wave state having this angular momentum is a global minimum if the inequalities for this state are satisfied and , hence , is an yrast state . these predictions are then checked against known limiting situations , including that of the symmetric model . in sec . [ critical_conditions ] we then analyze in more detail the behaviour of the yrast spectrum in the vicinity of a plane - wave state corresponding to a global minimum . we first establish that the yrast spectrum has a derivative discontinuity at the angular momentum @xmath28 of this state . 
the instability of persistent currents at this angular momentum is then signalled by the critical condition that one of the slopes of the yrast spectrum vanishes . at this point , the plane - wave state is no longer a local minimum . we then obtain a subsequent critical condition for the disappearance of the derivative discontinuity . this condition provides a bound for the plane - wave state to be an yrast state . we conclude this section with some applications of these critical conditions in the determination of plane - wave yrast states . the main results of the paper are summarized in the final section . the system of interest is a two - species bose gas consisting of @xmath43 particles of type @xmath5 and @xmath44 particles of type @xmath6 confined to a ring of radius @xmath1 . we assume that the two species have the same mass @xmath0 . within a mean - field description , the condensate wave functions @xmath45 and @xmath46 define the gross - pitaevskii ( gp ) energy functional ( in units of @xmath47 ) @xmath48 = \int_0^{2\pi} d\theta \left ( x_a \left| \frac{\partial \psi_a}{\partial \theta} \right|^2 + x_b \left| \frac{\partial \psi_b}{\partial \theta} \right|^2 \right ) + x_a^2 \pi \gamma_{aa} \int_0^{2\pi} d\theta\, |\psi_a(\theta)|^4 + x_b^2 \pi \gamma_{bb} \int_0^{2\pi} d\theta\, |\psi_b(\theta)|^4 + 2 x_a x_b \pi \gamma_{ab} \int_0^{2\pi} d\theta\, |\psi_a(\theta)|^2 |\psi_b(\theta)|^2 , [ efunc ] where the dimensionless interaction parameters are defined as @xmath49 with @xmath50 . for the most part , we are concerned with repulsive interactions , @xmath51 . with the condensate wave functions normalized according to \int_0^{2\pi} d\theta\, |\psi_s(\theta)|^2 = 1 , [ normalization ] the energy per particle defined in eq . ( [ efunc ] ) depends on the particle numbers only through the concentrations @xmath52 . our ultimate objective is the determination of the yrast spectrum which is defined by the lowest energy of the system as a function of the angular momentum . in units of @xmath53 , the total angular momentum of the system is given by \bar l [ \psi_a , \psi_b ] = \sum_s x_s \int_0^{2\pi} d\theta\, \psi_s^*(\theta) \left ( -i \frac{\partial}{\partial \theta} \right ) \psi_s(\theta) . [ ang_mom ] the minimization of eq . 
( [ efunc ] ) with the constraint @xmath54 = l gives the yrast energy \bar e_0 ( l ) = l^2 + e_0 ( l ) , where @xmath55 is an even function of @xmath28 with the periodicity property @xmath56 , @xmath57 being an arbitrary integer @xcite . as a result of these properties , the yrast spectrum is completely determined by the behaviour of @xmath55 in the interval @xmath58 . if a point @xmath59 in this interval is a point of nonanalyticity of @xmath55 , then the points @xmath60 for any integer @xmath57 are points of nonanalyticity of @xmath61 . as we shall show , these points occur at plane - wave yrast states . we are therefore led to an investigation of the behaviour of the gp energy functional in the vicinity of some arbitrary plane - wave state @xmath62 . as discussed in the introduction , the conditions for which such a state is an yrast state are known in the case of the symmetrical model . there it is found that the yrast spectrum exhibits a derivative discontinuity at the angular momentum corresponding to this state , namely at l = \mu x_a + \nu x_b = \mu + k x_b , where @xmath63 . furthermore , the criterion for the yrast spectrum exhibiting a local minimum at one of these angular momenta is also known . the question we wish to address in this paper is the extent to which such states can be yrast states in the asymmetrical model . for the @xmath64 plane - wave state we have \bar e_{pw} = \mu^2 x_a + \nu^2 x_b + \frac{1}{2} \left ( x_a^2 \gamma_{aa} + 2 x_a x_b \gamma_{ab} + x_b^2 \gamma_{bb} \right ) . [ e_pw ] all such plane - wave states have the same interaction energy @xmath65 but a kinetic energy which depends on the parameters @xmath31 and @xmath66 or , alternatively , @xmath28 and @xmath18 . in searching for the minimum energy plane - wave state of a given angular momentum @xmath28 , it is useful to note that @xmath16 is in general a rational number which we denote by x_b = \frac{p}{q} , where p and q are positive integers having no common divisor . 
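the bookkeeping above is simple enough to check numerically . the sketch below assumes the plane - wave energy of eq . ( [ e_pw ] ) as reconstructed here ( a kinetic term \mu^2 x_a + \nu^2 x_b plus a state - independent interaction constant ) ; all function names are illustrative :

```python
from fractions import Fraction

def ang_mom(mu, k, x_b):
    """angular momentum per particle of the plane-wave state (mu, nu = mu + k):
    l = mu*x_a + nu*x_b = mu + k*x_b."""
    return mu + k * x_b

def pw_energy(mu, k, x_b, g_aa, g_bb, g_ab):
    """plane-wave energy: kinetic part mu^2*x_a + nu^2*x_b plus an
    interaction constant that is the same for every plane-wave state."""
    x_a, nu = 1 - x_b, mu + k
    kinetic = mu**2 * x_a + nu**2 * x_b
    interaction = Fraction(1, 2) * (x_a**2 * g_aa + 2 * x_a * x_b * g_ab
                                    + x_b**2 * g_bb)
    return kinetic + interaction

x_b = Fraction(2, 5)                 # rational concentration, p = 2, q = 5
base = ang_mom(1, 3, x_b)
for m in range(-4, 5):               # the family (mu + m*p, k - m*q)
    assert ang_mom(1 + 2 * m, 3 - 5 * m, x_b) == base

# only the kinetic energy discriminates between members of the family
e0 = pw_energy(1, 3, x_b, 1, 1, 1)
e1 = pw_energy(1 + 2, 3 - 5, x_b, 1, 1, 1)   # the m = 1 member
assert e1 < e0
```

the assertion that every member of the family carries the same angular momentum is exactly the degeneracy at rational @xmath19 exploited in what follows .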
one can easily check that the plane - wave states ( @xmath69 ) defined by the parameters \mu' = \mu + m p , \quad k' = k - m q , where @xmath70 , all have the same angular momentum @xmath28 . in view of eq . ( [ e_pw ] ) , the lowest energy state from this infinite set is obtained for the smallest value of @xmath71 . this value of @xmath72 will be found in the range -\lfloor q/2 \rfloor \le k \le \lfloor q/2 \rfloor , [ k - range ] where @xmath73 is the largest integer less than or equal to @xmath74 , that is , the floor of @xmath74 . it is worth noting that the allowed angular momentum values of the plane - wave states take the form @xmath75 , where @xmath76 is an arbitrary integer . of course , when the restriction @xmath77 is imposed , one need only consider @xmath78 . let us now consider the plane - wave state ( @xmath79 ) with @xmath18 restricted to the range given by eq . ( [ k - range ] ) . if q is odd , the set of @xmath18 values in this range corresponds to a complete residue system modulo q . thus , when each possible value of @xmath18 in this range is paired with each possible value of @xmath31 , all possible values of the angular momentum @xmath28 are generated without duplication . as a result , the value of @xmath18 which minimizes the energy for a given @xmath28 is unique . the situation for even q is slightly different since @xmath80 and @xmath73 are congruent and there are two plane - wave states , namely ( @xmath81 , @xmath82 ) and ( @xmath83 , @xmath84 ) , which have the same angular momentum and energy . thus , one cannot decide which of these two states is a potential yrast state at this particular angular momentum . however , we do have a prescription for selecting , from all possible plane - wave states having the same @xmath28 , the specific state(s ) that are potential yrast states . we note that if @xmath19 is treated as a continuous variable , it will of course take on irrational values . 
in this case , no two plane - wave states will have the same angular momentum and the complexities associated with rational @xmath19 are avoided . however , whenever @xmath19 takes on a rational value , the above considerations will once again apply . although our analysis could be restricted to the plane - wave states with @xmath18 in the range specified by eq . ( [ k - range ] ) , it is more convenient to consider in the following the gp energy functional in the vicinity of an arbitrary @xmath85 plane - wave state . to be specific , our goal is to establish the conditions for which the energy functional will exhibit a local minimum at this state . to this end , we consider states @xmath86 which deviate slightly from @xmath62 , _ viz . _ @xmath87 where the deviations are expressed in the form @xmath88 . the normalization of these states implies @xmath89 and @xmath90 . alternatively , these normalization conditions can be expressed as @xmath91 . substituting eq . ( [ 4.2_wf_perturbed ] ) into eq . ( [ efunc ] ) and eliminating @xmath92 and @xmath93 by means of eqs . ( [ a_norm ] ) and ( [ b_norm ] ) , we find that the change in energy to second order in the deviations is given by @xmath94 = \bar e [ \psi_a , \psi_b ] - \bar e [ \phi_\mu , \phi_\nu ] \simeq \sum_{m>0} \bv^\dag_m \calh_m \bv_m , [ dff ] where @xmath95 and \calh_m = \begin{pmatrix} x_a ( x_a \gamma_{aa} + m^2 - 2 m \mu ) & x_a^2 \gamma_{aa} & x_a x_b \gamma_{ab} & x_a x_b \gamma_{ab} \\ x_a^2 \gamma_{aa} & x_a ( x_a \gamma_{aa} + m^2 + 2 m \mu ) & x_a x_b \gamma_{ab} & x_a x_b \gamma_{ab} \\ x_a x_b \gamma_{ab} & x_a x_b \gamma_{ab} & x_b ( x_b \gamma_{bb} + m^2 - 2 m \nu ) & x_b^2 \gamma_{bb} \\ x_a x_b \gamma_{ab} & x_a x_b \gamma_{ab} & x_b^2 \gamma_{bb} & x_b ( x_b \gamma_{bb} + m^2 + 2 m \nu ) \end{pmatrix} . [ h_matrix ] we thus see that the change in energy is a quadratic form . 
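whether this quadratic form is positive definite can be tested directly . the sketch below builds the matrix with the entries as reconstructed in eq . ( [ h_matrix ] ) above ( an assumed form ) and compares a leading - principal - minor test with a brute - force eigenvalue check :

```python
import numpy as np

def h_matrix(m, mu, nu, x_b, g_aa, g_bb, g_ab):
    """quadratic-form matrix for perturbations at winding numbers
    mu -+ m and nu -+ m about the plane-wave state (mu, nu)."""
    x_a = 1.0 - x_b
    c = x_a * x_b * g_ab
    return np.array([
        [x_a*(x_a*g_aa + m*m - 2*m*mu), x_a**2*g_aa, c, c],
        [x_a**2*g_aa, x_a*(x_a*g_aa + m*m + 2*m*mu), c, c],
        [c, c, x_b*(x_b*g_bb + m*m - 2*m*nu), x_b**2*g_bb],
        [c, c, x_b**2*g_bb, x_b*(x_b*g_bb + m*m + 2*m*nu)],
    ])

def sylvester_positive_definite(h):
    """all leading principal minors positive <=> positive definite."""
    return all(np.linalg.det(h[:j, :j]) > 0 for j in range(1, 5))

# the two tests agree for every mode index m
params = dict(mu=0, nu=1, x_b=0.2, g_aa=8.0, g_bb=8.0, g_ab=1.0)
for m in (1, 2, 3):
    h = h_matrix(m, **params)
    assert sylvester_positive_definite(h) == bool(np.all(np.linalg.eigvalsh(h) > 0))
```

the eigenvalue route is the unambiguous definition of positive - definiteness ; the minor test is the cheap criterion quoted in the text .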
if the matrices @xmath96 are all positive definite , the energy @xmath97 is a local minimum in the function space defined by @xmath45 and @xmath46 . according to sylvester's criterion @xcite , positive - definiteness is assured if all the leading principal minors of @xmath96 are positive , namely @xmath98 . it is straightforward to show that eq . ( [ ineq2 ] ) implies eq . ( [ ineq1 ] ) ; likewise , eq . ( [ ineq4 ] ) together with eq . ( [ ineq2 ] ) implies eq . ( [ ineq3 ] ) . thus , eqs . ( [ ineq2 ] ) and ( [ ineq4 ] ) are the fundamental inequalities determining the positive - definiteness of @xmath99 . furthermore , these inequalities are satisfied for all @xmath100 if they are satisfied for @xmath101 . we thus see that the inequalities @xmath102 are the necessary and sufficient conditions for @xmath103 being a local minimum in the function space . it is important to note that , although the state @xmath103 has the angular momentum @xmath104 , the variations in eq . ( [ 4.2_wf_perturbed ] ) allow for deviations of the angular momentum from this value . in other words , the local minimum that we are finding is not constrained by the angular momentum @xmath28 ; the local minimum exists for arbitrary variations of the condensate wave functions about the plane - wave state of interest . on the other hand , it is possible to consider variations which are further constrained ( apart from normalization ) by the angular momentum @xmath104 ; such states define a hypersurface in function space . the state @xmath103 lies on this surface and , if the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied , its energy is lower than that of any other state in its vicinity on the hypersurface . if this state were in fact a _ global _ minimum on the hypersurface , it would be , by definition , an yrast state . since the specific plane - wave state @xmath62 , where @xmath105 with @xmath18 restricted to the range in eq . 
( [ k - range ] ) , has the lowest energy of all the plane - wave states having the same angular momentum , it is clearly a candidate for being the global minimum . in the appendix , we show that conditions exist for which such a state is assured to be a global minimum on the @xmath106 hypersurface . we now make the stronger assumption that this specific plane - wave state is a global minimum when the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied , and is hence an yrast state . as we shall show , this assumption is consistent with the results of the symmetric model and , for reasons of continuity , would be expected to continue holding as the interaction parameters gradually become asymmetrical . furthermore , the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) also ensure that the energy increases as one moves away from @xmath62 in directions of either increasing or decreasing angular momenta . according to the bloch criterion , this would imply that persistent currents are stable at the @xmath106 angular momentum point of the yrast spectrum . we now consider some special cases in order to make contact with earlier work . unless stated otherwise , the @xmath31 and @xmath66 indices will henceforth refer to plane - wave states for which the difference @xmath107 is restricted to the range in eq . ( [ k - range ] ) . we consider first the case @xmath31 = @xmath66 , which corresponds to integral angular momenta , @xmath109 . eq . ( [ stab2 ] ) then reduces to \left ( x_a \gamma_{aa} - \frac{4n^2 - 1}{2} \right ) \left ( x_b \gamma_{bb} - \frac{4n^2 - 1}{2} \right ) > x_a x_b \gamma_{ab}^2 . [ integral - l1 ] this together with eq . ( [ stab1 ] ) implies x_a \gamma_{aa} + x_b \gamma_{bb} > 4n^2 - 1 . [ integral - l2 ] these are the two inequalities given in ref . @xcite that establish the stability of persistent currents at integral values of @xmath28 . for @xmath110 , eq . 
( [ integral - l1 ] ) reduces to \left ( x_a \gamma_{aa} + \frac{1}{2} \right ) \left ( x_b \gamma_{bb} + \frac{1}{2} \right ) > x_a x_b \gamma_{ab}^2 . [ energetic_stability ] this is the condition for energetic or dynamic stability and ensures that the uniform state @xmath111 is stable against phase separation . this state gives the absolute minimum of the gp energy functional , and by virtue of the periodicity of @xmath55 , the states @xmath112 with integral angular momentum @xmath109 are _ all _ yrast states . it is thus clear from a consideration of this special case that the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) do in fact define a global minimum on the @xmath113 hypersurface . if we now consider the special case @xmath114 , the inequality in eq . ( [ integral - l1 ] ) for @xmath115 reduces to x_a \gamma_{aa} + x_b \gamma_{bb} < \frac{4n^2 - 1}{2} . this inequality is incompatible with eq . ( [ integral - l2 ] ) which implies that the @xmath112 state _ cannot _ be a local minimum for these interaction parameters . however , if eq . ( [ energetic_stability ] ) is satisfied , this state is still an yrast state . thus , a local minimum in function space is not a necessary condition for the plane - wave state being an yrast state . in the following section we will show that the absence of a local minimum in function space also implies the lack of a local minimum in the yrast spectrum and the absence of persistent currents . as explained in ref . @xcite , the physical reason for the absence of persistent currents when @xmath114 is that the bogoliubov excitations exhibit a particle - like dispersion which destabilizes superfluid flow . for the case @xmath114 with @xmath31 \ne @xmath66 , the inequality in eq . ( [ stab2 ] ) reduces to ( 1 - 4 \nu^2 ) ( 2 x_a \gamma_{aa} + 1 - 4 \mu^2 ) + 2 x_b \gamma_{bb} ( 1 - 4 \mu^2 ) > 0 . [ mu.ne.nu ] if @xmath117 , this inequality implies \nu^2 < \frac{1}{4} \left [ 1 + \frac{2 x_b \gamma_{bb}}{2 x_a \gamma_{aa} + 1} \right ] . [ nu - ineq ] the values of @xmath66 satisfying this inequality depend on the values of the parameters @xmath118 , @xmath119 and @xmath120 . if @xmath121 , we have @xmath122 . thus in view of eq . ( [ stab1 ] ) , the inequality in eq . 
( [ mu.ne.nu ] ) can only be satisfied if @xmath123 . we have thus established that local minima can occur at the states @xmath124 with angular momenta @xmath125 or at @xmath126 with angular momenta @xmath127 . in this latter case , however , the range of @xmath66 is limited by the inequality in eq . ( [ nu - ineq ] ) . in the symmetric model with @xmath128 , the only possible value of @xmath66 in eq . ( [ nu - ineq ] ) is zero if we take @xmath5 to be the majority component ( @xmath129 ) . thus local minima can only occur for the @xmath124 states in this case . furthermore , eq . ( [ mu.ne.nu ] ) gives \gamma > \frac{4 \mu^2 - 1}{2 ( 1 - 4 \mu^2 x_b )} . this is precisely the condition for persistent currents to occur at @xmath125 found in ref . @xcite using the explicit soliton solutions . once again we see that the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) predict the stability of persistent currents at an _ yrast _ state . finally , we wish to point out that @xmath66 in eq . ( [ nu - ineq ] ) , if unrestricted by the range of @xmath18 , can be made arbitrarily large by making @xmath119 sufficiently small and @xmath120 sufficiently large . if @xmath130 , the angular momentum of the @xmath126 state is @xmath131 . choosing @xmath132 we have @xmath133 which is also the angular momentum of the @xmath134 state . by eq . ( [ e_pw ] ) , this state has a lower energy than the @xmath135 state which therefore cannot be an yrast state . thus a local minimum in function space does not mean that one is necessarily dealing with an yrast state . however , as stated earlier , the plane - wave state with the lowest energy is a potential yrast state . all the examples we have considered so far support the assumption that such a state is indeed an yrast state if the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied . we now make some observations regarding the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) for general values of the parameters . 
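the special - case reductions above can be cross - checked numerically . with the matrix entries assumed in eq . ( [ h_matrix ] ) , a short calculation gives the closed form det \calh_1 = x_a^2 x_b^2 [ ( 2 x_a \gamma_{aa} + 1 - 4\mu^2 ) ( 2 x_b \gamma_{bb} + 1 - 4\nu^2 ) - 4 x_a x_b \gamma_{ab}^2 ] ; the sketch verifies this identity against a direct determinant evaluation :

```python
import numpy as np

def h1(mu, nu, x_b, g_aa, g_bb, g_ab):
    # m = 1 quadratic-form matrix about the plane-wave state (mu, nu)
    x_a = 1.0 - x_b
    c = x_a * x_b * g_ab
    return np.array([
        [x_a*(x_a*g_aa + 1 - 2*mu), x_a**2*g_aa, c, c],
        [x_a**2*g_aa, x_a*(x_a*g_aa + 1 + 2*mu), c, c],
        [c, c, x_b*(x_b*g_bb + 1 - 2*nu), x_b**2*g_bb],
        [c, c, x_b**2*g_bb, x_b*(x_b*g_bb + 1 + 2*nu)],
    ])

def det_closed_form(mu, nu, x_b, g_aa, g_bb, g_ab):
    x_a = 1.0 - x_b
    return x_a**2 * x_b**2 * ((2*x_a*g_aa + 1 - 4*mu**2)*(2*x_b*g_bb + 1 - 4*nu**2)
                              - 4*x_a*x_b*g_ab**2)

rng = np.random.default_rng(1)
for _ in range(200):
    mu, nu = (int(v) for v in rng.integers(-2, 3, size=2))
    x_b = float(rng.uniform(0.05, 0.95))
    g = [float(v) for v in rng.uniform(0.1, 10.0, size=3)]
    assert np.isclose(np.linalg.det(h1(mu, nu, x_b, *g)),
                      det_closed_form(mu, nu, x_b, *g))
```

setting \mu = \nu = n and dividing by the positive factor 4 x_a^2 x_b^2 reproduces the product inequality of eq . ( [ integral - l1 ] ) .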
to simplify matters , we consider the way in which the inequalities can be violated through a variation of only a single parameter . it is easy to see that an increase of @xmath136 or a decrease of either @xmath119 or @xmath120 will eventually lead to a violation of the inequalities , with the consequence that persistent currents are destabilized . the dependence on @xmath19 is more interesting . the left hand side of eq . ( [ stab2 ] ) can be written as a quadratic function of @xmath19 , _ viz . _ , h ( x_b ) = a x_b^2 + b x_b + c , [ gxb ] with a = 4 ( \gamma_{ab}^2 - \gamma_{aa} \gamma_{bb} ) , [ ha ] b = 2 \gamma_{bb} ( 2 \gamma_{aa} + 1 - 4 \mu^2 ) - 2 \gamma_{aa} ( 1 - 4 \nu^2 ) - 4 \gamma_{ab}^2 , [ hb ] c = ( 1 - 4 \nu^2 ) ( 2 \gamma_{aa} + 1 - 4 \mu^2 ) . [ hc ] whether or not @xmath138 depends on the nature and location of the roots of @xmath139 . these can be analyzed in terms of the discriminant \delta_{x_b} = b^2 - 4 a c . the critical values of @xmath19 are determined by the roots of @xmath139 which will be analyzed for the following three cases : ( i ) @xmath140 ; ( ii ) @xmath141 ; and ( iii ) @xmath142 . case ( i ) falls under case 2 discussed above . ( ii ) @xmath141 : if @xmath143 , @xmath139 has no real root and since @xmath144 , @xmath145 for all @xmath19 . thus , the stability of the persistent currents is solely determined by eq . ( [ stab1 ] ) . this implies that persistent currents are stable for 0 < x_b < \min \left \{ 1 , 1 - \frac{4 \mu^2 - 1}{2 \gamma_{aa}} \right \} . [ xbstab1 ] this scenario is only possible for @xmath123 since @xmath146 for @xmath147 ( recall eq . ( [ stab1 ] ) ) . if @xmath148 for @xmath123 , @xmath139 has two negative roots if @xmath149 or two positive roots if @xmath150 . the former situation again means that the persistent currents are stable for @xmath19 satisfying eq . ( [ xbstab1 ] ) . in the latter situation , the range of stability of persistent currents is determined by the location of the two positive roots relative to the range specified by eq . ( [ xbstab1 ] ) . if @xmath151 for @xmath152 , @xmath139 has a negative and positive root . 
if the latter lies in the interval @xmath153 , persistent currents are stable for values of @xmath19 greater than the positive root and overlapping with the interval defined by eq . ( [ xbstab1 ] ) . ( iii ) @xmath142 : here , persistent currents cannot occur for any @xmath19 if @xmath154 . since @xmath155 , this is only possible when @xmath146 , which implies @xmath147 . if @xmath151 for @xmath152 , then again @xmath139 either has two negative roots or two positive roots . two negative roots would mean that the persistent currents are not possible for any value of @xmath19 . in the case of two positive roots , the range of stability is again determined by the location of the roots relative to the range specified by eq . ( [ xbstab1 ] ) . for @xmath123 , @xmath139 has one negative and one positive root . in this case , there is always some finite @xmath19-interval within which persistent currents are possible . [ fig . [ pers ] caption : ... curves for persistent currents to be stable at the @xmath156 state for @xmath157 and three different @xmath158 values . ] we now give a simple example illustrating the general discussion given above . to be specific , we determine the dependence on the interaction asymmetries of the critical @xmath19 value for which persistent currents are possible at the @xmath156 plane - wave state . to facilitate the discussion , we parameterise the interactions as @xmath159 , @xmath160 and @xmath161 . the results shown in fig . [ pers ] are obtained for the fixed value of @xmath157 and three @xmath158 values that are representative of cases ( i)-(iii ) . in all the cases considered here , the critical @xmath162 curve is determined by one of the roots of @xmath139 . for @xmath163 , one finds @xmath164 and the critical @xmath162 curve is determined by the only root @xmath165 , where @xmath166 and @xmath167 are specified in eqs . ( [ hb])-([hc ] ) with the appropriate parameters . it is easy to check that the critical curve has an asymptote given by @xmath168 . 
for @xmath169 , one finds @xmath170 and @xmath139 has one negative and one positive root . the critical curve is given by the positive root @xmath171 with an asymptote @xmath172 . finally for @xmath173 , we have @xmath141 and @xmath139 has two positive roots . the critical curve is given by the smaller root @xmath174 with an asymptote @xmath175 . interestingly , the dependence of @xmath19 on @xmath12 is not monotonic for @xmath169 . thus it is possible that persistent currents are stable at a fixed value of @xmath19 in only a _ finite _ interval of @xmath12 . in other words , persistent currents can be stabilized with increasing @xmath12 but are then destabilized with further increases in @xmath12 . in the previous section we argued that the plane - wave state @xmath85 with the lowest kinetic energy of all plane - wave states having the angular momentum @xmath176 is an yrast state when this state becomes a local minimum of the gp energy functional . furthermore , persistent currents are stable at the angular momentum corresponding to this plane - wave state . we hypothesized that the validity of these statements follows from the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) . in other words , these inequalities are sufficient conditions for @xmath177 to be an yrast state , but as already pointed out , they are not necessary conditions . in this section we investigate the extent to which a necessary condition can be found . if this condition is known , it follows from the periodicity and inversion symmetry of the yrast spectrum that all the states of the form @xmath178 , where @xmath179 and @xmath57 is an arbitrary integer , are yrast states as well . this observation indicates that the necessary condition depends on @xmath31 and @xmath66 only through their absolute difference @xmath180 . in searching for a necessary condition we require more information about the behaviour of the yrast spectrum in the vicinity of the plane - wave state . 
we first show that the local energy minimum at @xmath177 entails a slope discontinuity of the yrast spectrum at @xmath181 . thus the condition for the stability of persistent currents can be expressed in terms of the slopes of the yrast spectrum at the plane - wave state of interest . for the symmetrical model @xcite , one finds that @xmath177 ceases to be an yrast state when the derivative discontinuity disappears . we cannot state definitively that this is also true for the asymmetrical model ; however , the condition for which the slope discontinuity disappears can still be determined . we argue that this condition places a bound on the existence of the plane - wave yrast state . to establish the existence of a slope discontinuity , we investigate the behaviour of the yrast spectrum in the neighbourhood of the yrast state @xmath177 with angular momentum @xmath59 . for small deviations @xmath182 of the angular momentum , we expect the yrast state at @xmath28 to deviate only slightly from the plane - wave state @xmath177 at @xmath59 . the yrast state can then be well approximated by @xmath183 where the coefficients of the deviations are small in absolute magnitude in comparison to unity and satisfy the normalization conditions @xmath184 . this deviation can of course be either positive or negative . we observe that the square of the modulus of the coefficients appearing in eq . ( [ dl ] ) is of order @xmath185 . the determination of the coefficients is achieved by minimizing the energy functional in eq . ( [ efunc ] ) with the normalization constraint in eq . ( [ normalization ] ) and the angular momentum constraint @xmath186 = l = l_0 + \delta l . to impose this latter constraint , we introduce a lagrange multiplier @xmath187 and minimize the energy functional \bar f [ \psi_a , \psi_b ] = \bar e [ \psi_a , \psi_b ] - \omega \bar l [ \psi_a , \psi_b ] . [ f_fun ] 
the lagrange multiplier @xmath188 obtained in this process is in fact the slope of the yrast spectrum , namely @xmath189 . thus , information about the slope of the yrast spectrum is provided by this quantity . substituting eqs . ( [ trwfa ] ) and ( [ trwfb ] ) into eq . ( [ f_fun ] ) and eliminating @xmath92 and @xmath93 by means of eqs . ( [ norma ] ) and ( [ normb ] ) , we obtain \bar f [ \psi_a , \psi_b ] \simeq \bar f [ \phi_\mu , \phi_\nu ] + \bv_1^\dag \calh ( \omega ) \bv_1 [ df ] to second order in the expansion coefficients . here @xmath191 and \calh ( \omega ) = \calh_1 - \omega \lambda , where @xmath192 is defined by eq . ( [ h_matrix ] ) with @xmath101 and @xmath193 is the diagonal matrix \lambda = \begin{pmatrix} -x_a & 0 & 0 & 0 \\ 0 & x_a & 0 & 0 \\ 0 & 0 & -x_b & 0 \\ 0 & 0 & 0 & x_b \end{pmatrix} . in terms of this matrix , the angular momentum deviation is given by @xmath194 . the extremization of the functional in eq . ( [ df ] ) leads to the following set of linear equations \calh ( \omega ) \bv_1 = 0 . [ le ] for these equations to have a non - trivial solution , one must have @xmath195 . this condition leads to the equation f ( \omega ) - 4 x_a x_b \gamma_{ab}^2 = 0 , [ cons1 ] which is a quartic equation in @xmath187 . we will demonstrate that two of its solutions in fact correspond to the slopes of the yrast spectrum at @xmath196 . [ fig . [ yrast ] caption : ( a ) the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied and the yrast spectrum shows a cusp - like local minimum ; ( b ) the inequalities are not satisfied but a slope discontinuity persists ; ( c ) the plane - wave state is no longer an yrast state and the yrast spectrum is smooth through the angular momentum @xmath59 . ] to analyze the zeroes of @xmath197 , we begin by assuming that the inequality in eq . ( [ stab2 ] ) is satisfied which implies @xmath198 . we now define the quartic @xmath199 , which is simply @xmath197 shifted vertically by the constant @xmath200 . we also have @xmath201 . the solutions of @xmath202 are 2\mu \pm \sqrt{2 x_a \gamma_{aa} + 1} and 2\nu \pm \sqrt{2 x_b \gamma_{bb} + 1} . if the inequalities in eqs . 
( [ stab1 ] ) and ( [ stab2 ] ) are satisfied , two of these roots are negative and two are positive . furthermore , @xmath203 , which implies that @xmath204 has a maximum at a point @xmath205 between the largest negative root and the smallest positive root . by shifting @xmath204 down by @xmath200 we recover @xmath197 and since @xmath198 , we conclude that @xmath206 must have four distinct , real roots , two of which are negative and two positive . these roots will be denoted by @xmath207 with the ordering \omega_1 < \omega_2 < 0 < \omega_3 < \omega_4 . recalling eq . ( [ omega ] ) , these values are to be identified with the slope of the yrast spectrum at @xmath208 . the fact that the energy at @xmath59 is a local minimum when the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied now implies that the left and right slopes of the yrast spectrum are necessarily @xmath209 and @xmath210 , respectively . this establishes the fact that the slope of @xmath61 at @xmath211 is discontinuous if @xmath212 has a local minimum at @xmath85 . the qualitative behaviour of the yrast spectrum in the vicinity of @xmath59 is shown in fig . [ yrast](a ) . once the roots of eq . ( [ cons1 ] ) have been determined , eq . ( [ le ] ) can be solved to yield the amplitude ratios @xmath213 and \frac{\delta d^*_{\nu+1}}{\delta c_{\mu-1}} = -\frac{ ( 2\nu - 1 - \omega ) \left [ ( \omega - 2\mu )^2 - 2 x_a \gamma_{aa} - 1 \right ] }{ 2 x_b \gamma_{ab} ( 2\mu + 1 - \omega ) } . [ coeff ] substituting the coefficients in eq . ( [ coeff ] ) into eq . ( [ dl ] ) we find the explicit expression for @xmath185 quoted in eq . ( [ cons2 ] ) . although it is difficult to see analytically , one can check numerically that @xmath185 is indeed less than zero for @xmath215 and greater than zero for @xmath210 . this confirms that @xmath209 and @xmath210 correspond , respectively , to the portions of the yrast spectrum for @xmath216 and @xmath217 . 
[ fig . [ fomega ] caption : the quartic @xmath197 with variations of @xmath136 . here @xmath218 , @xmath219 , @xmath220 , @xmath221 and @xmath120 = 120 . the critical @xmath136 for which the second root of @xmath197 becomes zero can be obtained from eq . ( [ cons1 ] ) and is found to be @xmath222 . the critical @xmath136 for which the double root of @xmath197 emerges can be obtained from eq . ( [ yrcrit ] ) and is found to be @xmath223 . ] we now suppose that a gradual variation of a system parameter leads to a violation of the inequality in eq . ( [ stab2 ] ) . at this point , the state @xmath85 ceases to be a local minimum of the energy functional in eq . ( [ efunc ] ) . this evolution is easiest to visualize by considering variations in @xmath136 ( see fig . [ fomega ] ) . as @xmath136 is increased , @xmath197 shifts down , with the effect that @xmath209 increases and @xmath210 decreases . since @xmath224 , @xmath209 will first go to zero at some critical value of @xmath136 , at which point @xmath225 ceases to have a local minimum . this signals the fact that persistent currents are no longer stable at @xmath59 . this is consistent with the criterion established earlier in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) , since by setting @xmath226 in eq . ( [ cons1 ] ) one recovers precisely the equality corresponding to eq . ( [ stab2 ] ) . however , since @xmath227 when @xmath228 , the derivative discontinuity in @xmath61 persists , as shown schematically in fig . [ yrast](b ) . as @xmath136 is increased further , the difference between the roots @xmath209 and @xmath210 gradually decreases and they eventually merge into a double root . at this point the discontinuity at @xmath59 vanishes , as indicated schematically in fig . [ yrast](c ) . in going from fig . [ yrast](b ) to fig . [ yrast](c ) we can envisage two possible scenarios . in the first , the plane - wave state is always an yrast state and the merging of the two roots establishes the critical condition for @xmath85 to be an yrast state . 
in other words , the plane - wave state at @xmath59 ceases to be an yrast state when the slope discontinuity vanishes . however , there is the possibility that a soliton state with an energy lower than that of the plane - wave state at @xmath59 may appear before the merging of the double root . if this happens , the emergence of the soliton state defines the critical condition for the plane - wave state to be an yrast state . in this case , the merging of the double root at best provides a bound on this critical condition . whether or not this latter scenario actually occurs would have to be checked by explicit numerical solutions of the coupled gross - pitaevskii equations for the condensate wave functions . in the following , we will assume that the first scenario discussed above is valid and therefore we will focus on the critical condition for which the quartic @xmath197 has a double root . this occurs when the discriminant @xmath229 of the quartic is zero , that is \Delta ( x_s , \gamma_{ss} , \mu , \nu ) \equiv \det ( s ) = 0 . [ yrcrit ] here the discriminant is defined by the determinant of the so - called sylvester matrix @xcite s = \begin{pmatrix} a_4 & a_3 & a_2 & a_1 & a_0 & 0 & 0 \\ 0 & a_4 & a_3 & a_2 & a_1 & a_0 & 0 \\ 0 & 0 & a_4 & a_3 & a_2 & a_1 & a_0 \\ 4a_4 & 3a_3 & 2a_2 & a_1 & 0 & 0 & 0 \\ 0 & 4a_4 & 3a_3 & 2a_2 & a_1 & 0 & 0 \\ 0 & 0 & 4a_4 & 3a_3 & 2a_2 & a_1 & 0 \\ 0 & 0 & 0 & 4a_4 & 3a_3 & 2a_2 & a_1 \end{pmatrix} , where @xmath230 are the coefficients of the quartic @xmath197 in eq . ( [ cons1 ] ) , namely @xmath231 alternatively , we can make use of the properties of @xmath197 to determine the critical condition for which the @xmath209 and @xmath210 roots merge and take the common value @xmath232 . we observe that @xmath233 is a cubic and that @xmath234 has three roots , as can be seen from fig . [ fomega ] . the frequency @xmath232 is the root for which @xmath235 . the condition @xmath236 gives the equation ( _ 0 - 2)+(_0 - 2)= 0 . [ gprime ] furthermore , we see that @xmath237 when = 4x_ax_b\gamma_{ab}^2 . [ fomega_0 ] when eqs .
( [ gprime ] ) and ( [ fomega_0 ] ) are used in eq . ( [ cons2 ] ) , we find that @xmath238 becomes zero when @xmath239 . thus the merging of the @xmath209 and @xmath210 roots can be determined by requiring that @xmath240 and @xmath241 be satisfied simultaneously . these two equations , a quartic and a quintic respectively , have to be solved numerically , and the critical condition found is identical to that obtained from eq . ( [ yrcrit ] ) . finally , we note that if the parameters @xmath242 and @xmath9 satisfy eq . ( [ yrcrit ] ) they also satisfy the equations \Delta ( x_s , \gamma_{ss} , -\mu , -\nu ) = 0 , [ inverivr ] and \Delta ( x_s , \gamma_{ss} , \mu+n , \nu+n ) = 0 , [ tranivr ] where @xmath57 is an integer . this is due to the fact that the existence of a double root of @xmath197 is not affected by inversion of the function @xmath197 with respect to the @xmath243 axis or translation along the @xmath244 axis . as a result , the critical condition can simply be written as \Delta ( x_s , \gamma_{ss} , 0 , |k| ) = 0 , [ yrcrit1 ] where @xmath63 . this agrees with the general observation we made at the beginning of this section that the condition for @xmath85 to be an yrast state should only depend on the absolute difference of @xmath31 and @xmath66 . furthermore , from fig . [ fomega ] we see that the necessary condition for a slope discontinuity to occur in the yrast spectrum is that eq . ( [ cons1 ] ) has four real roots . the latter is ensured if the discriminant is positive , namely , \Delta ( x_s , \gamma_{ss} , 0 , |k| ) > 0 . [ yrnc ] within the first scenario discussed earlier , the inequality in eq . ( [ yrnc ] ) constitutes the condition for @xmath177 being an yrast state . in general , the discriminant @xmath245 is a rather complex function of @xmath19 , @xmath9 and @xmath18 ( we restrict ourselves to non - negative @xmath18 from now on ) . an exception occurs for @xmath246 , where one finds that the inequality in eq . ( [ yrnc ] ) simplifies to ( 2x_a\gamma_{aa}+1 ) ( 2x_b\gamma_{bb}+1 ) - 4x_ax_b\gamma_{ab}^2 > 0 .
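The determinant of the Sylvester matrix above is straightforward to evaluate numerically for given coefficients. The sketch below takes generic quartic coefficients (not the specific parameter-dependent expressions of eq. ( [ cons1 ] )) and builds the 7x7 matrix of the quartic and its derivative; the determinant vanishes precisely when the quartic has a double root.

```python
import numpy as np

def sylvester_det(a4, a3, a2, a1, a0):
    """Determinant of the 7x7 Sylvester matrix of a quartic and its
    derivative; it vanishes exactly when the quartic has a multiple root
    (for a4 != 0 it equals the discriminant up to a nonzero factor)."""
    p = [a4, a3, a2, a1, a0]            # quartic coefficients
    dp = [4 * a4, 3 * a3, 2 * a2, a1]   # derivative coefficients
    S = np.zeros((7, 7))
    for i in range(3):                  # three shifted copies of the quartic
        S[i, i:i + 5] = p
    for i in range(4):                  # four shifted copies of its derivative
        S[3 + i, i:i + 4] = dp
    return np.linalg.det(S)
```

Scanning a system parameter and locating the zero crossing of `sylvester_det` is one practical way to find the critical condition of eq. ( [ yrcrit ] ).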
[ crn0 ] this in fact coincides with the condition for stability of the ground state against phase separation ( see eq . ( [ energetic_stability ] ) ) . as shown in ref . @xcite , this stability condition follows from the requirement that the bogoliubov excitations all have positive energies . the inequality in eq . ( [ crn0 ] ) thus ensures that the uniform state ( @xmath247 ) is the ground state of the system and , by virtue of the periodicity of @xmath55 , the yrast states at all integral angular momenta are plane - wave states . the case of @xmath248 , namely the condition for plane - wave yrast states at non - integer angular momentum , is of course more complex . however , in the @xmath249 limit , one finds that the condition in eq . ( [ yrnc ] ) reduces to \gamma_{aa} > 2k(k-1 ) . [ crb0 ] this suggests that @xmath250 is an yrast state if the interaction strength of the majority component satisfies eq . ( [ crb0 ] ) and the minority concentration is sufficiently small , regardless of the strength of @xmath120 and @xmath136 . similarly , in the limit that @xmath251 , the condition in eq . ( [ yrnc ] ) reduces to \gamma_{bb} > 2k(k-1 ) . [ crb1 ] to explore the consequences of interaction asymmetries on the emergence of certain plane - wave yrast states in more detail , we use the parameterization introduced earlier , namely , @xmath159 , @xmath160 and @xmath161 . for each @xmath18 , the critical condition \Delta ( x_s , \gamma_{ss} , 0 , |k| ) \equiv \Delta ( x_b , \gamma , @xmath252 , @xmath158 , |k| ) = 0 [ crit_yrast ] then defines a hypersurface in the parameter space spanned by @xmath19 , @xmath12 , @xmath252 and @xmath158 . we will be primarily interested in the critical @xmath27 curves on such a hypersurface for fixed values of @xmath252 and @xmath158 . we remind the reader that these critical curves are the analogue of the solid curves in fig . [ gammaxb ] which define when certain plane - wave states become yrast states in the symmetric model . these curves are recovered in the limit that @xmath253 .
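The inequalities in eqs. ( [ crn0 ] ), ( [ crb0 ] ) and ( [ crb1 ] ) are simple to evaluate; a minimal sketch follows, with the concentrations and couplings written out as `x_a`, `x_b` and `g_aa`, `g_bb`, `g_ab` (variable names are ours, chosen to match the text).

```python
def uniform_state_stable(x_a, x_b, g_aa, g_bb, g_ab):
    """Eq. (crn0): (2 x_a g_aa + 1)(2 x_b g_bb + 1) - 4 x_a x_b g_ab^2 > 0,
    the condition for the uniform state to be stable against phase
    separation (the k = 0 case)."""
    return (2 * x_a * g_aa + 1) * (2 * x_b * g_bb + 1) - 4 * x_a * x_b * g_ab**2 > 0

def majority_limit_condition(g_majority, k):
    """Eqs. (crb0)/(crb1): in the limit of vanishing minority
    concentration the yrast condition reduces to gamma > 2 k (k - 1)."""
    return g_majority > 2 * k * (k - 1)
```

For example, with equal concentrations and a weak inter-species coupling the uniform state is stable, while a sufficiently large `g_ab` violates the condition and signals phase separation.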
to be specific , we first consider the case of @xmath254 , namely @xmath255 . in fig . [ xg1 ] we show the critical @xmath27 curves , determined from eq . ( [ crit_yrast ] ) , for @xmath256 and various values of @xmath257 . this figure shows how the limit of the symmetric model is approached as @xmath257 tends to zero ( the @xmath256 dashed curve in fig . [ gammaxb ] ) . our first general observation is that these curves all possess a mirror symmetry with respect to the horizontal line @xmath258 . this is due to the fact that for @xmath259 , the discriminant @xmath260 in eq . ( [ yrcrit ] ) is invariant under simultaneous interchanges between @xmath118 and @xmath19 and between @xmath31 and @xmath66 . as a result , we have \Delta ( x_b , \gamma , @xmath257 , @xmath257 , |k| ) = \Delta ( 1-x_b , \gamma , @xmath257 , @xmath257 , |k| ) , which explains the mirror symmetry . we note that in generating fig . [ xg1 ] , we are no longer restricting @xmath19 to be the minority concentration but are allowing it to vary continuously between 0 and 1 . presenting the results in this way more clearly displays the continuous variation of the @xmath27 curves . the range @xmath261 of course corresponds to species @xmath5 being the minority concentration and provides no new information when @xmath262 . however , when @xmath263 , this form of the plots provides the relevant information more efficiently . [ fig . [ xg1 ] caption : curves for @xmath264 . here @xmath265 assume different values . the symmetric model is recovered in the @xmath266 limit . ] we next observe that the limiting @xmath267 curve has a horizontal asymptote for @xmath268 @xcite , which is approached from one side when @xmath269 and from the other when @xmath270 . thus the qualitative behaviour of the @xmath27 curves is quite different in these two cases . although all the curves have an endpoint at @xmath271 ( see eqs .
( [ crb0 ] ) and ( [ crb1 ] ) for @xmath272 ) only the curves for @xmath270 ( left panel in fig . [ xg1 ] ) cross the line @xmath273 perpendicularly at some critical @xmath12 value . it can be shown from eqs . ( [ gprime ] ) and ( [ fomega_0 ] ) that this critical @xmath12 value , \gamma_{\rm cr} , is given by a simple closed - form expression . if a point in the @xmath12-@xmath19 plane lies to the right of the @xmath27 curve , then @xmath250 is an yrast state for the given values of the system parameters . for the example being considered in fig . [ xg1 ] , the point ( 40 , 0.3 ) lies to the right of the @xmath274 curve for @xmath275 but to the left of the curve for @xmath276 . this implies that the @xmath277 plane - wave state ceases to be an yrast state at @xmath278 as @xmath257 is decreased continuously from 0.1 to 0.05 . this behaviour is consistent with the discussion given in the appendix . the value of @xmath279 given in eq . ( [ deltae_int_2 ] ) decreases with decreasing @xmath257 so that the conditions required for the @xmath277 plane - wave state to be an yrast state are eventually violated . the main conclusion we reach for this kind of asymmetry is that larger positive values of @xmath257 favour a plane - wave state being an yrast state . for @xmath280 on the other hand ( right panel in fig . [ xg1 ] ) , the curves are bounded by the @xmath266 curves and the @xmath175 or @xmath172 lines and tend to these lines in the large @xmath12 limit . we see that the region in the @xmath12-@xmath19 plane where the plane - wave state is an yrast state diminishes in size as @xmath257 is made more negative . thus negative @xmath257 disfavours a plane - wave state being an yrast state . it is clear that the conditions for a plane - wave state being an yrast state are very sensitive to the sign of @xmath257 and that the case of symmetric interactions ( @xmath266 ) is a very special one . the inset to the right panel of fig .
[ xg1 ] shows more clearly how the @xmath27 curves approach the @xmath281 line as @xmath268 . we have here a situation in which at some @xmath19 value , a plane - wave state can become an yrast state with increasing @xmath12 but then ceases to be an yrast state with further increases in @xmath12 . [ fig . [ xg2 ] caption : curves for @xmath282 , with @xmath283 ( left panel ) and @xmath284 ( right panel ) . as explained in the text , the plane - wave states @xmath285 are yrast states for any @xmath19 when @xmath270 , which explains the absence of the @xmath40 critical curve in the left panel . ] in fig . [ xg2 ] we show the @xmath27 curves for different @xmath18 , again for the case @xmath259 . the left panel is for @xmath286 and the right for @xmath284 . for the symmetric model it is known that the @xmath287 state is an yrast state for all @xmath19 and any positive value of @xmath12 . as explained in the appendix , this state is necessarily also an yrast state when @xmath270 , and for this reason , there is no critical curve for @xmath40 in this case . for both signs of @xmath257 we see that the conditions for a plane - wave state being an yrast state become more stringent with increasing @xmath18 . the curves are qualitatively similar to those in fig . [ xg1 ] except for the @xmath40 curve with @xmath269 . in this case , the endpoint of the @xmath288 curve is a point on the @xmath19-axis . as @xmath289 , the @xmath288 curves approach @xmath290 and the @xmath40 state is an yrast state for all @xmath19 and @xmath12 . this implies that the region in the @xmath19-@xmath12 plane _ between _ the @xmath40 critical curves is the region where the @xmath40 state is _ not _ an yrast state .
[ fig . [ xg3 ] caption : curves for @xmath282 , where @xmath291 and @xmath292 in the left panel , and @xmath291 and @xmath293 in the right panel . the @xmath288 curve is once again absent in the left panel since the @xmath40 plane - wave state is an yrast state for all @xmath19 when @xmath294 and @xmath295 . ] finally we show in fig . [ xg3 ] some examples of critical @xmath27 curves for the system with the most general type of interaction asymmetry @xmath296 . we find that these curves are qualitatively similar to those for @xmath297 , with one obvious difference , namely the absence of mirror symmetry with respect to the @xmath290 line . for @xmath298 , @xmath6 is the minority species and @xmath158 is the minority asymmetry parameter . on the other hand , for @xmath299 , @xmath5 is the minority species and @xmath252 is the minority asymmetry parameter . these figures can thus be viewed as providing the critical curves for two different sets of asymmetry parameters for the minority and majority species . the curves for @xmath40 in the right panel of fig . [ xg3 ] show an interesting asymmetry . the @xmath40 critical curve in the range @xmath298 has an endpoint at @xmath290 at @xmath300 . this is true whenever the minority asymmetry parameter @xmath158 is less than zero . as @xmath301 , this curve moves continuously to the @xmath290 line . however , when the minority species is @xmath5 with @xmath302 , the critical curve has an endpoint on the @xmath19 axis that depends on the value of @xmath252 . as @xmath303 , this point moves to @xmath304 and the whole critical curve approaches the @xmath290 line .
in this paper we have studied the structure of the mean - field yrast spectrum of a two - component gas in the ring geometry with arbitrary inter - particle interaction strengths . in the case of the symmetric model , the nature of the spectrum can be elucidated by means of analytic soliton solutions of the coupled gp equations @xcite . such solutions , however , are not known for the asymmetric model in which the interaction strengths take on different values . nevertheless , we were able to show that some of the salient properties of the yrast spectrum can be determined via a perturbative analysis of the gp energy functional . in particular , we derived criteria , expressed in terms of inequalities , which determine whether a specific plane - wave state is a local minimum of the gp energy functional . we then assumed that the global minimum of the energy functional on the angular momentum hypersurface corresponding to this plane - wave necessarily occurs at that particular plane - wave state . furthermore , if the gp energy functional has a local minimum at this state , the yrast spectrum does as well and persistent currents are thus stable @xcite at the angular momentum of the plane - wave state . we then showed that the yrast spectrum at these angular momenta has slope discontinuities which persist even after the yrast spectrum ceases to exhibit a local minimum . finally , we showed that the plane - wave state ceases to be an yrast state when the system parameters satisfy a certain critical condition . in the future we plan a more detailed numerical investigation of the yrast spectrum based on the solution of the coupled gp equations for the condensate wave functions . such a study would provide the solitonic portions of the yrast spectrum that join the plane - wave yrast states that we have analysed in this paper . this work was supported by a grant from the natural sciences and engineering research council of canada . 
this project was implemented through the operational program " education and lifelong learning " , action " archimedes iii " , and was co - financed by the european union ( european social fund ) and greek national funds ( national strategic reference framework 2007 - 2013 ) . in this appendix , we investigate the yrast spectrum in the angular momentum interval @xmath58 . as discussed in sec . [ stability ] , the plane - wave states of interest in this angular momentum range are @xmath250 with angular momentum @xmath305 , where @xmath18 is an integer restricted to the range given by eq . ( [ k - range ] ) . we argue that such a state can indeed be an yrast state if the intra - species interaction strengths @xmath119 and @xmath120 are both _ sufficiently large _ in comparison to the inter - species interaction strength @xmath136 . in effect we are claiming that conditions exist for which the state @xmath306 is assured to be a global minimum on the @xmath307 hypersurface . we emphasize , however , that in general these are sufficient but not necessary conditions . as found previously , it is possible for these plane - wave states to be yrast states even in the symmetric model where all the interaction parameters are the same . our objective is to determine whether the plane - wave state @xmath306 can be a global minimum of the gp energy functional on the @xmath308 hypersurface . to investigate this possibility we consider the wave functions \psi_a = \phi_\mu + \delta\psi_a , \quad \psi_b = \phi_\nu + \delta\psi_b , [ tpsi ] where @xmath309 and \delta\psi_a = \sum_m c_m \phi_m , \quad \delta\psi_b = \sum_m d_m \phi_m . [ dpsi ] here , the deviations @xmath310 and @xmath311 are not necessarily small , and for certain choices , can in fact lead to another pair of plane waves . however , as established in sec . [ stability ] , the state @xmath85 has the lowest energy of all the plane - wave states with the same angular momentum .
the corresponding changes in the kinetic and interaction energies are \Delta \bar e_k = x_a \sum_m m^2 |c_m|^2 + x_b \sum_m m^2 |d_m|^2 + x_a \mu^2 ( c_\mu + c_\mu^* ) + x_b \nu^2 ( d_\nu + d_\nu^* ) [ deltae_k0 ] and \Delta \bar e_{int} = x_a^2 \gamma_{aa} \int_0^{2\pi} d\theta \, |\delta n_a(\theta)|^2 + x_b^2 \gamma_{bb} \int_0^{2\pi} d\theta \, |\delta n_b(\theta)|^2 + 2 x_a x_b \gamma_{ab} \int_0^{2\pi} d\theta \, \delta n_a ( \theta ) \delta n_b ( \theta ) , [ deltae_int ] where @xmath313 . we note that the change in interaction energy @xmath279 is zero whenever @xmath86 is a plane - wave state . we now consider the difference in kinetic energy @xmath314 in more detail . using the normalization constraints ( [ a_norm ] ) and ( [ b_norm ] ) in eq . ( [ deltae_k0 ] ) we find \Delta \bar e_k = x_a \sum_m ( m^2 - \mu^2 ) |c_m|^2 + x_b \sum_m ( m^2 - \nu^2 ) |d_m|^2 . [ deltae_k1 ] it is apparent that the change in kinetic energy @xmath315 can be made negative with appropriate wave function variations . the argument we make depends simply on the fact that @xmath315 has a lower bound @xmath316 . the kinetic energy of the plane - wave state @xmath85 is @xmath317 = x_a \mu^2 + x_b \nu^2 . since the kinetic energy functional @xmath318 is positive semi - definite , the lowest possible value it can have is 0 . thus the lower bound is given by @xmath319 . it should be noted that this lower bound is reached only for the @xmath111 state which does not lie on the angular momentum hypersurface of interest . nevertheless , this lower bound must still be valid when variations of the wave functions are constrained to have the desired angular momentum . the interaction energy in eq . ( [ deltae_int ] ) can be written as \Delta \bar e_{int} = x_a^2 ( \gamma_{aa} - \gamma_{ab} ) \int_0^{2\pi} d\theta \, |\delta n_a(\theta)|^2 + x_b^2 ( \gamma_{bb} - \gamma_{ab} ) \int_0^{2\pi} d\theta \, |\delta n_b(\theta)|^2 + \gamma_{ab} \int_0^{2\pi} d\theta \, |\delta n(\theta)|^2 , [ deltae_int_2 ] where @xmath320 is the total change in particle density . eq . ( [ deltae_int_2 ] ) reduces to the change in interaction energy in the symmetric model with @xmath321 when @xmath322 .
if the @xmath323 plane - wave state is an yrast state in the symmetric model for this value of @xmath19 and interaction parameter @xmath12 , then this state remains an yrast state in the asymmetric model with @xmath324 and @xmath325 since the interaction energy only increases while the change in kinetic energy is unaltered . if this state is not an yrast state in the symmetric model , it is still possible that it becomes an yrast state in the asymmetric model . we now turn to the demonstration of this possibility . from eq . ( [ deltae_int_2 ] ) , @xmath279 is positive - definite if both @xmath119 and @xmath120 are greater than @xmath136 . since @xmath315 can be negative , the question of interest is whether @xmath326 can be made positive - definite with a suitable choice of parameters . specifically , we wish to determine whether conditions exist such that \Delta \bar e_{int} > | \Delta \bar e_k | [ inequality ] for _ any _ wave function having the same angular momentum but with a _ lower _ kinetic energy , i.e. @xmath327 . [ fig . [ illus ] caption : @xmath279 and @xmath315 along a path on the angular momentum hypersurface between the kinetic energy minimizing plane - wave state at 1 and some other plane - wave state at 2 . @xmath279 increases in magnitude as @xmath119 and @xmath120 are increased relative to @xmath136 . it is assumed that @xmath315 becomes negative along parts of the path ; the dashed curve indicates the lower bound on @xmath315 . ] in fig . [ illus ] we illustrate the expected qualitative variation of @xmath279 and @xmath315 along some path on the angular momentum hypersurface between the plane - wave state minimizing the kinetic energy and some other plane - wave state that has a higher kinetic energy . along this path @xmath279 is positive and vanishes at the ends of the path . the dashed line indicates the lower bound on @xmath315 . since @xmath279 can be made arbitrarily large by increasing @xmath119 and @xmath120 relative to @xmath136 , it is clear that @xmath279 can be made to satisfy the inequality in eq .
( [ inequality ] ) except possibly at the start of the path at 1 where it goes to zero . however , at this point we know that , if eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied , @xmath328 has a local minimum at this point . thus , even if @xmath315 were to decrease as one moved away from 1 , the local minimum at this point would ensure that eq . ( [ inequality ] ) is satisfied . since the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) become even stronger with increasing @xmath119 and @xmath120 , it is clear that it is always possible to ensure that eq . ( [ inequality ] ) is satisfied at all points along the path from 1 to 2 where @xmath315 is negative . this observation implies that the plane - wave state at 1 can be made a global minimum on the angular momentum hypersurface for a suitable choice of the interaction parameters . in the body of the paper we make the stronger assumption that the plane - wave state is a global minimum when the inequalities in eqs . ( [ stab1 ] ) and ( [ stab2 ] ) are satisfied . all our results are consistent with this assumption .
we analyze the mean - field yrast spectrum of a two - component bose gas in the ring geometry with arbitrary interaction asymmetry . of particular interest is the possibility that the yrast spectrum develops local minima at which persistent superfluid flow can occur . by analyzing the mean - field energy functional , we show that local minima can be found at plane - wave states and arise when the system parameters satisfy certain inequalities . we then go on to show that these plane - wave states can be yrast states even when the yrast spectrum no longer exhibits a local minimum . finally , we obtain conditions which establish when the plane - wave states cease to be yrast states . specific examples illustrating the roles played by the various interaction asymmetries are presented .
balancing flows over different routes , commonly referred to as _ multipath routing _ , bears important advantages . first , it makes the network more robust , because in case of a link failure , the flow can still use bandwidth on its other routes . also , under multipath routing the resources of the network are used more efficiently since overcongested flows have more flexibility to spread their traffic over undercongested parts of the network . in this paper we want to elaborate on the latter aspect . questions we are particularly interested in are : how are the resources shared under multipath routing , what causes congestion , and how are these congestion phenomena affected by the number of routes that a flow can use ? recently , multipath routing has attracted substantial attention from the research community . we specifically mention here models that consider the flow - level , i.e. , the timescale at which the number of flows stochastically changes : flows arrive at the network , use a set of links for a while , and then leave . flows can be divided into streaming and elastic flows : _ streaming _ flows ( voice , real - time video ) are essentially characterized by their duration and the rate they transmit at ; on the other hand , _ elastic _ flows ( predominantly data ) are characterized by their size , while their transmission rate is a function of the level of congestion in the network . an implicit assumption in flow - level models is that of _ separation of time - scales _ , meaning that the time it takes for the rate to adapt is negligible compared to changes at the flow - level . a flow - level model for multipath routing , which integrates elastic and streaming traffic , is due to key and massouli @xcite ; models in the context of rate control can be found in the work by kelly and voice @xcite . 
in this paper we consider multipath routing at the flow - level in networks with routes of length one , adopting the model of @xcite for coordinated multipath routing . here _ coordinated _ multipath routing means that rates are allocated so as to maximize the utilities of the total rates that flows obtain . this is in contrast with uncoordinated multipath routing , where flows on different routes are treated as different users ; then one maximizes the utilities of the rates that a flow achieves on each route . as pointed out in @xcite , such an uncoordinated mechanism generally leads to inferior performance . there a fluid limit for the model of integrated streaming and elastic traffic was established , and uniqueness of the equilibrium point was proven . a first contribution of our work is that we consider a perhaps more realistic model in which the elastic flows have an upper bound they can transmit at . imposing such a peak rate has significant advantages when analyzing the model @xcite ; in addition it has a natural interpretation , as the constraint on the rate can be thought of as the rate of the bottleneck link ( for instance the access rate ) . for this model we establish a fluid limit , and prove existence and uniqueness of an equilibrium point ; in addition we describe a ( finite ) algorithm that finds this equilibrium point . one of the difficulties in the analysis of multipath routing , is that the rate allocation can not be given explicitly , even when the number of flows is fixed ; instead , it follows by solving an optimization problem . additionally , this optimization problem contains further variables , determining how users split their flow on different routes , for which there is in general no unique solution . the second contribution of this paper , is that we ease this analysis by finding generalized cut constraints for the case of routes of length one . these generalized cut constraints assume there are certain cuts , i.e. 
, subsets of resources , of the network that give sufficient constraints to determine the feasibility of a flow . introducing these reduces the optimization problem to one with the total rates being the only variables over which we are optimizing . in the rate allocation for a given number of flows , we identify a clustering of flows and resources . the generalized cut constraints enable us to construct an algorithm that finds the rate allocation and the clustering explicitly . in this context we will also find an expression for the minimal and maximal rates . related to the question how flows share resources in a multipath network , a crucial concept is _ complete resource pooling , _ which is a state in which the network behaves as if all resources are pooled together , with every user having access to it . it is clearly desirable to achieve this state , as then resources can be shared fairly among all flows and , due to the concavity of the utility functions we are using , in the most utility - efficient way . ( also , this implies that if a network is already in complete resource pooling then increasing the number of routes that a flow can use does not change the rate allocation since the network already operates in the best possible way . however , increasing the number of routes will make it more likely for the network to be in a complete resource pooling state . ) complete resource pooling was already studied by , e.g. , laws @xcite and turner @xcite for networks where users must use one route only but have a set of possible routes to choose from . in this paper , we give explicit conditions for the equilibrium point of the fluid limit to be of the complete - resource - pooling type . our last contribution relates to the purely symmetric circle network . each flow is allowed to use @xmath0 given routes ; we call the network a _ circle _ network as the resources can be arranged in a circle . 
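For routes of length one, the generalized cut constraints take a Hall-type form: a vector of aggregate rates is feasible if and only if, for every subset of flow types, the total rate requested by the subset does not exceed the total capacity of the resources it can reach (this equivalence follows from max-flow min-cut on the bipartite user-resource graph). A brute-force sketch, exponential in the number of types but fine for small examples (function and variable names are ours):

```python
from itertools import combinations

def feasible_by_cuts(total_rates, access, capacity):
    """total_rates[i]: aggregate rate requested by flow type i;
    access[i]: set of resources type i can use (routes of length one);
    capacity[j]: capacity of resource j.
    Feasible iff, for every subset S of types, the demand of S does not
    exceed the capacity of the resources reachable from S."""
    types = range(len(total_rates))
    for r in range(1, len(total_rates) + 1):
        for S in combinations(types, r):
            reachable = set().union(*(access[i] for i in S))
            demand = sum(total_rates[i] for i in S)
            if demand > sum(capacity[j] for j in reachable) + 1e-12:
                return False
    return True
```

On the circle network discussed later, the binding cut for the heavily loaded type is the pair of resources it can reach, which caps its aggregate rate at their combined capacity.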
this setup is reminiscent of the supermarket model by mitzenmacher @xcite , where every flow chooses one of a set of @xmath0 randomly chosen routes ; we believe our model is the more natural one , as users would naturally choose from a set of routes in their proximity . we study the effect increasing @xmath0 , the number of routes that a flow can use , has on congestion in the network . to this end we define ` congestion ' as the event that there is a flow that can transmit at a rate of at most some small value @xmath1 . we then investigate in what kind of cluster this congestion event is most likely to occur . using diffusion - based estimates , we derive expressions for the corresponding probabilities , and conclude that , as the size of the network grows large , the cluster with highest probability of congestion has size @xmath2 . the outline of the paper is as follows . in _ section 2 _ we introduce the model of multipath routing in general networks . we describe the optimization problem that gives the rate allocation for a given number of flows . for the case of routes of length one we describe the clustering in the rate allocation , and introduce notions of connected and strongly connected sets . in this context we also introduce the generalized cut constraints . _ section 3 _ further investigates the rate allocation , for a given number of flows . we develop the algorithm that identifies this allocation , which also gives explicit expressions for the maximum and minimum rate ( over all flows ) , as well as a lower bound on the rate allocated to each of the flows . in _ section 4 _ we then move to the model that incorporates a flow level , and which involves streaming and ( peak - rate constrained ) elastic traffic , as described above . in the scenario of streaming traffic only , the stationary distribution of the corresponding markov chain is given explicitly , but for the other cases this is infeasible . we therefore resort to fluid limits in _ section 5_. 
after briefly reviewing the results of @xcite , we specifically consider the fluid limit for peak - rate constrained elastic traffic , and prove the uniqueness of an equilibrium point . we also introduce the concept of complete resource pooling , and provide conditions for the equilibrium point of the fluid limit to be in complete resource pooling for both cases , with integrated streaming and elastic traffic and with peak - rate constrained elastic traffic . in _ section 6 _ we turn to the diffusions around the equilibrium point . we explicitly establish the diffusion limit for the purely symmetric circle network . we will calculate the covariance matrix of the stationary distribution that belongs to this diffusion . relying on these estimates , we present expressions for the probabilities that congestion occurs ( that is , some transmission rates are below some critical value @xmath1 ) with a cluster of size @xmath3 , so as to identify the most - likely size of the cluster in periods of congestion . in this section we first define , in our multipath setting , the allocation of rates to the flows present , given the number of flows . we will make this allocation more explicit in the next section . in practice , of course , the number of flows will change over time ; in section 4 and further we consider this setting ( where we use the results derived in section 3 for a given number of flows present ) . a second goal of this section is to introduce useful notions such as _ clusters _ and _ ( strong ) connectivity_. we consider networks with multipath routing , i.e. , we have a set of resources @xmath4 with capacities @xmath5 , users ( interchangeably referred to as flow types ) @xmath6 and routes @xmath7 . user @xmath8 can use routes in @xmath9 , where each route @xmath10 is a set of resources @xmath11 . as mentioned above , in this and the following section we still consider the number of flows to be fixed .
let @xmath12 denote the number of file transfers of type @xmath8 . then the vector @xmath13 of rates allocated to flows of type @xmath8 is the unique solution to the concave optimization problem : @xmath14 here @xmath15 is the rate with which flow of type @xmath8 is processed on route @xmath0 . we agree that @xmath16 if @xmath17 . also , we can define variables @xmath18 for @xmath19 . following @xcite , the utility functions @xmath20 are assumed to be strictly concave , strictly increasing and differentiable . a utility function that is frequently considered in this context is @xmath21 for given weights @xmath22 ; this type of utility curve is usually referred to as weighted @xmath23-fairness . in this paper we will disregard weights and consider @xmath24 , i.e. , each user has the same utility function . it can be checked that the following results hold for any strictly concave , strictly increasing , and differentiable common utility function @xmath25 . considering routes of length one only , the results below are independent of the explicit form of the function @xmath25 , and only require that it has these properties . here we leave the notion of route , and let @xmath26 be the set of resources user @xmath8 can use . now , @xmath27 denotes the rate user @xmath8 receives from resource @xmath28 . then our optimization problem reads : @xmath29 the stationarity conditions of the corresponding lagrangian are @xmath30 where @xmath31 is the lagrange multiplier corresponding to resource @xmath28 . then we can see that for any function @xmath25 as above the optimal solution @xmath32 is sufficiently characterized by @xmath33 . with ( [ allocation ] ) we can identify a unique partition of @xmath34 into a collection of _ clusters _ of the form @xmath35 , similarly to hajek 's notion @xcite . this partition is the same for any strictly concave differentiable function @xmath25 .
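to illustrate that the allocation does not depend on the particular strictly concave utility , the following python sketch ( ours , not part of the paper ; the function names and the two - class instance are purely illustrative ) maximizes the @xmath23-fair objective for two classes sharing a single resource , and recovers the equal split for several values of the fairness parameter .

```python
import math

def alpha_utility(alpha):
    # common alpha-fair utility with unit weight; alpha = 1 gives log utility
    if alpha == 1:
        return math.log
    return lambda x: x ** (1 - alpha) / (1 - alpha)

def two_class_allocation(n1, n2, C, U, iters=300):
    """maximise n1*U(x1) + n2*U(x2) subject to n1*x1 + n2*x2 = C,
    via ternary search over x1 (the objective is strictly concave)."""
    def obj(x1):
        x2 = (C - n1 * x1) / n2
        return n1 * U(x1) + n2 * U(x2)
    lo, hi = 1e-9, C / n1 - 1e-9
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if obj(m1) < obj(m2):
            lo = m1
        else:
            hi = m2
    x1 = (lo + hi) / 2
    return x1, (C - n1 * x1) / n2

# for any such U the first-order conditions equalise the per-flow rates,
# so the optimum is x1 = x2 = C / (n1 + n2) regardless of alpha
for alpha in (0.5, 1.0, 2.0):
    x1, x2 = two_class_allocation(3, 7, 5.0, alpha_utility(alpha))
```

the equal split C / ( n1 + n2 ) falls out of the stationarity condition @xmath30 specialized to a single resource : all flows see the same multiplier , hence the same rate .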
we define a _ cluster _ to be a non - empty pair @xmath35 with @xmath36 , @xmath37 such that , if @xmath38 , then @xmath39 consists of all @xmath40 for which there exists a path @xmath41 such that , for a feasible allocation @xmath32 solving ( [ opti2 ] ) , @xmath42 and @xmath43 are strictly positive for all @xmath44 . similarly , @xmath45 consists of all @xmath46 for which there exists a path @xmath47 such that , for a feasible allocation @xmath32 solving ( [ opti2 ] ) , @xmath42 and @xmath43 are strictly positive for all @xmath44 . with @xmath48 this indeed gives a unique partition , since it can be checked that the existence of a path between @xmath8 and @xmath28 , or respectively between @xmath8 and @xmath49 or @xmath28 and @xmath49 , is an equivalence relation . it can be easily deduced from ( [ allocation ] ) that if @xmath8 and @xmath50 are in the same cluster then @xmath51 . intuitively , a cluster @xmath35 is such that the optimal rate allocation is as if all resources of @xmath45 are pooled together , with exactly all flows of @xmath39 having access to this pool . consider figure 1 , which shows a circle network of four resources ( described by squares labelled 1 , 2 , 3 , and 4 ) of unit capacity and four flow types ( described by circles labelled 1 , 2 , 3 , and 4 ) . each type @xmath8 can use resources @xmath8 and @xmath52 , which is illustrated by edges ( where @xmath53 is understood as 1 ) . in this example @xmath54 , the number of flows of type 1 , is relatively large , i.e. , suppose @xmath55 , so that flows of type 1 monopolize resources 1 and 2 . types 2 , 3 and 4 share resources 3 and 4 equally . thus we have two clusters , one consisting of flows of type 1 and resources 1 and 2 , and another consisting of flow types 2 , 3 , and 4 and resources 3 and 4 . as a consequence of the previous definition , all flows @xmath8 in the same cluster will have the same rate @xmath56 .
if we have two distinct clusters in which the common rate @xmath56 is the same , then we say that these clusters are at the same _ cluster level_. for a cluster @xmath35 we then have that the set @xmath57 , i.e. , the set consisting of all @xmath8 that have to go through @xmath45 , is a subset of @xmath39 . we will be particularly interested in the event of complete resource pooling , i.e. , when @xmath34 is just a single cluster . the following notions of connectivity help us to characterize the structure of the network and to identify clusters . these notions concern the general structure of @xmath58 without taking into account the allocation at state @xmath59 . a pair @xmath35 with @xmath36 , @xmath37 is _ connected _ if @xmath45 can not be partitioned into nonempty @xmath60 such that @xmath39 can be partitioned into @xmath61 , @xmath62 with @xmath63 for @xmath64 and @xmath65 for @xmath66 . equivalently , for all @xmath67 there is a path @xmath68 such that @xmath69 for all @xmath70 . regarding @xmath58 as a bipartite graph with nodes @xmath6 and @xmath4 , and an edge between @xmath71 and @xmath46 if and only if @xmath72 , we see that a pair @xmath35 is connected if and only if the subgraph @xmath35 is connected . we can also see that for @xmath35 to be a cluster it has to be connected . a subset @xmath37 is _ connected _ if @xmath73 is connected . we define @xmath74 to be the set of _ connected _ subsets of @xmath4 . a subset @xmath37 is _ strongly connected _ if @xmath75 is _ connected_. let @xmath76 be the set of _ strongly connected _ subsets of @xmath4 . figure 2 demonstrates connected sets of the circle network with four resources from figure 1 . any set of consecutive resources @xmath77 , @xmath78 and also @xmath79 and @xmath80 are connected , as are the sets isomorphic to these . here , all of these sets are also strongly connected : consider e.g. @xmath81 ; any splitting of @xmath78 into two non - empty sets would lose either user 2 or 3 .
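the connectivity notions can be checked mechanically on the figure - 1 circle network . the sketch below ( our own illustration , not from the paper ; the route sets encode ` user @xmath8 uses resources @xmath8 and @xmath52 ' ) tests connectivity of the bipartite subgraph , using all users touching a resource set for ` connected ' and only the users confined to the set for ` strongly connected ' .

```python
# figure-1 circle network: four unit resources, user i uses {i, i+1 mod 4}
routes = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 1}}

def pair_connected(S, A):
    """is the bipartite subgraph on users S and resources A connected?"""
    if not S or not A:
        return False
    seen_u, seen_r = set(), set()
    stack = [("u", next(iter(S)))]
    while stack:
        kind, v = stack.pop()
        if kind == "u" and v not in seen_u:
            seen_u.add(v)
            stack += [("r", j) for j in routes[v] & A]
        elif kind == "r" and v not in seen_r:
            seen_r.add(v)
            stack += [("u", i) for i in S if v in routes[i]]
    return seen_u == set(S) and seen_r == set(A)

def connected(A):
    # pair with all users touching A
    return pair_connected({i for i, r in routes.items() if r & A}, A)

def strongly_connected(A):
    # pair with only the users whose whole route set lies inside A
    return pair_connected({i for i, r in routes.items() if r <= A}, A)
```

on this instance consecutive runs of resources are both connected and strongly connected , while a non - consecutive set such as { 1 , 3 } is neither .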
it is immediate that ` strongly connected ' implies ` connected ' . however , the converse is not true , as figure 3 illustrates . here the set of resources 1 , 2 and 3 is connected , but _ not _ strongly connected , since we can partition it into subsets @xmath77 and @xmath82 . then @xmath83 , which contains flow type 1 only , is the same as @xmath84 . the notion of strongly connected sets is important because it enables us to identify the generalized cut constraints in the next section , and because such sets are also essential to the rate - allocation algorithm of section 3 . the feasibility constraints from ( [ opti2 ] ) are inconvenient because they involve additional variables @xmath27 for which there is no unique solution . however , we can rewrite them in terms of the @xmath56 as @xmath85 . these inequalities are called generalized cut constraints . they will be indispensable in the next section when proving the validity of the optimal allocation attained by our proposed algorithm . the equivalence of these generalized cut constraints to the feasibility constraints from ( [ opti2 ] ) is proved in lemma [ theoremgcc ] in the appendix . @xcite have shown in their proposition 5.1 that it is possible even for the general multipath network to write the feasibility constraints in terms of generalized cut constraints as @xmath86 , where @xmath87 is some non - negative matrix and @xmath88 is a positive vector ; here @xmath89 refers to the vector with @xmath8-th coordinate equalling @xmath90 . however , no explicit expression for @xmath87 and @xmath88 has been found so far for such a general multipath network . so far we have characterized the rate allocation ( for a given number of flows ) as the solution to an optimization problem ; we did not give any explicit expressions . this section gives a ( finite ) algorithm that identifies this allocation , and which in addition yields the clustering .
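as a concrete check of the generalized cut constraints for routes of length one , the sketch below ( ours , not from the paper ; it hard - codes the figure - 1 circle network with illustrative counts n = ( 10 , 1 , 1 , 1 ) ) declares an allocation feasible exactly when , for every strongly connected set of resources , the flows confined to that set do not overload its total capacity .

```python
from itertools import combinations

# figure-1 circle network with illustrative counts n = (10, 1, 1, 1)
routes = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 1}}
capacity = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
n = {1: 10, 2: 1, 3: 1, 4: 1}

def pair_connected(S, A):
    # is the bipartite subgraph on users S and resources A connected?
    if not S or not A:
        return False
    seen_u, seen_r = set(), set()
    stack = [("u", next(iter(S)))]
    while stack:
        kind, v = stack.pop()
        if kind == "u" and v not in seen_u:
            seen_u.add(v)
            stack += [("r", j) for j in routes[v] & A]
        elif kind == "r" and v not in seen_r:
            seen_r.add(v)
            stack += [("u", i) for i in S if v in routes[i]]
    return seen_u == set(S) and seen_r == set(A)

def strongly_connected_sets():
    res = sorted(capacity)
    for k in range(1, len(res) + 1):
        for A in map(set, combinations(res, k)):
            S = {i for i, r in routes.items() if r <= A}
            if pair_connected(S, A):
                yield S, A

def feasible(x, tol=1e-9):
    # generalized cut constraints: for every strongly connected A the flows
    # confined to A (the set S(A)) must not overload A's total capacity
    return all(sum(n[i] * x[i] for i in S) <= sum(capacity[j] for j in A) + tol
               for S, A in strongly_connected_sets())
```

for instance , the clustered allocation from the figure - 1 discussion passes , while giving every flow rate 0.5 violates the cut over resources { 1 , 2 } .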
finally , we also present explicit expressions for the minimal and maximal achieved rates ( over all classes ) . to ease notation we write in the following for @xmath37 @xmath91 . also , for @xmath92 , we write @xmath93 . note that this is the rate in cluster @xmath35 if @xmath35 is a cluster . if @xmath35 is a union of clusters @xmath94 , then @xmath95 , which means that @xmath96 is a weighted harmonic mean of the rates @xmath97 of clusters @xmath94 with corresponding weights @xmath98 . it is important to note that the same still holds if the @xmath94 are not necessarily clusters , i.e. , when we only know that @xmath35 is a union of pairwise disjoint @xmath94 . in the sequel we write @xmath99 to denote a weighted harmonic mean of @xmath100 , with weights @xmath101 , @xmath102 . in the cases we deal with , the magnitude of the weights will be either unimportant or obvious , e.g. , when @xmath103 , the corresponding weight for @xmath97 is @xmath104 . now we are ready to describe the algorithm that finds the optimal allocation @xmath32 and the clustering , when the number of flows present is given by the vector @xmath59 . [ algorithm ] for general networks with routes of length one and a fixed number of flows @xmath59 the optimal @xmath105 can be obtained by the following algorithm : for @xmath106 * find the minimum @xmath107 * let @xmath104 be the union of all arguments that achieve the minimum above and @xmath108 * for all @xmath109 let @xmath56 be the minimum above . * now , set @xmath110 , @xmath111 and @xmath112 , so that we also have @xmath113 . repeat this procedure with the reduced network until we are left with a network without resources and users . moreover , the @xmath94 found at the @xmath3-th step of the algorithm is the union of all clusters at the @xmath3-th level . the strongly connected sets that are arguments of ( [ minirate ] ) , together with their corresponding @xmath114 , are the clusters of the network . observe that this algorithm is well defined .
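the algorithm can be prototyped directly by brute - force enumeration of strongly connected sets ( exponential in the number of resources , so only for small examples ) . the python sketch below ( ours , not from the paper ; all names are illustrative ) peels off cluster levels in increasing order of the rate , and reproduces the figure - 1 example : flow type 1 is confined to resources 1 and 2 at rate 2/10 , while types 2 , 3 and 4 pool resources 3 and 4 at rate 2/3 .

```python
from itertools import combinations

def users_inside(A, routes):
    # S(A): users whose whole set of usable resources lies inside A
    return {i for i, r in routes.items() if r <= A}

def pair_connected(S, A, routes):
    # is the bipartite subgraph on users S and resources A connected?
    if not S or not A:
        return False
    seen_u, seen_r = set(), set()
    stack = [("u", next(iter(S)))]
    while stack:
        kind, v = stack.pop()
        if kind == "u" and v not in seen_u:
            seen_u.add(v)
            stack += [("r", j) for j in routes[v] & A]
        elif kind == "r" and v not in seen_r:
            seen_r.add(v)
            stack += [("u", i) for i in S if v in routes[i]]
    return seen_u == S and seen_r == A

def allocate(capacity, routes, n):
    # peel off cluster levels in increasing order of rate C(A) / n(S(A))
    capacity, routes = dict(capacity), {i: set(r) for i, r in routes.items()}
    x = {}
    while routes:
        best_rate, best = None, set()
        res = sorted(capacity)
        for k in range(1, len(res) + 1):
            for A in map(set, combinations(res, k)):
                S = users_inside(A, routes)
                if not pair_connected(S, A, routes):
                    continue
                rate = sum(capacity[j] for j in A) / sum(n[i] for i in S)
                if best_rate is None or rate < best_rate - 1e-12:
                    best_rate, best = rate, set(A)
                elif abs(rate - best_rate) <= 1e-12:
                    best |= A   # union of all minimisers
        if best_rate is None:
            raise ValueError("no strongly connected set left")
        for i in users_inside(best, routes):
            x[i] = best_rate
            del routes[i]
        for j in best:
            del capacity[j]
        for i in routes:
            routes[i] -= best
    return x

# figure-1 circle network: unit capacities, user i uses resources {i, i+1}
x = allocate({1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0},
             {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 1}},
             {1: 10, 2: 1, 3: 1, 4: 1})
```

on this instance the first step selects the set { 1 , 2 } with rate 0.2 , and the second step the pooled pair { 3 , 4 } with rate 2/3 , in line with the clustering described in section 2 .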
for any finite network with set of resources @xmath115 there will be at least one , and at most finitely many , @xmath116 . thus , the minimum exists at each step . also , the algorithm will stop because the network is finite . in order to prove thm . [ algorithm ] , we need the following essential properties . the proof of the lemma is given in the appendix . [ lemma ] at each step @xmath3 of the algorithm for the corresponding @xmath34 , we have 1 . @xmath117 2 . if @xmath118 for @xmath37 then @xmath119 . 3 . @xmath120 4 . @xmath121 , i.e. , cluster levels are found in increasing order of their rates . we need to check that the allocation @xmath32 achieved in @xmath122 satisfies the relations ( [ allocation ] ) with cluster levels @xmath104 . for this we need to show the following . * at each step @xmath3 the allocation within the @xmath3-th cluster level @xmath94 is feasible , i.e. , none of the constraints of ( [ cutconstraints ] ) for @xmath123 is violated when taking the amended @xmath124 . by lemma [ lemma].1 we have @xmath125 where @xmath126 for all @xmath127 . so this implies that we have @xmath128 , as desired . also , @xmath129 and @xmath130 if @xmath131 . * the first line of ( [ allocation ] ) holds : consider any @xmath132 , not at the same level , and @xmath133 . assume @xmath134 , which implies ( by lemma [ lemma].4 ) that @xmath135 is at an earlier cluster level than @xmath136 , say @xmath94 . then since @xmath137 and @xmath138 , we know that @xmath139 . + so , @xmath28 is at a cluster level before @xmath136 and @xmath140 . * no capacity is wasted . at each cluster level @xmath94 where @xmath141 the total load is @xmath142 where the first equality follows from lemma [ lemma].3 . since we know that none of the capacity constraints is violated , we conclude that each link is fully used . + note that when @xmath143 , we have @xmath144 , so this must be at the last level .
the union of the earlier clusters then has the form @xmath145 with @xmath146 , so @xmath147 and the links in @xmath45 were redundant . this completes the proof . with the rate - allocation algorithm of thm . [ algorithm ] at our disposal , we can give an expression for the minimal and maximal rates . [ thmminmaxrate ] for general networks with routes of length one the minimal and maximal rates are @xmath148 . the proof for the minimal rate follows immediately from ( [ algorithm ] ) . for the maximal rate we know from ( [ algorithm ] ) that at the last step we are left with clusters of the form @xmath35 such that @xmath149 , so the maximum of ( [ maxrate ] ) gives an upper bound for the maximal rate . to see that this maximum is actually attained , let @xmath45 be the argument for the maximum in ( [ maxrate ] ) . then @xmath150 is the set of all flows that can possibly use resources in @xmath45 . so , a weighted harmonic mean of the rates of the flows @xmath151 is at least @xmath152 . hence there must be at least one flow having a rate greater than or equal to this , and so the expression for the maximal rate follows . theorem [ algorithm ] also allows us to give a lower bound on the rate of any flow . [ thmxirate ] for each @xmath71 we have @xmath153 . we know @xmath8 has to be in some cluster , say @xmath154 , where @xmath155 is the order in which it appears in ( [ algorithm ] ) . consider the union of the @xmath94 for @xmath156 . this will be of the form @xmath75 . since @xmath157 is a weighted harmonic mean of the rates of the clusters @xmath94 for @xmath156 , and since among those the cluster of @xmath8 has the highest rate , we find the lower bound on @xmath56 . _ in the minimizations and maximizations above we could always look at general subsets instead of connected ones , and the proofs would work in exactly the same way . however , restricting to connected or even strongly connected sets is sufficient , and has the important advantage of decreasing the number of sets over which we are minimizing .
_ where the previous section considered the rate allocation for a given number of flows , we now add a dynamic component : we let flows arrive ( according to a poisson process ) , use a set of links for a while , and then leave again . given that , at some point in time @xmath158 , there are @xmath159 flows present , the rates are allocated as in ( [ optimization ] ) . under the appropriate exponentiality assumption regarding the flow size distribution , @xmath160 is a continuous - time markov chain . we distinguish between two different kinds of traffic : streaming and elastic traffic . streaming flows have a random but fixed duration that is independent of the current level of congestion ; think of voice or streaming video . elastic flows have a random but fixed size ( say , in mbits ) . in the sequel we consider three cases : one with purely streaming traffic , one with integrated streaming and elastic traffic , and one with elastic traffic only , in which we impose a peak rate . if only streaming traffic is present , then the transition rates do not depend on the resource allocation @xmath32 . assume that flows of type @xmath8 arrive according to a poisson process with rate @xmath161 , and that their duration is exponentially distributed with mean @xmath162 . with @xmath163 denoting the number of streaming flows of type @xmath8 at time @xmath158 , it is evident that the transition rates are those of a multidimensional m / g/@xmath164 queue : @xmath165 . it follows immediately that in steady - state the @xmath166 are independent poisson variables with mean @xmath167 . we are interested in the probability of a rate becoming very small . for positive real @xmath168 we can evaluate the probability @xmath169 where @xmath170 is a convex subset of @xmath171 depending on @xmath168 . it contains some neighborhood of the origin and is bounded by hyperplanes @xmath172 .
it is natural to assume that streaming flows are blocked if the allocated rate falls below a certain threshold . denoting the set of feasible states by @xmath87 , we obtain from @xcite that the corresponding equilibrium distribution is proportional to the multivariate - poisson distribution given above . in particular , @xmath173 where @xmath174 is the set of vectors @xmath175 such that @xmath176 , with @xmath177 representing the @xmath8-th unit vector . we end with the obvious remark that since the equilibrium distribution is of product form , it is the same for _ any _ duration distribution with means @xmath162 ; the blocking probability is insensitive to the service time distribution . here we review the model of integrated streaming and elastic traffic that was considered by key and massoulié @xcite . importantly , in this model streaming and elastic traffic are treated the same , in the following sense . the rate allocation is computed for _ both _ streaming and elastic flows through ( [ optimization ] ) . streaming flows terminate ` autonomously ' , that is , at a rate that does not depend on the current congestion level ( and hence also not on their current transmission rates ) . elastic flows , however , terminate at a rate that is proportional to their momentary transmission rate . with @xmath178 corresponding to elastic flows and @xmath179 to streaming flows , this leads , in self - evident notation , to a markov chain with transition rates : @xmath180 . in @xcite it was shown that in this model the addition of streaming traffic brings some sort of ` stabilization effect ' to the network ; as a result , the model has a non - trivial fluid limit ( that is , a fluid limit with an equilibrium point that does not equal 0 ) .
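returning to the purely streaming model with blocking , the product - form result can be made explicit for a single resource . the sketch below ( ours , not from the paper ; capacity , load and threshold are illustrative ) admits a new flow only while the equal share stays above the threshold , so at most n_max flows are present ; the equilibrium is then a truncated poisson distribution , and by pasta the blocking probability is the mass of the boundary state ( the erlang - b formula ) .

```python
from math import factorial

def truncated_poisson(rho, n_max):
    # equilibrium of the m/g/oo dynamics truncated at n_max admitted flows
    w = [rho ** k / factorial(k) for k in range(n_max + 1)]
    z = sum(w)
    return [v / z for v in w]

# capacity c = 1, equal share c/n, threshold eps = 0.25  ->  n_max = 4 flows
pi = truncated_poisson(rho=2.0, n_max=4)
blocking = pi[-1]   # pasta: an arrival is blocked exactly in the boundary state
```

consistently with the insensitivity remark above , this distribution depends on the duration distribution only through its mean , via the load rho .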
they claim that a necessary and sufficient condition for stability that the @xmath181 have to satisfy is @xmath182 . with the generalized cut constraints ( [ cutconstraints ] ) we can rewrite this condition as @xmath183 . note that the stability constraints do not involve the streaming flows ; as their rate can be pushed down arbitrarily low , their presence has no impact on the stability of the network ( which may be considered a less realistic feature of this setup ) . an important observation is that no closed - form expression for the equilibrium distribution of this markov chain is available . finally , we consider a network with elastic traffic only . we impose a peak rate on the flows ' transmission rates . as argued in @xcite , this is convenient as it facilitates the analysis of the corresponding fluid limit ( see section 5 of the present paper ) ; in fact , the introduction of peak rates again has a ` stabilization effect ' ( cf . the previous subsection ) , in that it leads to non - trivial fluid limits . in addition , it is remarked that imposing peak rates is quite natural : it is not realistic that flows can transmit at link speed ; in fact the peak rates can be interpreted as the access rates of the users . we denote the peak rate of type @xmath8 by @xmath184 . in self - evident notation , this leads to a markov chain with transition rates @xmath185 . here we want the loads @xmath181 to be such that the stability condition ( [ cond21 ] ) for the integrated streaming and elastic traffic case of @xcite holds , which again translates to ( [ stability ] ) . it is conceivable that this is the stability condition for this case too . it is noted that , again , no closed - form expression for the equilibrium distribution of this markov chain is available . as mentioned above , the markov chains presented in the previous section do not lead to a closed - form steady - state distribution ( except for the purely streaming case ) .
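the rewritten stability condition can again be checked by enumeration . the sketch below ( ours , not from the paper ; same figure - 1 circle network , illustrative loads ) declares a load vector stable when every strongly connected resource set has strictly more capacity than the total load of the flows confined to it .

```python
from itertools import combinations

# figure-1 circle network (illustrative)
routes = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 1}}
capacity = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}

def pair_connected(S, A):
    # is the bipartite subgraph on users S and resources A connected?
    if not S or not A:
        return False
    seen_u, seen_r = set(), set()
    stack = [("u", next(iter(S)))]
    while stack:
        kind, v = stack.pop()
        if kind == "u" and v not in seen_u:
            seen_u.add(v)
            stack += [("r", j) for j in routes[v] & A]
        elif kind == "r" and v not in seen_r:
            seen_r.add(v)
            stack += [("u", i) for i in S if v in routes[i]]
    return seen_u == set(S) and seen_r == set(A)

def stable(rho):
    # for every strongly connected A: total load of the flows confined to A
    # must stay strictly below the capacity of A
    res = sorted(capacity)
    for k in range(1, len(res) + 1):
        for A in map(set, combinations(res, k)):
            S = {i for i, r in routes.items() if r <= A}
            if pair_connected(S, A) and \
               sum(rho[i] for i in S) >= sum(capacity[j] for j in A):
                return False
    return True
```

for example , on the circle network a class - 1 load of 1.5 is still stable , while a load of 2.5 overloads the cut over resources { 1 , 2 } .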
this motivates why we now explore these systems under a fluid scaling . loosely speaking , a fluid scaling for the process @xmath178 is the sped - up process @xmath186 that we obtain when we scale up arrival rates and capacities by the same factor @xmath187 . the fluid limit is then the rescaled process @xmath188 , in the limit as @xmath189 , which gives a reasonable approximation of the real system when arrival rates and capacities are large . in our work we will not treat in detail the convergence issues that play a role here ; see for instance @xcite for more background . in the previous section , we mentioned that either adding streaming traffic or imposing peak rates has the effect that we obtain a non - trivial fluid limit . to explain what goes wrong otherwise , consider the following argument . in the case of elastic traffic only , without a peak rate , the dynamics of the fluid limit are given through @xmath190 , so that the equilibrium point should satisfy @xmath191 . but with the stability condition ( [ stability ] ) this contradicts the capacity constraints being tight at the optimal @xmath32 , so no equilibrium point other than @xmath192 can exist in this case , where @xmath192 is a point of discontinuity of the gradient of the fluid limit . if , however , the case of integrated streaming and elastic traffic , or the case of peak - rate constrained elastic traffic is considered , we _ do _ get a non - trivial fluid limit . we show this for the peak - rate constrained case later in this section ; moreover , we give an algorithm for finding this equilibrium point . for integrated streaming and elastic traffic , the existence of a unique equilibrium point was already shown in @xcite , but we give a quick review of their result . we end this section by introducing the notion of complete resource pooling , and give conditions that tell us when the equilibrium point corresponds to complete resource pooling .
from @xcite , we know that under integrated streaming and elastic traffic the fluid limit becomes @xmath193 . then for an equilibrium point @xmath194 of the fluid limit we have @xmath195 , where @xmath196 is the rate allocation at the equilibrium point @xmath197 . in @xcite it is shown that @xmath196 is the unique solution to the optimization problem @xmath198 . then @xmath199 and @xmath200 determine @xmath201 uniquely ; hence the equilibrium point is unique . on the other hand , with our generalized cut constraints ( [ cutconstraints ] ) from section 2 , this optimization problem translates to @xmath202 . here the strict concavity of the optimization problem is immediate , which implies the uniqueness of the optimum and of the equilibrium point . since our optimization problem is now exactly of the form ( [ opti2 ] ) , the equilibrium point can be found by the algorithm in section 3 . by introducing peak rates @xmath203 , we will always have an equilibrium point for any network with routes of length one . the fluid limit becomes @xmath204 , so that an equilibrium point satisfies @xmath205 . it is instructive to see , as an example , what happens in the symmetric case , i.e. , @xmath206 for all @xmath8 . then the unique equilibrium is @xmath201 with @xmath207 . this equilibrium point exists if @xmath208 for all @xmath8 , i.e. , @xmath209 , which reduces to the stability condition @xmath210 . no other equilibrium point can exist , which can be seen as follows . suppose there were another equilibrium point , i.e. , there are certain @xmath8 that are peak - rate constrained , say all @xmath211 , and others that are not . then @xmath211 if and only if @xmath212 . so , for the cluster level @xmath75 with minimal rate under the usual allocation @xmath213 , we must have @xmath214 .
then for @xmath151 , we have that @xmath215 . we know from ( [ allocation ] ) that flows @xmath8 of @xmath114 will take up all the resources in @xmath45 : @xmath216 . however , at our equilibrium point the left - hand side equals @xmath217 , and hence the former equality would violate the stability condition . now , for general peak rates we introduce the following theorem , which states the existence of a unique equilibrium point under some condition and also provides an algorithm that finds this equilibrium point . [ prallocation ] consider a network @xmath218 with routes of length one , traffic intensities @xmath219 satisfying the stability condition ( [ stability ] ) , and @xmath220 . let the peak rates be ordered : @xmath221 . then there is always a unique equilibrium point @xmath201 for the fluid limit . it can be determined by the following algorithm : for @xmath106 , * find the minimum @xmath222 * let @xmath104 be the union of all strongly connected sets that achieve the minimum above , @xmath108 * let @xmath223 be the unique value of @xmath224 such that @xmath225 $ ] . * for all @xmath109 set @xmath226 . * for @xmath227 set @xmath228 , for @xmath229 set @xmath230 . * now , set @xmath231 , @xmath111 and @xmath112 , so that we also have @xmath113 . repeat this procedure with the reduced network until we are left with a network without resources and users . moreover , the @xmath94 achieved at each step of the algorithm gives the @xmath3-th cluster level at the equilibrium point @xmath201 . exactly the flows in @xmath232 are peak - rate constrained . _ we need the condition @xmath233 because otherwise we could have a range of equilibrium points .
consider , for example , the circle network with @xmath234 , @xmath235 and @xmath236 . then for all @xmath237 $ ] we have an equilibrium point : @xmath238 . our algorithm would not recognize the full range of equilibrium points , but would associate @xmath239 with the smallest cluster level @xmath3 such that @xmath240 , and as such give it the same rate . _ if the conditions @xmath233 hold , then at an equilibrium point @xmath201 we must have at least one peak - rate constrained flow in every cluster . this is because otherwise @xmath241 . then , given a cluster @xmath35 of some equilibrium point , the following lemma tells us that we can uniquely determine the rate within this cluster and thus also which flows of @xmath39 are peak - rate constrained . [ lemmari ] for @xmath35 , @xmath242 and @xmath243 , @xmath88 such that @xmath244 , the following holds : 1 . there is a unique @xmath245 such that @xmath246 . for this @xmath224 any @xmath247 with @xmath248 minimizes @xmath249 . 2 . for @xmath250 we have @xmath251 and for @xmath252 @xmath253 . here the fraction @xmath254 is the rate @xmath199 for a cluster @xmath35 at an equilibrium point if exactly the flows @xmath211 are peak - rate constrained . this follows from the fact that flows @xmath8 that are not peak - rate constrained will take up capacity @xmath255 of the cluster . then the remaining flows in the cluster share the remaining capacity among each other . for @xmath201 with @xmath256 to be a feasible equilibrium point , we need @xmath257 for @xmath258 and @xmath259 for @xmath260 . the lemma says that if there is an equilibrium point with cluster @xmath35 , then there is a unique @xmath224 such that @xmath38 are peak - rate constrained exactly if @xmath261 . in particular the lemma already implies that only one equilibrium point in complete resource pooling can exist . for a proof of this lemma we refer to the appendix .
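returning to the symmetric case treated above ( @xmath206 for all @xmath8 ) , the fixed point of the fluid limit can be confirmed with a small euler integration . the sketch below ( ours , not from the paper ; all parameter values are illustrative , and we assume the symmetric circle network stays in complete resource pooling , so that each class is served at rate min ( m , c / ( k z ) ) ) converges to the predicted equilibrium lambda / ( mu m ) , at which all flows are peak - rate constrained .

```python
# symmetric circle network: K classes, total capacity C, per-class arrival
# rate lam, service rate mu, common peak rate m (all values illustrative)
lam, mu, m, K, C = 0.8, 1.0, 2.0, 4, 4.0
assert K * lam / mu < C          # stability condition

z, dt = 5.0, 1e-3                # per-class fluid mass, started far away
for _ in range(200_000):
    share = min(m, C / (K * z))  # assumed complete-resource-pooling rate
    z += dt * (lam - mu * z * share)   # fluid dynamics: arrivals minus departures

z_star = lam / (mu * m)          # predicted equilibrium: flows peak-rate constrained
```

the trajectory first drains at the constant rate lam - mu * c / k while the peak rate is not binding , and then settles exponentially fast at z_star .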
to ease notation we write in the following @xmath262 . we separate the proof into different parts : 1 . the algorithm is well - defined ; 2 . the algorithm gives us a feasible equilibrium point ; 3 . if we have an equilibrium point @xmath201 , then this @xmath201 is found by the algorithm . here ( ii ) gives us the existence of an equilibrium point , whereas ( iii ) proves uniqueness . we will prove the parts of the theorem one by one : 1 . observe that if the stability conditions ( [ stability ] ) hold , the numerator of ( [ minprate ] ) is positive for all @xmath263 . then the minimum is well - defined and positive and attained for some @xmath45 and @xmath264 . hence by lemma [ lemmari ] the minimum is greater than @xmath265 and there exists a unique @xmath266 such that the minimum is in @xmath267 $ ] . + also , if the stability conditions hold , by lemma [ lemmari ] a unique @xmath224 corresponding to a specific @xmath45 is well - defined , and if @xmath263 and @xmath268 achieve the same minimum then @xmath269 , since the fractions are by lemma [ lemmari ] in @xmath270 $ ] and @xmath271 $ ] respectively . we only need to check that the stability conditions hold at each step of the algorithm . we know the stability conditions hold initially , and therefore , by lemma [ lemmari].1 and lemmas [ aplemma].1 , [ aplemma].2 , and [ aplemma].3 , we have for any @xmath272 @xmath273 . we thus have @xmath274 , so that the stability conditions hold at the second step of the algorithm . it follows that @xmath275 , @xmath276 and @xmath277 are well defined . then we can use lemma [ lemmari].1 and lemma [ aplemma ] again , and conclude by induction that the stability conditions hold at any step @xmath3 . _ to see why the stability conditions are important here , assume for the moment that they do not hold and that we have some @xmath278 with @xmath279 .
then for @xmath280 , the numerator would be 0 or negative ( depending on whether the inequality holds strictly ) and the sum in the denominator would be empty . then the fraction could be made @xmath281 or @xmath282 respectively , and the minimization would not give us a feasible rate . we want to show that the @xmath283 found by the algorithm do indeed correspond to an equilibrium point : ( a ) the @xmath199 is in line with the allocation @xmath213 from theorem [ algorithm ] , and ( b ) it holds that @xmath284 . the latter equations follow immediately from the construction of @xmath201 and @xmath285 , in conjunction with lemma [ lemmari].1 . to see that @xmath286 , we have to show that @xmath287 is the union of strongly connected @xmath45 minimizing ( [ minirate ] ) , which reads @xmath288 . by the choice of @xmath287 and @xmath289 , and invoking lemma [ aplemma].3 , we know @xmath290 . by lemma [ lemmari].1 the left - hand side is smaller than or equal to @xmath291 , which implies @xmath292 , which is equivalent to @xmath293 . now , by lemma [ aplemma].4 @xmath294 , so that the right - hand side of ( [ minfrac ] ) is greater than or equal to the left - hand side of the previous equation . we thus have that the right - hand side of ( [ minfrac ] ) is greater than or equal to @xmath295 , and therefore by the last equality in ( [ eq ] ) @xmath287 does indeed minimize ( [ minfrac ] ) . to see that @xmath287 is the union of all @xmath45 minimizing this fraction , observe that equality holds in the above argument if and only if @xmath296 , @xmath297 and @xmath298 . the latter equation is equivalent to @xmath299 , so @xmath287 is indeed the union of all @xmath45 minimizing the above . + since we have shown in ( i ) that the stability conditions still hold for the reduced network , it follows in the same way that @xmath104 is the solution of the @xmath3-th step of the algorithm of thm .
[ algorithm ] , and so this allocation algorithm for @xmath199 is indeed in line with thm . [ algorithm ] . hence , the @xmath201 constructed by the algorithm is an equilibrium point . suppose we have an equilibrium point @xmath201 with @xmath300 cluster levels @xmath94 corresponding to rates @xmath301 determined by ( [ algorithm ] ) and @xmath302 . with conditions ( [ neqcond ] ) in force , we have that @xmath303 , and as a consequence we know there must be at least one flow in each cluster that is peak - rate constrained . hence by lemma [ lemmari].1 , we have that @xmath223 is the unique value such that @xmath304 . we can easily see that @xmath223 must be non - decreasing in @xmath3 , since the rate @xmath301 is increasing with increasing level , and so flows are more likely to be peak - rate constrained . we want to show that @xmath287 is the union of all @xmath45 minimizing ( [ minprate ] ) , that is , @xmath305 . by lemma [ lemmari].1 it is enough to show that each @xmath104 is the solution of the @xmath3-th step of the algorithm of thm . [ prallocation ] . take any @xmath116 and partition it into @xmath306 . let @xmath307 be @xmath308 . due to lemmas [ lemma].1 and [ lemma].3 we know @xmath309 . now , by lemma [ lemmari].1 , it holds that @xmath310 , and therefore by lemmas [ aplemma2 ] and [ aplemma3 ] we find @xmath311 . then @xmath312 for all @xmath263 , so that @xmath287 and @xmath289 minimize ( [ minprate ] ) and the corresponding minimum is @xmath295 . now equality holds in the last inequality only if @xmath313 , so all strongly connected @xmath45 minimizing ( [ minprate ] ) are subsets of @xmath287 . conversely , if we partition @xmath287 into strongly connected sets @xmath314 , then @xmath315 . since we know equality holds , we must have @xmath316 for all such @xmath314 . + hence , @xmath287 , the first cluster level of an equilibrium point , must be the union of all strongly connected sets achieving the minimum in ( [ minprate ] ) .
since by lemma [ aplemma2 ] the stability conditions still hold for the reduced network , as does condition ( [ neqcond ] ) , we can show in the same way that @xmath104 is the solution of the @xmath3-th step of the algorithm in thm . [ prallocation ] . hence , any equilibrium point can be found by the algorithm . thus , the algorithm determines the unique equilibrium point . the first goal of this section is to formally define what we mean by _ complete resource pooling _ . we remark that laws @xcite and turner @xcite , among others , used this notion for networks where users choose one of a possible set of paths . we say that in a network @xmath218 the state @xmath317 describing the number of flows is in _ complete resource pooling _ if there exists a neighborhood of @xmath59 such that for each @xmath318 in this neighborhood the optimal @xmath319 determined by ( [ optimization ] ) satisfies @xmath320 . this means that if @xmath59 is in complete resource pooling , then the system behaves as if the total capacity is pooled in one resource with every flow having access to it . it is desirable that the equilibrium point of the fluid limit is in complete resource pooling , as then the capacities are used efficiently in that resources are shared equally over all flows . indeed , for a given total sum of capacities and total number of flows , the utility is maximized in a complete resource pooling state . also , a nice advantage is that when we know a state @xmath59 is already in complete resource pooling , then increasing the number of routes that a flow can use will not change the optimal allocation @xmath321 . in particular this implies that the diffusion will be unchanged , as we will see in the next section . however , we note that increasing the number of possible routes for one flow will increase the set of states @xmath59 that are in complete resource pooling . for these reasons we want to investigate when the equilibrium point is in complete resource pooling .
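As an illustration of the complete-resource-pooling property just defined, the following sketch solves a proportional-fairness rate allocation (our stand-in for the paper's masked optimization problem ([optimization])) on a toy network of two resources and two flow types, each type allowed to split traffic over both routes, and checks that every flow receives the pooled rate, total capacity divided by total number of flows. All numbers and the choice of log utility are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch: proportional-fairness allocation on a 2-resource,
# 2-type network where every type may use both routes, to illustrate
# complete resource pooling (all flows get total capacity / total flows).
import numpy as np
from scipy.optimize import minimize

n = np.array([2.0, 3.0])      # number of flows of each type
c = np.array([1.0, 1.0])      # capacity of each resource

def neg_utility(a):
    # a[i, r] = rate a single type-i flow receives on route r
    rates = a.reshape(2, 2).sum(axis=1)          # total rate per flow
    return -np.sum(n * np.log(np.maximum(rates, 1e-12)))

cons = [{"type": "ineq",
         "fun": lambda a, r=r: c[r] - n @ a.reshape(2, 2)[:, r]}
        for r in range(2)]                       # capacity per resource

res = minimize(neg_utility, x0=np.full(4, 0.1), method="SLSQP",
               bounds=[(0.0, None)] * 4, constraints=cons)
rates = res.x.reshape(2, 2).sum(axis=1)
pooled = c.sum() / n.sum()                       # pooled per-flow rate
```

With full routing the optimizer drives both per-flow rates to the common pooled value (0.4 here), exactly the behavior the definition above describes: the network acts like a single resource of capacity @xmath322-style total shared by all flows.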
in the case of integrated streaming and elastic traffic we can determine more explicitly when the equilibrium point is in complete resource pooling . in the integrated streaming and elastic traffic case , a necessary and sufficient condition for the equilibrium point being in complete resource pooling is @xmath322 . consider @xmath323 , @xmath324 for all @xmath71 , where @xmath325 . then @xmath32 satisfies @xmath326 , so it is the common rate if we have complete resource pooling at @xmath327 , and if so , @xmath327 will be an equilibrium point . in order for @xmath32 to be the rate @xmath328 for all @xmath8 determined by ( [ allocation ] ) , we require for all @xmath116 that @xmath329 . substituting for @xmath32 , this is equivalent to @xmath330 . since by stability the right - hand side is strictly negative , the inequality is satisfied if the left - hand side is non - negative . thus , the condition in ( [ poolingprc ] ) is also sufficient for complete resource pooling in this case . in the integrated streaming and elastic traffic case , a sufficient condition for the equilibrium point being in complete resource pooling is @xmath331 . we will use these results about the equilibrium point being in complete resource pooling in the next section when we study the diffusion approximation of the processes . the following theorem determines a sufficient condition for complete resource pooling depending only on the traffic intensities @xmath219 of elastic traffic and not on the peak rates . [ poolingprc ] in the peak - rate constrained case , a sufficient condition for the existence of an equilibrium point in complete resource pooling is @xmath332 . if the equilibrium point is in complete resource pooling , the common rate must be @xmath333 . we want to show that at @xmath201 with @xmath228 for @xmath261 and @xmath334 for @xmath335 , we indeed have complete resource pooling under our condition .
suppose not , and let @xmath45 be the cluster with maximal rate @xmath336 ; then by ( [ algorithm ] ) @xmath337 , contradicting the sufficient condition . in general , by ( [ prallocation ] ) the equilibrium point is in complete resource pooling if and only if @xmath338 for all @xmath116 , where @xmath224 minimizes the left - hand side over @xmath339 . however , unlike the previous sufficient condition , these inequalities do not have a clear interpretation . in this section we study the diffusion limit of our processes around the equilibrium point @xmath201 of the fluid limit that we identified in the previous section . we do so for a special type of network , namely the _ symmetric circle network _ , which we introduce in detail below ; the setting we consider is that of integrated streaming and elastic traffic . our most important contribution is that we explicitly determine the stationary distribution to which the diffusion limit converges as @xmath158 tends to infinity . in particular we calculate the covariances of the numbers of flows of different types , and we prove that these are positive . the last part of the section will be devoted to obtaining insight into the state of the network conditional on congestion , where we define congestion as the event that at least one of the user types gets a transmission rate that is less than a certain critical value @xmath340 . based on this stationary distribution we develop an estimate for the probability that , given congestion , the flow type with this minimal rate is in a cluster of size @xmath3 , and compare these probabilities for different values of @xmath3 . we find that in the limiting regime where @xmath341 , the size of the network , goes to infinity , the most likely value for @xmath3 is @xmath2 , where @xmath0 is the number of routes that each flow is allowed to use .
we consider the fluid limit for a process of @xmath342 types of flows , determined by the ( coupled ) differential equations @xmath343 , which have a unique equilibrium point , say @xmath201 . the diffusion limit is then the process @xmath344 in the limit as @xmath187 goes to infinity . this process , call it @xmath345 , approximately satisfies the following stochastic differential equation @xmath346 where * the matrix @xmath347 is obtained from the linearization around the equilibrium point @xmath348 , that is @xmath349 * @xmath350 is @xmath351 where @xmath350 is an @xmath342-dimensional vector of independent standard brownian motions and @xmath352 the diagonal matrix with entries @xmath353 . as @xmath158 tends to infinity , this diffusion limit process converges to a multivariate normal distribution with mean zero and a covariance matrix that can be determined via @xmath354 ; see e.g. @xcite for more background on this approach . we point out that we do not prove these convergence results rigorously for our case . in general , calculating @xmath355 , as well as the covariance matrix , in closed form is not possible . however , if we assume the equilibrium point is in complete resource pooling , all of the rates are of the form @xmath356 , and a small perturbation of the number of flows of any type will not affect that we are in complete resource pooling . this means that @xmath357 , evaluated at @xmath358 , is the same for all @xmath359 , and the matrix @xmath347 will be of the form @xmath360 for peak - rate constrained elastic traffic and @xmath361 for integrated streaming and elastic traffic , where the first @xmath341 indices correspond to the elastic user types , and the last @xmath341 indices to the streaming user types . an obvious model that eases the calculation is one in which the equilibrium point is in complete resource pooling , because then the derivative @xmath362 is the same for all @xmath363 .
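The stationary covariance described above can be sketched concretely: for a linear SDE @xmath346-style, dX = A X dt + S dW with A stable, the limiting covariance V = \int_0^\infty e^{At} S S^T e^{A^T t} dt is the solution of the continuous Lyapunov equation A V + V A^T = -S S^T. The matrices below are small illustrative stand-ins, not the paper's (masked) drift and noise matrices.

```python
# Sketch, assuming illustrative matrices: stationary covariance of a
# stable linear diffusion via the continuous Lyapunov equation, then a
# cross-check against the defining integral (truncated at t = 20).
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])      # stable drift (eigenvalues -2 and -1)
S = np.diag([1.0, 0.5])          # noise loading
Q = S @ S.T

# solves A V + V A^T = -Q
V = solve_continuous_lyapunov(A, -Q)

# trapezoidal approximation of \int_0^20 e^{At} Q e^{A^T t} dt
dt = 0.01
V_num = np.zeros((2, 2))
prev = Q.copy()                  # integrand at t = 0
for t in np.arange(dt, 20.0 + dt, dt):
    cur = expm(A * t) @ Q @ expm(A.T * t)
    V_num += 0.5 * (prev + cur) * dt
    prev = cur
```

The Lyapunov route is what makes the Gaussian limit computable even when the integral has no convenient closed form.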
also , if this happens , the equilibrium point is unchanged if we increase the number of routes that a flow can use , and thus the diffusion will be the same . we now show how to compute the corresponding covariance matrix explicitly in the case of a purely symmetric circle network . we consider a network consisting of @xmath341 resources of certain capacities , where each resource is a route of length one . there are @xmath341 different flow types that are each sharing over @xmath0 consecutive routes , where @xmath0 is some integer greater than 1 , that is @xmath364 ( where these numbers should be taken modulo @xmath341 , to make sure that we obtain route numbers in the set @xmath365 ) . this model is an obvious one to analyze and has similarities to the supermarket model of @xcite . the difference is that we do not assume that users can use any @xmath0 routes , but only those routes that are close to them . here a strongly connected set @xmath45 is such that @xmath366 for some @xmath367 , @xmath368 and the corresponding @xmath114 is the arc @xmath369 for @xmath370 or @xmath371 for @xmath372 . then the generalized cut constraints of ( [ cutconstraints ] ) read @xmath373 for @xmath367 and @xmath374 , and @xmath375 . it turns out that the minimal rate can be evaluated by theorem [ thmminmaxrate ] as : @xmath376 . in the sequel we analyze this circle network in the purely symmetric case : the arrival rate , capacities and mean flow sizes are assumed to be uniform across all user types , i.e. , @xmath377 , @xmath378 , @xmath379 , @xmath380 for all @xmath8 and @xmath381 for all @xmath28 ; as before , we write @xmath382 . recall the dynamics of the fluid limit for integrated streaming and elastic traffic from section 5.2 . for our purely symmetric case the equilibrium point can easily be seen to be @xmath383 . we note that this is independent of @xmath0 , the number of routes that a flow can use .
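The cut constraints for the circle network can be evaluated mechanically. The sketch below is our reading of them (the exact minimal-rate formula of theorem [thmminmaxrate] is masked): a block of j consecutive flow types can only reach an arc of min(j + q - 1, N) routes, so the capacity of that arc divided by the number of flows in the block bounds the per-flow rate; minimizing over blocks gives the bound. All parameters are illustrative assumptions.

```python
# Hypothetical sketch of the circle-network cut bound: type i may use the
# q consecutive routes i, ..., i+q-1 (mod N); a block of j consecutive
# types reaches an arc of min(j + q - 1, N) unit-capacity routes.
def circle_min_rate_bound(n_flows, capacity, q):
    N = len(n_flows)
    best = float("inf")
    for start in range(N):
        flows = 0
        for j in range(1, N + 1):            # block of j consecutive types
            flows += n_flows[(start + j - 1) % N]
            arc = min(j + q - 1, N)          # routes reachable by the block
            if flows > 0:
                best = min(best, arc * capacity / flows)
    return best

# purely symmetric case: the bound collapses to total capacity / total flows,
# matching the pooled equilibrium rate described in the text
uniform = circle_min_rate_bound([2] * 6, 1.0, 2)
```

In the symmetric case the minimum is attained by the full circle (6/12 = 0.5 here), which is consistent with the equilibrium point above being independent of q.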
in the next subsection we will explicitly determine the covariance matrix using ( [ covmatrix ] ) . in this section we will denote a state ( n(t ) , m(t ) ) as a @xmath384-dimensional vector with @xmath8th entry @xmath12 for @xmath385 and @xmath386th entry @xmath387 . recalling the dynamics of the fluid limit from section 5.2 , we see that for our case @xmath347 is of the form @xmath388 where @xmath389 are all @xmath390 matrices . here @xmath87 is given by ( with @xmath391 ) @xmath392 . all entries of @xmath393 are equal and given by @xmath394 . finally , @xmath88 is a diagonal matrix , with all diagonal elements equalling @xmath395 . our first goal is to compute @xmath355 . due to the structure of @xmath347 this is of the form @xmath396\times[n+1,2n]}\\ 0\quad & e^{-ct } \end{array } \right);\ ] ] here @xmath397\times[n+1,2n]}$ ] denotes the submatrix consisting of the first @xmath341 rows and the last @xmath341 columns of a @xmath398 matrix @xmath399 . note that @xmath87 is of the form @xmath400 where @xmath401 is a generator matrix with all off - diagonal entries equalling @xmath402 , @xmath6 the identity matrix , and @xmath402 and @xmath403 are positive reals given by @xmath404 . it follows that @xmath405 where @xmath406 is a stochastic matrix . to calculate @xmath406 explicitly , observe that because of the symmetry all diagonal entries must be equal , as well as all off - diagonal entries . merging states @xmath407 to @xmath341 , we can see that @xmath408 is the same as @xmath409 for @xmath410 . calculating eigenvalues gives us @xmath411 . thus we have determined the matrix @xmath412 : for @xmath391 , @xmath413 with @xmath414 , we immediately have that @xmath415 . it remains to compute @xmath416\times[n+1,2n]}$ ] . we use that @xmath393 is a completely symmetric matrix with all entries equalling @xmath417 . observe @xmath418\times[n+1,2n]}^k=\sum_{i=0}^{k-1}{c^{k-1-i}ba^i}\ ] ] with @xmath419 we get @xmath420 . @xmath421 is a completely symmetric generator matrix , i.e.
, a generator with all diagonal entries equal , as well as all off - diagonal entries equal . proof by induction : say @xmath422 is a generator matrix with off - diagonal entries @xmath423 and diagonal ones @xmath424 . then @xmath425 and @xmath426 . since scaling and adding up completely symmetric generator matrices preserves the symmetry and the generator property , we have that @xmath427 for some completely symmetric generator matrix @xmath428 . it follows that @xmath418\times[n+1,2n]}^k=\sum_{i=0}^{k-1}{c^{k-1-i}b(-\tilde{q}+d^i i)}.\ ] ] now note that @xmath429 , since @xmath393 has all entries equal and since the columns of @xmath428 sum up to 0 . we therefore have that @xmath418\times[n+1,2n]}^k = b \sum_{i=0}^{k-1}{\eta^{k-1-i}d^i}=b \eta^{k-1}\frac{1-({d}/{\eta})^{k}}{1-{d}/{\eta } } = b \frac{\eta^k - d^k}{\eta - d},\ ] ] which implies that @xmath430\times[n+1,2n]}= \sum_{k\geq 0}{b \frac{\eta^k - d^k}{\eta - d}\frac{(-t)^k}{k!}}=b\frac{e^{-t\eta}-e^{-td}}{\eta - d}.\ ] ] now , we can calculate the covariance matrix , corresponding to the steady - state of the diffusion , as in ( [ covmatrix ] ) , with @xmath352 the diagonal matrix with entries @xmath431 . this finally gives us the covariance matrix : for @xmath391 we have @xmath432 and @xmath433 . the results on the @xmath387 are of course in line with the fact that the numbers of streaming flows correspond to independent poisson distributions . we see that all the covariances are _ strictly positive _ . this could be expected , since an increase in streaming or elastic flows of one type results in less capacity for the other flows , and therefore an increase in the number of elastic flows of other types . since the equilibrium distribution is in complete resource pooling , it does not make a difference whether these flows are close to the increasing flows or not .
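The eigenvalue argument used above for the completely symmetric generator can be checked numerically: a generator Q with all off-diagonal entries lam and diagonal -(N-1) lam has eigenvalues 0 (once) and -N lam, so e^{Qt} = J/N + e^{-N lam t}(I - J/N), with J the all-ones matrix; in particular all diagonal entries of the stochastic matrix e^{Qt} are equal, as are all off-diagonal ones. The numbers below are arbitrary illustrative choices.

```python
# Check of the merged-states/eigenvalue computation for a completely
# symmetric generator: e^{Qt} = J/N + e^{-N lam t} (I - J/N).
import numpy as np
from scipy.linalg import expm

N, lam, t = 5, 0.7, 0.3
J = np.ones((N, N))
Q = lam * (J - N * np.eye(N))    # off-diagonal lam, diagonal -(N-1) lam

P = expm(Q * t)                  # stochastic matrix of the merged chain
P_formula = J / N + np.exp(-N * lam * t) * (np.eye(N) - J / N)
```

This is exactly the structure exploited in the text to reduce e^{-At} to scalar exponentials of the two eigenvalues.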
also the decay of these covariances in @xmath341 is plausible because , given that the equilibrium point is in complete resource pooling , the impact of the increase in flows of another type should decrease in the number of different flow types . assuming this stationary distribution , @xmath434 is also multivariate normally distributed , with mean @xmath435 and covariance matrix @xmath436 . we now want to use the above results to estimate the probability that the flow of minimal rate is in a cluster of size @xmath3 , given that there is congestion . we define the event of congestion by requiring that there is at least one user type for which the rate allocated to a single flow , @xmath56 , is below a critical threshold @xmath340 . we know from ( [ minrate ] ) in section 3 that @xmath437 . we want to find the value of @xmath3 that maximizes this probability . an evident approximation , which makes sense for @xmath1 very small , is , with @xmath438 the event that user type @xmath8 is in a cluster of size @xmath3 , @xmath439 . using the multivariate normal approximation , we write this as @xmath440 . observe that for a fixed @xmath3 smaller than @xmath341 , this probability is decreasing in @xmath0 , indicating that ( as expected ) the minimal rate is increasing in @xmath0 . this observation confirms that the greater the number of routes a flow can use , the more efficiently the network operates . for fixed @xmath0 , the value of @xmath3 with the highest probability ( among all values smaller than @xmath341 ) is the one for which @xmath441 is minimal .
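A Monte-Carlo version of the congestion estimate above can be sketched as follows: sample the centered multivariate normal with the circle-network covariance structure (all variances equal, all covariances equal and positive, as derived above) and estimate the probability that some window of k consecutive types is jointly extreme, here proxied by a window sum exceeding a threshold. All parameter values are illustrative; the paper's constants are masked.

```python
# Hypothetical Monte-Carlo sketch: probability that some window of k
# consecutive types in the circle exceeds a congestion-style threshold,
# under the equal-variance / equal-covariance Gaussian limit.
import numpy as np

rng = np.random.default_rng(0)
N, var, cov = 8, 1.0, 0.1
sigma = (var - cov) * np.eye(N) + cov * np.ones((N, N))

def window_exceed_prob(samples, k, thresh):
    # cyclic sums over every window of k consecutive coordinates
    idx = np.arange(N)
    sums = np.stack([samples[:, (idx + s) % N][:, :k].sum(axis=1)
                     for s in range(N)], axis=1)
    return np.mean(sums.max(axis=1) > thresh)

samples = rng.multivariate_normal(np.zeros(N), sigma, size=20000)
p_low = window_exceed_prob(samples, 2, 1.0)    # mild threshold
p_high = window_exceed_prob(samples, 2, 3.0)   # severe threshold
```

Comparing such estimates across window sizes k is the empirical analogue of comparing the cluster-size probabilities in the text.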
for @xmath1 small , the dominating term is @xmath442 . by differentiating , one can check that if @xmath443 , the minimal value will be at @xmath444 . now , recall from the expressions for @xmath445 and @xmath446 that we can write , for some @xmath447 that does not depend on @xmath341 : @xmath448 . then , when @xmath341 grows large , @xmath446 tends to zero , which implies that @xmath443 tends to @xmath449 , which is larger than zero ; thus the most likely @xmath3 is @xmath450 , which tends to @xmath2 as @xmath451 . we want to compare the value of ( [ dominating ] ) for @xmath452 with the corresponding argument of @xmath453 in ( [ probability ] ) for @xmath372 . for @xmath452 we find @xmath454 , whereas for @xmath372 we find @xmath455 . thus , when @xmath341 is sufficiently large , and if @xmath0 is @xmath456 , @xmath452 indeed maximizes the probability of a flow with a very small rate being in a cluster of size @xmath3 . the above computations indicate that for the minimal rate to be very small , we need @xmath2 flow types to be congested . thus , in the case where we split flow over two routes only , the minimal rate can become very small as the result of the number of flows of one type growing very large . in the case @xmath457 , however , when each flow can use three routes , we need two consecutive flow types to be congested in order for the minimal rate to become very small . this seems to be a much rarer event , and it suggests that increasing the number of routes that a flow can use from 2 to 3 brings substantial performance improvements in the circle model . this is in line with results by turner @xcite , where the circle network is considered in a slightly different setting : there customers cannot split their traffic over different routes but choose the least loaded of a set of @xmath0 neighboring routes .
turner 's simulations show that for the circle network there is a considerable quantitative difference in the probabilities that a queue becomes very large for @xmath458 and @xmath457 . _ acknowledgements : _ the authors of this paper are indebted to frank kelly for many invaluable remarks and suggestions .
and m. mandjes ( 2009 ) . bandwidth sharing networks under a diffusion scaling . _ annals of operations research _ .
balanced loads in infinite networks . _ annals of applied probability _ , * 6 * , pp . 48 - 75 .
and s. shreve ( 1991 ) . _ brownian motion and stochastic calculus . _ springer - verlag , new york , usa .
( 1994 ) . _ reversibility and stochastic networks . _ wiley , chichester , uk .
stability of end - to - end algorithms for joint routing and rate control . _ computer communication review _ , * 35 * , pp . 5 - 12 .
and r. williams ( 2004 ) . state space collapse and diffusion approximation for a network operating under a fair bandwidth sharing policy . _ annals of applied probability _ .
and l. massoulié ( 2006 ) . fluid models of integrated traffic and multipath routing . _ queueing systems _ , * 53 * , pp . 85 - 98 .
and l. massoulié ( 2005 ) . integrating streaming and file transfer internet traffic : fluid and diffusion approximations . _ queueing systems _ , * 55 * , pp . 195 - 205 .
resource pooling in queueing networks with dynamic routing . _ advances in applied probability _ , * 24 * , pp . 699 - 726 .
the power of two choices in randomized load balancing . _ ieee transactions on parallel and distributed systems _ , * 12 * , pp . 1094 - 1104 .
resource pooling in stochastic networks . phd thesis , university of cambridge .
the effect of increasing routing choice on resource pooling . _ probability in the engineering and informational sciences _ , * 12 * , pp . 109 - 124 .
stability of multi - path dual congestion control algorithms . _ proc . valuetools 06 , 1st international conference on performance evaluation methodologies and tools . _
in this paper we study coordinated multipath routing at the flow - level in networks with routes of length one . as a first step the static case is considered , in which the number of flows is fixed . a clustering pattern in the rate allocation is identified , and we describe a finite algorithm to find this rate allocation and the clustering explicitly . then we consider the dynamic model , in which there are stochastic arrivals and departures ; we do so for models with both streaming and elastic traffic , and where a peak - rate is imposed on the elastic flows ( to be thought of as an access rate ) . lacking explicit expressions for the equilibrium distribution of the markov process under consideration , we study its fluid and diffusion limits ; in particular , we prove uniqueness of the equilibrium point . we demonstrate through a specific example how the diffusion limit can be identified ; it also reveals structural results about the clustering pattern when the minimal rate is very small and the network grows large .
Story highlights
- The fossil is 520 million years old and was found in China
- Using multiple images of the animal, the researchers discovered the nervous system
- They also saw the brain was like those of today's spiders and scorpions
- The work shows the early evolutionary differences, researcher says

The ancient world was full of strange animals that have gone extinct, such as a group of marine species with claw-like structures emerging from their heads. A new study suggests that these creatures were related to spiders and scorpions. Researchers discovered the fossilized remains of a species in southwest China that provides new insights into the evolution of animals in the modern era, scientists said. They report their findings in the journal Nature. Scientists believe that the creature -- 1 inch long, and with two pairs of eyes -- lived 520 million years ago and that it crawled or swam in the ocean. They were able to reconstruct the creature's nervous system to gain insights about its evolutionary relationships to animals familiar to us. "For the first time, we are able to use fossilised neural anatomy to sort out how fossil animals are related to animals today," study co-author Xiaoya Ma of the Department of Earth Sciences at the Natural History Museum in London wrote in an e-mail. This creature belongs to the Alalcomenaeus genus, and its place in the animal kingdom lies in "a group of weird extinct animals" called the "megacheiran" or "great appendage" arthropods, Ma said. The species of the Alalcomenaeus group had elongated, segmented bodies with about 12 pairs of appendages they used for swimming or crawling. They also had a pair of long, scissor-like head claws, most likely for grabbing or sensing. Scientists say the reconstruction of the new creature's nervous system is the most complete for an arthropod living at that time, in the Cambrian geological period.
The brain and central nervous system of the creature are organized in a way that is similar to those of the chelicerata, the group that includes horseshoe crabs and scorpions. This suggests a close evolutionary relationship between the ancient Alalcomenaeus and the living chelicerata. A distinct group of arthropods called the mandibulates includes lobsters, insects, centipedes and millipedes. Last year at the same site in China -- called the Chengjiang formation near Kunming -- Ma and colleagues discovered a 520 million-year-old crustacean-type nervous system in an animal called Fuxianhuia. Taken together, these discoveries suggest that by 520 million years ago, the two major groups of arthropods had diverged. Their common ancestor must have been older, researchers said. "This means the ancestors of spiders and their kin lived side by side with the ancestors of crustaceans," co-author Nick Strausfeld, neuroscience professor at the University of Arizona, said in a statement. Strausfeld's team used sophisticated imaging techniques to look at the inch-long Alalcomenaeus fossil. One kind of scan revealed that iron had built up in the nervous system as the creature fossilized. They also used a technique called computed tomography that reconstructs 3-D features. By combining these images and discarding any data that weren't in both, they were able to create a sort of negative X-ray photograph, "and out popped this beautiful nervous system in startling detail," Strausfeld said. It confirmed what scientists had believed from the creature's outward appearance: The extinct genus Alalcomenaeus was related to chelicerates (spiders, scorpions and others). They also saw that the brain in the fossil was like the brains found in modern scorpions and spiders. If researchers find a fossil with features shared by this creature and the crustacean-like fossil Ma and colleagues found last year, that could be a common ancestor of both.
There's plenty more weirdness from ancient history to uncover.

Scientists have discovered the fossilized brain of an animal that lived 520 million years ago. It is the oldest mostly intact nervous system to have ever been found. The incredible ancient brain and nervous system, described in the journal Nature, belongs to an Alacomenaeus, a member of the mega-claw family. These animals earned the name "mega-claw" (megacheiran), because they have two large scissor-like appendages that protrude from the top of their heads. Megacheirans lived in the early Cambrian-era ocean, swimming and scuttling around with nearly one dozen little legs, or swimmerettes. Scientists believe their bizarre head appendages were used to grab food or feel around their ancient ocean environment. To the untrained eye the animal looks a bit like a modern day shrimp, but analysis of its fossilized brain and central nervous system reveal it is more closely related to modern day spiders and scorpions. Lead author Nick Strausfeld, a neuroanatomist at the University of Arizona, has been searching for intact ancient invertebrate brains to learn when crustaceans (crabs, lobsters, crayfish, shrimp, krill) and chelicerates (horseshoe crabs, spiders, scorpions, ticks) began to differentiate. From the fossilized brain of mega-claw, which is related to modern day chelicerates, and the fossilized brain of a crustacean that he found last year, he has discovered that even 520 million years ago, crustaceans and chelicerates were already neurologically distinct. "Their brains have not changed all that much in that time," he said in an interview with the Los Angeles Times. Pulling the neurological information from the fossil was a complicated task. Strausfeld searched through thousands of fossils stored at the Yunnan Key Laboratory for Palaeobiology in southwest China to find an ancient fossil with a modern-looking nervous system.
For reasons that Strausfeld cannot explain, iron deposits had selectively accumulated in the animal's nervous system at the time of fossilization, and that let researchers image the nervous system hundreds of millions of years later. "We have no idea why the nervous system would be almost stained with these metallic deposits," he said in an interview with the Los Angeles Times. "It is helpful for our research, but we are really mystified by it." Strausfeld said he is planning to return to Yunnan University next May to continue his research. "We are going to be looking like crazy for anything that would suggest a more simple blend of these two types of brains," he said.
– Scientists digging through old fossils have identified a 520-million-year-old mega-claw with an almost completely preserved nervous system—the oldest such find ever, the LA Times reports. The specimen belongs to the Alalcomenaeus family, which is part of a larger group of "megacheirans," meaning roughly "mega-claw" or "great appendage." These creatures were notable for the claw-like limbs growing out of their heads; this particular creature was just an inch long. The specimen, found in the Yunnan Key Laboratory in China, mysteriously had iron deposits in its nervous system, making it easy to image. "We have no idea why" the iron was there, the lead researcher says. "It is helpful for our research, but we are really mystified." Researchers were hoping brain imaging could shed light on the creature's evolutionary lineage, specifically when crustaceans, like crabs and lobsters, became distinct from chelicerates, like horseshoe crabs, scorpions, and spiders. They found that the specimen's brain was distinct from a 520-million-year-old crustacean discovered there last year, indicating that the two groups were already distinct at that time, and their common ancestor must be even older, CNN explains.
let @xmath0 be the group of permutations of @xmath4 . any permutation @xmath5 has a unique cycle decomposition , which partitions the set @xmath6 into orbits under the natural action of @xmath7 . the cycle structure of @xmath7 is the integer partition of @xmath1 associated with this set partition , in other words , the ordered sizes of the cycles ( blocks of the partition ) ranked in decreasing size . it is customary not to include the fixed points of @xmath7 in this structure . for instance , the permutation @xmath8 has 3 cycles , @xmath9 , so its cycle structure is @xmath10 ( and one fixed point which does not appear in this structure ) . a conjugacy class @xmath11 is the set of permutations having a given cycle structure . let @xmath12 denote the support of @xmath13 , that is , the number of nonfixed - points of any permutation @xmath14 . in what follows we deal with the case where @xmath13 consists of a single @xmath2-cycle , in which case @xmath15 ( see , however , remark [ rem-2 ] ) . it is well known and easy to see that in this case , if @xmath2 is even , then @xmath13 generates @xmath0 , while if @xmath16 is odd , then @xmath13 generates the alternate group @xmath17 of even permutations . let @xmath18 be the continuous - time random walk associated with @xmath19 . that is , let @xmath20 be a sequence of i.i.d . elements uniformly distributed on @xmath13 , and let @xmath21 be an independent poisson process with rate 1 ; then we take @xmath22 where @xmath23 indicates the composition of the permutations @xmath24 and @xmath25 . @xmath26 is a markov chain on @xmath27 which converges to the uniform distribution @xmath28 on @xmath27 when @xmath29 is even , and to the uniform distribution on @xmath30 when @xmath31 is odd . in any case we shall write @xmath28 for that limiting distribution . we shall be interested in the mixing properties of this process as @xmath32 , as measured in terms of the total variation distance . 
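The cycle-structure convention above can be stated in code: decompose a permutation of {0, ..., n-1} into orbits and report the cycle lengths in decreasing order, dropping fixed points, exactly as in the worked example in the text (a permutation with cycles of sizes 3, 2 and 3 and one fixed point has structure [3, 3, 2]).

```python
# Cycle structure of a permutation, with fixed points omitted,
# following the convention described in the text.
def cycle_structure(perm):
    n, seen, lengths = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            j, size = i, 0
            while not seen[j]:          # walk the orbit of i
                seen[j] = True
                j = perm[j]
                size += 1
            if size > 1:                # fixed points are not recorded
                lengths.append(size)
    return sorted(lengths, reverse=True)

# (0 1 2)(3 4)(5 6 7) with 8 a fixed point
example = cycle_structure([1, 2, 0, 4, 3, 6, 7, 5, 8])
```
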
let @xmath33 be the distribution of @xmath34 on @xmath27 , and let @xmath28 be the invariant distribution of the chain . let @xmath35 where @xmath36 is the total variation distance between the state of the chain at time @xmath37 and its limiting distribution @xmath28 . ( below , we will also use the notation @xmath38 where @xmath39 and @xmath40 are collections of random variables with laws @xmath41 to mean @xmath42 . ) the main goal of this paper is to prove that the chain exhibits a sharp cutoff , in the sense that @xmath36 drops abruptly from its maximal value 1 to its minimal value 0 around a certain time @xmath43 , called the mixing time of the chain . ( see @xcite or @xcite for a general introduction to mixing times . ) note that if @xmath13 is a fixed conjugacy class of @xmath27 and @xmath44 , @xmath45 can also be considered a conjugacy class of @xmath46 by simply adding @xmath47 fixed points to any permutation @xmath48 . with this in mind , our theorem states the following : [ t : mix ] let @xmath49 be an integer , and let @xmath50 be the conjugacy class of @xmath27 corresponding to @xmath2-cycles . the continuous time random walk @xmath51 associated with @xmath52 has a cutoff at time @xmath53 , in the sense that for any @xmath54 , there exist @xmath55 large enough so that for all @xmath56 , @xmath57 as explained in section [ subsec - background ] below , this result solves a well - known conjecture formulated by several people over the course of the years . [ rem-2 ] theorem [ t : mix ] can be extended , without a significant change in the proofs , to cover the case of general fixed conjugacy classes @xmath13 , with @xmath58 independent of @xmath1 . in order to alleviate notation , we present here only the proof for @xmath2-cycles . a more delicate question , that we do not investigate , is what growth of @xmath59 is allowed so that theorem [ t : mix ] would still be true in the form @xmath60 the lower bound in ( [ tlb1 ] ) is easy . 
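The lower-bound mechanism alluded to here (and made precise later in the text for random transpositions) rests on counting fixed points: well before time (1/2) n log n the walk has not even touched many points, whereas a uniform permutation has about one fixed point on average. A small simulation sketch, with illustrative parameters, makes this visible:

```python
# Simulation sketch of the fixed-point statistic behind the lower bound:
# compose random transpositions and count fixed points of the result.
import random

def fixed_points_after(n, steps, rng):
    perm = list(range(n))
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        perm[a], perm[b] = perm[b], perm[a]   # compose with a transposition
    return sum(1 for i, x in enumerate(perm) if i == x)

rng = random.Random(1)
n, trials = 50, 300
# 25 steps is far below (n/2) log n ~ 98: many points remain untouched
early = sum(fixed_points_after(n, 25, rng) for _ in range(trials)) / trials
# 500 steps is far beyond the mixing time: close to a uniform permutation
late = sum(fixed_points_after(n, 500, rng) for _ in range(trials)) / trials
```

At the early time the mean number of fixed points is close to n/e (roughly 18 here), so the law of the walk is still far in total variation from the uniform distribution, whose mean is about 1.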
for the upper bound in ( [ tub1 ] ) , due to the birthday problem , the case @xmath61 should be fairly similar to the arguments we develop below , with adaptations in several places , for example , in the argument following ( [ eq-070110 ] ) ; we have not checked the details . things are likely to become more delicate when @xmath62 is of order @xmath63 or larger . yet , we conjecture that ( [ tub1 ] ) holds as long as . this problem has a rather long history , which we now sketch . mixing times of markov chains were studied independently by aldous @xcite and by diaconis and shahshahani @xcite at around the same time , in the early 1980s . diaconis and shahshahani @xcite , in particular , establish the existence of what has become known as the _ cutoff phenomenon _ for the composition of random transpositions . random transpositions is perhaps the simplest example of a random walk on @xmath27 and is a particular case of the walks covered in this paper , arising when the conjugacy class @xmath45 contains exactly all transpositions . the authors of @xcite obtained a version of theorem [ t : mix ] for this particular case ( with explicit choices of @xmath64 for a given @xmath65 ) . as is the case here , the hard part of the result is the upper - bound ( [ tub ] ) . remarkably , their solution involved a connection with the representation theory of @xmath27 , and uses rather delicate estimates on so - called character ratios . soon afterwards , a flurry of papers tried to generalize the results of @xcite in the direction we are taking in this paper , that is , when the step distribution is uniform over a fixed conjugacy class @xmath45 . however , the estimates on character ratios that are needed become harder and harder as @xmath29 increases . 
Flatto , Odlyzko and Wales @xcite , building on earlier work of Vershik and Kerov @xcite , obtained finer estimates on character ratios and were able to show that mixing must occur before @xmath66 for @xmath29 fixed , thus giving another proof of the Diaconis–Shahshahani result when @xmath67 . ( although this does not appear explicitly in @xcite , it is recounted in Diaconis's book @xcite , page 44 . ) improving further the estimates on character ratios , Roichman @xcite was able to prove a weak version of theorem [ t : mix ] , where it is shown that @xmath36 is small if @xmath68 for some large enough @xmath69 . in his result , @xmath29 is allowed to grow to infinity as fast as @xmath70 for any @xmath71 . to our knowledge , it is in @xcite that theorem [ t : mix ] first formally appears as a conjecture , although we have no doubt that it had been privately made before . ( the lower bound for random transpositions , which is based on counting the number of fixed points in @xmath34 , works equally well in this context and provides the conjectured correct answer in all cases . ) Lulov @xcite dedicated his Ph.D. thesis to the problem , and Lulov and Pak @xcite obtained a partial proof of the conjecture of Roichman , in the case where @xmath29 is very large , that is , greater than @xmath72 . more recently , Roussel @xcite and @xcite made some progress in the small @xmath29 case , working out the character ratio estimates to treat the case where @xmath73 . Saloff-Coste , in his survey article ( @xcite , section 9.3 ) , discusses the sort of difficulties that arise in these computations and states the conjecture again . a summary of the results discussed above is also given . see also @xcite , page 381 , where work in progress of Schlage-Puchta that overlaps the result in theorem [ t : mix ] is mentioned .
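the lower bound just mentioned is based on counting fixed points : a uniform random permutation of @xmath1 elements has approximately a Poisson(1) number of fixed points , and its expected number of fixed points equals 1 exactly for every @xmath1 . a minimal brute-force check of this exact identity ( illustrative code , not from the paper ) :

```python
from itertools import permutations

def mean_fixed_points(n):
    """Average number of fixed points over all n! permutations of {0,...,n-1}."""
    total = 0
    count = 0
    for p in permutations(range(n)):
        total += sum(1 for i, x in enumerate(p) if i == x)
        count += 1
    return total / count

# Over all permutations there are n*(n-1)! = n! fixed points in total,
# so the average is exactly 1 for every n.
print([mean_fixed_points(n) for n in range(1, 7)])  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```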
to prove theorem [ t : mix ] , it suffices to look at the cycle structure of @xmath34 and check that if @xmath74 is the number of cycles of @xmath34 of size @xmath75 for every @xmath76 , and if @xmath77 then the total variation distance between @xmath78 and @xmath79 is close to 0 , where @xmath80 is the cycle distribution of a random permutation sampled from @xmath81 . we thus study the dynamics of the cycle distribution of @xmath34 , which we view as a certain coagulation-fragmentation chain . using ideas from Schramm @xcite , it can be shown that large cycles are at equilibrium much before @xmath43 , that is , at a time of order @xmath82 . very informally speaking , the idea of the proof is the following . we focus for a moment on the case @xmath83 of random transpositions , which is the easiest to explain . the process @xmath84 may be compared to an Erdős–Rényi random graph process @xmath85 where random edges are added to the graph at rate 1 , in such a way that the cycles of the permutation are subsets of the connected components of @xmath86 . Schramm's result from @xcite then says that , if @xmath87 with @xmath88 ( so that @xmath86 has a giant component ) , then the macroscopic cycles within the giant component have relaxed to equilibrium . by an old result of Erdős and Rényi , it takes time @xmath89 for @xmath86 to be connected with probability greater than @xmath90 . by this point the giant component encompasses every vertex and thus , extrapolating Schramm's result to this time , the macroscopic cycles of @xmath34 have the correct distribution . a separate and somewhat more technical argument is needed to deal with small cycles . more formally , the proof of theorem [ t : mix ] thus proceeds in two main steps .
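the comparison with the random graph process can be illustrated concretely : compose uniform random transpositions while adding the corresponding edges to a graph on the same vertex set , and observe that every cycle of the permutation stays inside a single connected component . the following sketch ( our own illustration , with arbitrary parameters ) checks this invariant with a union-find structure :

```python
import random

def permutation_cycles(perm):
    """Cycles of a permutation given as a list, where perm[i] is the image of i."""
    seen, cycles = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        cycles.append(cyc)
    return cycles

def run_coupling(n, steps, seed=0):
    """Compose random transpositions; add each one as an edge of a graph on
    the same n vertices; check that every cycle lies in one graph component."""
    rng = random.Random(seed)
    perm = list(range(n))
    parent = list(range(n))  # union-find forest for the graph components

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]  # multiply by the transposition (i j)
        parent[find(i)] = find(j)            # add the edge {i, j}
        for cyc in permutation_cycles(perm):
            # invariant: a cycle never straddles two graph components
            assert len({find(v) for v in cyc}) == 1
    return permutation_cycles(perm)

run_coupling(n=30, steps=150)
```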
in the first step , presented in section [ smallcycles ] and culminating in proposition [ prop - small ] , we show that after time @xmath91 , the distribution of _ small cycles _ is close ( in variation distance ) to the invariant measure , where a _ small cycle _ means one smaller than a suitably chosen threshold approximately equal to @xmath92 . this is achieved by combining a queueing - system argument ( whereby initial discrepancies are cleared by time slightly larger than @xmath43 and equilibrium is achieved ) with a priori rough estimates on the decay of mass in small cycles ( section [ subsec - verif ] ) . in the second step , contained in section [ sec - schramm ] , a variant of Schramm's coupling from @xcite is presented , which allows us to couple the chain after time @xmath91 to a chain started from equilibrium , within time of order @xmath93 , if all small cycles agree initially . in this section we prove the following proposition . let @xmath94 be the number of cycles of size @xmath75 of the permutation @xmath34 , where @xmath84 evolves according to random @xmath2-cycles ( where @xmath95 ) , but does not necessarily start at the identity permutation . let @xmath96 denote independent Poisson random variables with mean @xmath97 . fix @xmath98 and let @xmath99 be the closest dyadic integer to @xmath100 . we think of cycles smaller than @xmath101 as being small , and big otherwise . let @xmath102 , @xmath103 and @xmath104 introduce the stopping time @xmath105 therefore , prior to @xmath106 , the total number of small cycles in each dyadic strip @xmath107 , @xmath108 @xmath109 never exceeds @xmath110 . [ p : step2 ] suppose that @xmath111 as @xmath112 , and that initially , @xmath113 for all @xmath114 , for some @xmath115 independent of @xmath116 or @xmath1 .
then for any sequence @xmath117 such that @xmath118 as @xmath119 and @xmath120 , @xmath121 in particular , under the assumptions of proposition [ p : step2 ] , for any @xmath54 there is a @xmath122 such that for all @xmath1 large , @xmath123 in sections [ subsec - verif ] and [ subsec - conclusionrev ] , proposition [ p : step2 ] is applied to the chain after time roughly @xmath124 , at which point the initial conditions @xmath125 satisfy ( [ assumptionstep2prime ] ) ( with high probability ) . proof of proposition [ p : step2 ] : the proof of this proposition relies on the analysis of the dynamics of the small cycles , where each step of the dynamics corresponds to an application of a @xmath2-cycle , by viewing it as a coagulation-fragmentation process . to start with , note that every @xmath2-cycle may be decomposed as a product of @xmath126 transpositions @xmath127 thus the application of a @xmath2-cycle may be decomposed into the application of @xmath126 transpositions : namely , applying @xmath128 is the same as first applying the transposition @xmath129 followed by @xmath130 and so on until @xmath131 . whenever one of those transpositions is applied , say @xmath132 , this can yield either a fragmentation or a coagulation , depending on whether @xmath133 and @xmath134 are in the same cycle or not at this time . if they are , say if @xmath135 ( where @xmath76 and @xmath7 denotes the permutation at this time ) , then the cycle @xmath136 containing @xmath133 and @xmath134 splits into @xmath137 and everything else , that is , @xmath138 . if they are in different cycles @xmath136 and @xmath139 , then the two cycles merge . to track the evolution of cycles , we color the cycles with different colors ( blue , red or black ) according ( roughly ) to the following rules . the blue cycles will be the large ones , and the small ones consist of red and black .
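the decomposition just described can be checked mechanically . the sketch below ( illustrative code ; the composition convention used is one of the two possible choices , and either order yields a single cycle on the chosen points ) applies the @xmath126 transpositions in turn and verifies that the product is one cycle through the chosen points , with all other points fixed :

```python
def product_of_path_transpositions(n, points):
    """Starting from the identity on {0,...,n-1}, apply the transpositions
    (x1 x2), (x2 x3), ..., (x_{k-1} x_k) one after another."""
    perm = list(range(n))
    for a, b in zip(points, points[1:]):
        perm[a], perm[b] = perm[b], perm[a]  # compose with the transposition (a b)
    return perm

def cycle_lengths(perm):
    """Sorted multiset of cycle lengths of a permutation given as a list."""
    seen, lens = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        length, x = 0, s
        while x not in seen:
            seen.add(x)
            length += 1
            x = perm[x]
        lens.append(length)
    return sorted(lens)

# A 4-cycle through the points 0, 2, 4, 1 inside S_6: the product of the
# three transpositions is one cycle of length 4, and two points stay fixed.
print(cycle_lengths(product_of_path_transpositions(6, [0, 2, 4, 1])))  # [1, 1, 4]
```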
essentially , red cycles are those which undergo a `` normal '' evolution , while the black ones are those which have experienced some kind of error . by `` normal evolution , '' we mean the following : in a given step , one small cycle is generated by fragmentation of a blue cycle . it is the first small cycle that is involved in this step . in a later step of the random walk , this cycle coagulates with a large cycle and thus becomes large again . if at any point of this story , something unexpected happens ( e.g. , this cycle gets fragmented instead of coagulating with a large cycle , or coagulates with another small cycle ) we will color it black . in addition , we introduce ghost cycles to compensate for this sort of error . we now describe this procedure more precisely . we start by coloring every cycle of the permutation @xmath140 which is larger than @xmath141 blue . we denote by @xmath142 the fraction of mass contained in blue cycles , that is , @xmath143 note that by definition of @xmath106 , @xmath144 for all @xmath145 . we now color the cycles which are smaller than @xmath101 either red or black according to the following dynamics . suppose we are applying a certain @xmath2-cycle @xmath146 , which we write as a product of @xmath126 transpositions @xmath147 ( note that we require that @xmath148 for @xmath149 ) . _ red cycles . _ assume that a blue cycle is fragmented and one of the pieces is small , and that this transposition is the first one in the application of the @xmath2-cycle @xmath150 to involve a small cycle . in that case ( and only in that case ) , we color it red . red cycles may depart through coagulation or fragmentation . a coagulation with a blue cycle , if it is the first in the step and no small cycles were created in this step prior to it , will be called _ lawful_. any other departure will be called _ unlawful_. 
if a blue cycle breaks up in a way that would create a red cycle and both cycles created are small ( which may happen if the size of the cycle is between @xmath101 and @xmath151 ) , then we color the smaller one red and the larger one black , with a random rule in the case of ties . _ black cycles . _ black cycles are created in one of two ways . first , any red cycle that departs in an unlawful fashion and stays small becomes black . further , if the transposition @xmath152 is not the first transposition in this step to create a small cycle from a blue cycle , or if it is but a previous transposition in the step involved a small cycle , then the small cycle(s ) created is colored black . now , assume that @xmath132 involves only cycles which are smaller than @xmath101 : this may be a fragmentation producing two new cycles , or a merging of two cycles producing one new cycle . in this case , we color the new cycle(s ) black , no matter what the initial color of the cycles , except if this operation is a coagulation _ and _ the size of this new cycle exceeds @xmath101 , in which case it is colored blue again . thus , black cycles are created through either coagulations of small parts or fragmentation of either small or large parts , but black cycles disappear only through coagulation . we aim to analyze the dynamics of the red and black system , and the idea is that the dynamics of this system are essentially dominated by that of the red cycles , where the occurrence of black cycles is an error that we aim to control . let @xmath153 be the number of red and black cycles , respectively , of size @xmath75 at time @xmath37 . it will be helpful to introduce another type of cycle , called ghost cycles , which are nonexistent cycles which we add for counting purposes : the point is that we do not want to touch more than one red cycle in any given step . thus , for any red cycle departing in an unlawful way , we compensate for it by creating a ghost cycle of the same size .
for instance , suppose two red cycles @xmath154 and @xmath155 coagulate ( this could form a blue or a black cycle ) . then we leave in the place of @xmath154 and @xmath155 two ghost cycles @xmath156 and @xmath157 of sizes identical to @xmath154 and @xmath155 . an exception to this rule is that if , during a step , a transposition creates a small red cycle by fragmentation of a blue cycle , and later within the same step this red cycle either is immediately fragmented again in the next transposition or coagulates with another red or black cycle and remains small , then it becomes black as above but we do not leave a ghost in its place . finally , we also declare that every ghost cycle of size @xmath75 is killed independently of anything else at an instantaneous rate which is precisely given by @xmath158 , where @xmath159 is a random nonnegative number ( depending on the state of the system at time @xmath37 ) which will be defined below in ( [ mu ] ) and corresponds to the rate of lawful departures of red cycles . to summarize , we begin at time @xmath160 with all large cycles colored blue and all small cycles colored red . for every step consisting of @xmath2 transpositions , we run the following algorithm for the coloring of small cycles and creation of ghost cycles ( see table [ table1 ] ) .

* if the transposition is a fragmentation , go to ( f ) ; otherwise , go to ( c ) .
* ( f ) if the fragmentation is of a small cycle @xmath128 of length @xmath161 , go to ( fs ) ; otherwise , go to ( fl ) .
* ( fs ) color the resulting small cycles black . create a ghost cycle of length @xmath161 , except if @xmath128 was created in the previous transposition of the current step and is red . _ finish_.
* ( fl ) if the fragmentation creates one or two small cycles , and this transposition is the first in the step to either create or involve a small cycle , color the smallest small cycle created red . all other small cycles created are colored black . do not create ghost cycles . _ finish_.
* ( c ) if the coagulation involves a blue cycle , go to ( cl ) ; otherwise , go to ( cs ) .
* ( cl ) if the blue cycle coagulates with a red cycle , and this is not the first transposition in the step that involves a small cycle , then create a ghost cycle ; otherwise , do not create a ghost cycle . if a small cycle remains after the coagulation , it is colored black . _ finish . _
* ( cs ) if the coagulation involved two red cycles of size @xmath161 and @xmath162 , create two ghost cycles of sizes @xmath161 and @xmath162 , unless one of these two red cycles ( say of size @xmath162 ) was created in the current step , in which case create only one ghost cycle of size @xmath161 . _ finish . _

let @xmath163 denote the number of ghost cycles of size @xmath75 at time @xmath37 , and let @xmath164 , which counts the number of red and ghost cycles of size @xmath75 . our goal is twofold . first , we want to show that @xmath165 is close in total variation distance to @xmath166 , and second , that at time @xmath167 the probability that there is any black cycle or a ghost cycle converges to 0 as @xmath112 . note that with our definitions , at each step at most one red cycle can be created , and at most one red cycle can disappear without being compensated by the creation of a ghost . furthermore , these two events cannot occur in the same step . [ l : yeq ] assume ( [ assumptionstep2 ] ) as well as ( [ assumptionstep2prime ] ) , and let @xmath167 be as in proposition [ p : step2 ] . then @xmath168 the idea is to observe that @xmath169 has approximately the following dynamics : @xmath170 and that @xmath171 , so that @xmath172 is approximately a system of @xmath173 queues where the arrival rate is @xmath174 and the departure rate of every customer is @xmath175 . the equilibrium distribution of @xmath172 is thus approximately Poisson with parameter the ratio of the two rates , that is , @xmath97 .
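the Poisson equilibrium claimed by this heuristic can be verified directly for an immigration - death queue in which customers arrive at some rate and each present customer departs independently at another rate : the Poisson distribution whose parameter is the ratio of the two rates solves the stationarity equations . a numerical check of this standard fact ( the rates below are arbitrary illustrative values , not quantities from the paper ) :

```python
import math

def stationarity_residual(lam, mu, nmax):
    """Largest violation of the stationarity equations of the
    immigration-death queue (arrival rate lam, per-customer departure
    rate mu) by the Poisson(lam/mu) distribution, checked away from the
    truncation boundary nmax."""
    rho = lam / mu
    pi = [math.exp(-rho) * rho ** n / math.factorial(n) for n in range(nmax + 1)]
    residuals = []
    for n in range(nmax):
        # balance at state n: inflow from states n-1 and n+1 equals outflow from n
        inflow = (lam * pi[n - 1] if n > 0 else 0.0) + (n + 1) * mu * pi[n + 1]
        outflow = (lam + n * mu) * pi[n]
        residuals.append(abs(inflow - outflow))
    return max(residuals)

print(stationarity_residual(lam=1.0, mu=3.0, nmax=40))  # numerically zero
```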
the number of initial customers in the queues is , by assumption ( [ assumptionstep2 ] ) , small enough so that by time @xmath176 they are all gone , and thus the queue has reached equilibrium . we now make this heuristic precise . to increase @xmath169 by 1 , that is , to create a red cycle , one needs to specify the @xmath116th transposition , @xmath177 , of the @xmath2-cycle at which it is created . the first point @xmath178 of the @xmath2-cycle must fall somewhere in a blue cycle ( which has probability @xmath179 ) . say that @xmath180 , with @xmath154 a blue cycle . in order to create a cycle of size exactly @xmath75 at this transposition , the second point @xmath181 must fall at either of exactly _ two _ places within @xmath154 : either @xmath182 or @xmath183 . however , note that if @xmath184 and @xmath185 , then the next transposition is guaranteed to involve the newly formed cycle , either to reabsorb it in the blue cycles , or to turn it into a black cycle through coalescence with another small cycle or fragmentation . either way , this newly formed cycle does not eventually lead to an increase in @xmath169 since , by our conventions , we do not leave a ghost in its place . on the other hand , if @xmath186 then the newly formed red cycle will stay on as a red or a ghost cycle in the next transpositions of the application of the cycle @xmath128 . whether it stays as a ghost or a red cycle does not change the value of @xmath169 , and therefore this event leads to a net increase of @xmath169 by 1 . this is true for all of the first @xmath187 transpositions of the @xmath2-cycle @xmath128 , but not for the last one , where both @xmath188 and @xmath189 will create a red cycle of size @xmath75 . it follows from this analysis that the total rate @xmath190 at which @xmath169 increases by 1 satisfies @xmath191 to get a lower bound , observe that for @xmath145 , @xmath192 at the beginning of the step .
when a @xmath2-cycle is applied and we decompose it into elementary transpositions , the value @xmath142 for each of the transpositions may take different successive values which we denote by . however , note that at each such transposition , @xmath179 can only change by at most @xmath193 . thus it is also the case that for all @xmath177 , @xmath194 . therefore , the probability that a fragmentation of a blue cycle does not create any small cycle is also bounded below by @xmath195 it thus follows that the total rate @xmath190 is bounded below by @xmath196 of course , by this we mean that the @xmath197 are nonnegative jump processes whose jumps are of size @xmath198 , and that if @xmath199 is the filtration generated by the entire process up to time @xmath37 , then @xmath200 almost surely on the event @xmath201 . as for negative jumps , we have that for @xmath202 , @xmath203 where @xmath159 depends on the partition and satisfies the estimates @xmath204 where @xmath205 the reason for this is as follows . to decrease @xmath169 by 1 by decreasing @xmath206 , note that the only way to get rid of a red cycle without creating a ghost is to coagulate it with a blue cycle at the @xmath116th transposition , @xmath177 , with no other transpositions creating small cycles . the probability of this event is bounded above by @xmath207 and , with @xmath208 as above , bounded below by @xmath209 therefore , if in addition ghosts are each killed independently with rate @xmath159 as above , then ( [ jump- ] ) holds . more generally , if @xmath210 and @xmath211 are pairwise distinct integers , then we may consider the vector @xmath212 . if its current state is @xmath213 , then it may make transitions to @xmath214 where the two vectors @xmath215 and @xmath216 differ by exactly one coordinate ( say the @xmath116th one ) and @xmath217 ( since only one queue @xmath169 can change at any time step , thanks to our coloring rules ) . 
also , writing @xmath218 for the vector @xmath212 , we find @xmath219 these observations show that we can compare @xmath220 to a system of independent Markov queues @xmath221 with respect to a common filtration @xmath199 , with no simultaneous jumps almost surely , and such that the arrival rate of each @xmath169 is @xmath222 , and the departure rate of each client in @xmath169 is @xmath223 . we may also define a system of queues @xmath224 by accepting every new client of @xmath225 with probability @xmath226 and rejecting it otherwise . subsequently , each accepted client tries to depart at a rate , or when it departs in @xmath225 , whichever comes first . then one can construct all three processes @xmath224 , @xmath227 and @xmath228 on a common probability space in such a way that @xmath229 for all @xmath230 . note that if @xmath231 denote independent Poisson random variables with mean @xmath232 , then @xmath233 forms an invariant distribution for the system @xmath234 . let @xmath235 denote the system of Markov queues @xmath225 started from its equilibrium distribution @xmath231 . then @xmath236 and @xmath237 can be coupled as usual by taking each coordinate to be equal after the first time that they coincide . in particular , once all the initial customers of @xmath225 and of @xmath238 have departed ( let us call @xmath239 this time ) , then the two processes @xmath228 and @xmath231 are identical . we now check that this happens before @xmath117 with high probability . it is an easy exercise to check this for @xmath238 , so we focus on @xmath240 . to see this , note that by ( [ assumptionstep2prime ] ) , there are no more than @xmath241 customers in every strip @xmath242 initially if @xmath243 moreover , each customer departs with rate at least @xmath244 when in this strip . thus the time @xmath245 it takes for all initial customers of @xmath246 in strip @xmath247 to depart is dominated by @xmath248 , where @xmath249 is a collection of i.i.d .
standard exponential random variables . hence @xmath250 for larger strips , we use the crude and obvious bound @xmath251 if @xmath252 . moreover , each customer departs at rate @xmath244 with @xmath253 . thus , in distribution , @xmath254 so that @xmath255 [ we are using here that @xmath256 for all @xmath257 large enough ] . since we obviously have @xmath258 , we conclude @xmath259 where @xmath260 depends solely on @xmath261 . by Markov's inequality and since @xmath118 , we conclude that @xmath262 with high probability . we now claim that @xmath263 with high probability . to see this , we note that at equilibrium @xmath264 . therefore , @xmath265 since we have already checked that @xmath266 as @xmath267 , this shows that on the event @xmath268 and @xmath269 ( an event of probability asymptotically one ) , @xmath270 can be coupled to @xmath237 which has the same law as @xmath231 . thus @xmath271 as @xmath112 . on the other hand , we claim that @xmath272 also . indeed , since @xmath273 and @xmath274 are both independent Poisson random variables but with different parameters , it is easy to see and well known that @xmath275 as @xmath276 . by the triangle inequality and ( [ yz+ ] ) , this completes the proof of lemma [ l : yeq ] . [ l : black1 ] let @xmath167 be as in proposition [ p : step2 ] . then , with probability tending to 1 as @xmath112 , @xmath277 for all @xmath278 . let us consider black cycles in scale @xmath116 , that is , those whose size @xmath75 satisfies @xmath279 with @xmath280 . by assumption ( [ assumptionstep2 ] ) , before time @xmath37 the total mass of small cycles never exceeds @xmath281 with high probability .
thus the rate at which a black cycle in scale @xmath116 is generated by fragmentation of a red cycle ( or from another black cycle ) is at most @xmath282 black cycles can also be generated directly by fragmenting a blue cycle and subsequently fragmenting either the small cycle thus created or some other blue cycle in the rest of the step . the rate at which a black fragment in scale @xmath116 occurs in this fashion is thus smaller than @xmath283 finally , one needs to deal with black cycles that arise through the fragmentation of a blue cycle whose size at the time of the fragmentation is between @xmath101 and @xmath151 ( thus potentially leaving two small cycles instead of one ) . let @xmath284 . we know that , while @xmath285 , @xmath286 . in between steps , the number of cycles in scale @xmath287 cannot ever increase by more than @xmath288 . thus the rate at which black cycles occur in this fashion at scale @xmath116 is at most @xmath289 this combined rate is therefore smaller than @xmath290 . note that it may be the case that several black cycles are produced in one step , although this number may not exceed @xmath288 . on the other hand , every black cycle departs at a rate which is at least @xmath291 since @xmath292 for @xmath145 , say . ( note that when two black cycles coalesce , the new black cycle has an even greater departure rate than either piece before the coalescence , so ignoring these events can only stochastically increase the total number of black cycles . ) thus we see that the number of black cycles in this scale is dominated by a Markov chain @xmath293 where the rate of jumps from @xmath215 to @xmath294 is @xmath295 and the rate of jumps from @xmath215 to @xmath296 is @xmath297 , and @xmath298 . speeding up time by @xmath299 , @xmath300 becomes a Markov chain @xmath301 whose rates are , respectively , @xmath302 and 1 , and where @xmath303 . we are interested in @xmath304 note that when there is a jump of size @xmath288 ( i.e.
, when @xmath288 individuals are born ) the time it takes for them to all die in this new time - scale is a random variable @xmath305 which has the same distribution as @xmath306 where @xmath307 are i.i.d . standard exponential random variables . decomposing on possible birth times of individuals , and noting that @xmath308 by a simple union bound , we see that @xmath309 there are @xmath310 possible scales to sum on , so by a union bound the probability that there is any black cycle at time @xmath37 is , for large @xmath1 , smaller than or equal to @xmath311 . the case of ghost particles is treated as follows . [ l : ghost ] let @xmath167 be as in proposition [ p : step2 ] . then , with probability tending to 1 as @xmath112 , @xmath312 for all @xmath278 . suppose a red cycle is created , and consider what happens to it the next time it is touched . with probability at least @xmath313 this will be to coagulate with a blue cycle with no other small cycle being touched in that step , in which case this cycle is not transformed into a ghost . however , in other cases it might become a ghost . it follows that any given cycle in @xmath169 is in fact a ghost with probability at most @xmath314 it follows that ( using the notation from lemma [ l : yeq ] ) @xmath315 which tends to 0 as @xmath112 . this completes the proof of lemma [ l : ghost ] . _ completion of the proof of proposition _ [ p : step2 ] : since @xmath316 , we get the proposition by combining lemmas [ l : yeq ] , [ l : black1 ] and [ l : ghost ] . in order for proposition [ p : step2 ] to be useful , we need to show that assumptions ( [ assumptionstep2 ] ) and ( [ assumptionstep2prime ] ) indeed hold with large enough probability . this will be accomplished in propositions [ rest ] and [ restprime ] below . recall the variable @xmath317 [ see ( [ eq - mj ] ) ] , and let @xmath318 } m_j(t ) < n2^{-j}/(\log n)^3\bigr\}.\ ] ] recall that @xmath101 is the dyadic integer closest to @xmath319 . 
we begin with the following lemma . its proof is a warm - up to the subsequent analysis . [ lemcala ] let @xmath320 then , @xmath321 it is convenient to reformulate the cycle chain as a chain that , at independent exponential times ( with parameter @xmath2 ) , makes a random transposition , where the @xmath161th transposition is chosen uniformly at random ( if @xmath322 is an integer multiple of @xmath2 ) , or uniformly among those transpositions that involve the ending point of the previous transposition and that would result in a legitimate @xmath2-cycle ( i.e. , no repetitions are allowed ) if @xmath322 is not an integer multiple of @xmath2 . we begin with @xmath323 . note that @xmath324 and that @xmath325 decreases by @xmath326 with rate at least @xmath327 and increases , by at most @xmath328 , with rate bounded above by @xmath329 . in particular , by time @xmath330 , the number of increase events is dominated by twice a Poisson variable of parameter @xmath331 . thus , with probability bounded below by @xmath332 , at most @xmath333 parts of size @xmath326 have been born . on this event , @xmath334 where @xmath335 is a process with death only at rate @xmath336 . in particular , the time of the @xmath337th death in @xmath335 is distributed like the random variable @xmath338 where the @xmath339 are independent exponential random variables of parameter @xmath340 . it follows that @xmath341 and the Chebyshev bound gives , with @xmath342 , @xmath343 for an appropriate constant @xmath128 , by choosing @xmath344 . we thus conclude that @xmath345 we continue on the event @xmath346 . we consider the process @xmath347 . by definition @xmath348 . the difference in the analysis of @xmath349 and @xmath325 lies in the fact that now , @xmath349 may increase due to a merging of two parts of size @xmath326 , and the departure rate is now bounded below by @xmath350 .
note that by time @xmath330 , the total number of arrivals due to a merging of parts of size @xmath326 has mean bounded by @xmath351 . repeating the analysis concerning @xmath352 , we conclude similarly that @xmath353 the analysis concerning @xmath354 proceeds with one important difference . let @xmath355 , @xmath356 , and set @xmath357 . now , @xmath358 can increase due to the merging of a part of size @xmath359 with a part of size smaller than @xmath360 . on @xmath361 , this has rate bounded above by @xmath362 one can brutally bound the total number of such arrivals , but such a bound is not useful . instead , we use the definition of the events @xmath363 , which allow one to control the number of arrivals `` from below . '' indeed , note that the rate of departures @xmath364 is bounded below by @xmath365_+(1 - 1/(\log n)^2)/n$ ] ( because the total mass below @xmath366 at times @xmath367 $ ] is , on @xmath361 , bounded above by @xmath368 ) . thus , when @xmath369 , the rate of departure @xmath370 . analyzing this simple birth-death chain , one concludes that @xmath371 since @xmath372 , this completes the proof . an important corollary is the following control on the total mass of large parts . [ cor - masstop ] let @xmath373 . then , @xmath374 } m_\chi(t ) < n\biggl(1-\frac{1}{(\log n)^2}\biggr)\biggr)=0.\ ] ] the next step is the following . [ firstr ] @xmath375 set @xmath376 . then , @xmath377 the proof of lemma [ firstr ] , while conceptually simple , requires the introduction of some machinery and thus is deferred to the end of this subsection . equipped with lemma [ firstr ] , we can complete the proof of the following proposition . [ rest ] with notation as above , @xmath378 } \max_{j=0}^{\log_2k+1 } m_j(t)>(\log n)^6/2\biggr)=0.\ ] ] let @xmath379 . because of lemma [ firstr ] , it is enough to consider @xmath354 for @xmath380 . we begin by considering @xmath381 .
let @xmath382 denote the intersection of @xmath383 with the complement of the event inside the probability in corollary [ cor - masstop ] . on the event @xmath384 , for @xmath385:= t_r$ ] , the rate of arrivals due to merging of parts smaller than @xmath386 is bounded above by @xmath387 . the rate of arrivals due to parts larger than @xmath388 is bounded above by @xmath389 , and the jump is no more than 2 . thus , the total rate of arrival is bounded above by @xmath390 . the rate of departure , on the other hand , is , due to corollary [ cor - masstop ] , bounded below by @xmath391 . thus , for @xmath392 , the difference between the departure rate and the arrival rate is bounded below by @xmath393 . by definition , define @xmath395 . let @xmath396 } m_{r+1}(t)<\log n\}$ ] . then , reasoning as in the proof of lemma [ lemcala ] , we find that @xmath397 let @xmath398 . one proceeds by induction . letting @xmath399,@xmath400 } m_{r+j}(t)<\log n\}$ ] and @xmath401 , we obtain from the same analysis that for @xmath402 , @xmath403 thus , @xmath404 , while @xmath405 $ ] . this completes the proof , since @xmath406 . while a proof could be given in the spirit of the proof of lemma [ lemcala ] , we prefer to present a conceptually simple proof based on comparison with the random @xmath2-regular hypergraph . this coupling is analogous to the usual coupling with an Erdős–Rényi random graph ( see , e.g. , @xcite and @xcite ) . toward this end , we need the following definitions . [ def - hyper ] a _ @xmath2-regular hypergraph _ is a pair @xmath407 where @xmath408 is a ( finite ) collection of vertices , and @xmath409 is a collection of subsets of @xmath408 of size @xmath2 . the _ random _ hypergraph @xmath410 is defined as the hypergraph consisting of @xmath411 , with each subset @xmath412 of @xmath408 with @xmath413 taken independently to belong to @xmath410 with probability @xmath414 .
let @xmath86 denote the random @xmath2-hypergraph obtained by taking @xmath411 and taking @xmath409 to consist of the @xmath2-hyperedges corresponding to the @xmath2-cycles @xmath415 of the random walk @xmath34 . it is immediate to check that @xmath86 is distributed like @xmath416 with @xmath417 [ def - hyper1 ] a _ @xmath2-hypertree _ with @xmath412 hyperedges in a @xmath2-regular hypergraph @xmath418 is a connected component of @xmath418 with @xmath419 vertices . ( pictorially , a @xmath2-hypertree corresponds to a standard tree with hyperedges , where any two hyperedges have at most one vertex in common . ) @xmath2-hypertrees can be easily enumerated , as in the following , which is lemma 1 of @xcite . the number of @xmath2-hypertrees with @xmath75 ( labeled ) vertices is @xmath420!i^{h-1}}{h ! ( ( k-1)!)^h},\qquad h\geq0,\ ] ] where @xmath412 is the number of hyperedges and thus @xmath419 . the next lemma controls the number of @xmath2-hypertrees with a prescribed number of edges in @xmath86 . [ cont - tree ] let @xmath421 then , @xmath422 } \mathcal{d}_{t,(\log n)^2}\biggr ) { \mathop{\longrightarrow}_{n\to\infty } } 1 .\ ] ] let @xmath423 $ ] and @xmath424 . by monotonicity , it is enough to check that @xmath425 note that , with @xmath419 , and adopting as a convention @xmath426 when @xmath427 , @xmath428 [ indeed recall that if @xmath429 is a subset of @xmath4 comprising @xmath75 elements , then disconnecting @xmath429 from the rest of @xmath430 requires closing exactly @xmath431 hyperedges , while @xmath432 is the number of hyperedges that need to be closed inside @xmath429 for it to be a hypertree . 
] we can now provide the following proof : proof of lemma [ firstr ] at time @xmath37 , @xmath433 consists of cycles that have been obtained from the coagulation of cycles that have never fragmented during the evolution by time @xmath37 , denoted @xmath434 , and of cycles that have been obtained from cycles that have fragmented and created a part of size less than or equal to @xmath75 , denoted @xmath435 . note that @xmath434 is dominated by the number of @xmath2-hypertrees with @xmath412 edges in @xmath86 , where @xmath436 . by lemma [ cont - tree ] , this is bounded above by @xmath437 with high probability for all @xmath438 . on the other hand , the rate of creation by fragmentation of cycles of size @xmath75 is bounded above by @xmath439 , and hence by time @xmath330 , with probability approaching @xmath326 , no more than @xmath437 cycles of size @xmath75 have been created , for all @xmath440 . we thus conclude that with probability tending to @xmath326 , we have , with @xmath423 $ ] , @xmath441 } n_i^f(t ) \leq(\log n)^{3.1}.\ ] ] this yields the lemma , since for @xmath442 , @xmath443 we now prove that at time @xmath444 , the assumption ( [ assumptionstep2prime ] ) [ with @xmath125 replaced by @xmath445 ] is satisfied , with high probability . [ restprime ] for every @xmath54 there exist @xmath446 and @xmath447 such that for @xmath448 , @xmath449 consider first the time @xmath450 . [ restprime - u ] with probability approaching 1 as @xmath32 , we have @xmath451 for all @xmath452 . as in the proof of lemma [ firstr ] , split @xmath354 into two components @xmath453 and @xmath454 . note that the rate at which a fragment of size less than @xmath455 is produced is smaller than @xmath456 , so for any @xmath457 , @xmath458 .
the probability that such a poisson random variable is more than twice its expectation is ( by standard large deviation bounds ) smaller than @xmath459 for some @xmath460 , so summing over @xmath461 values of @xmath116 we easily obtain that with high probability , @xmath462 for all @xmath463 . it remains to show that @xmath464 for all @xmath463 with high probability . to deal with this part , note that if @xmath465 denotes the number of hypertrees with @xmath412 hyperedges in @xmath466 , then @xmath467 where @xmath468 is the number of vertices . reasoning as in ( [ et_h ] ) , we compute after simplifications [ recalling that @xmath469 and @xmath470 ] that , for @xmath471 , @xmath472 \le \frac{n ( \log n)^h}{h!\,i} ( 1-p_u)^{i{n - i \choose k-1}} \le \frac{n^{1-i}(\log n)^{1+hk}}{h!\,i} . thus summing over @xmath473 to @xmath474 , we conclude by markov's inequality that @xmath475 for all @xmath476 with high probability . for @xmath477 or @xmath427 , we get from ( [ boundet_h ] ) @xmath478 computing the variance is easy : writing @xmath479 , we get @xmath480 but note that @xmath481 so @xmath482 thus by chebyshev's inequality , @xmath483 as @xmath112 . this proves the lemma . with this lemma we now complete the proof of proposition [ restprime ] . we compare @xmath484 to independent queues as follows . by proposition [ rest ] , on an event of high probability , during the interval @xmath485 $ ] the rate at which some two cycles of size smaller than @xmath486 coagulate is smaller than @xmath487 , so the probability that this happens during this interval of time is @xmath488 . likewise , the rate at which some cluster smaller than @xmath486 will fragment is at most @xmath489 , so the probability that this happens during the interval @xmath490 $ ] is @xmath488 .
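the poisson large deviation bound invoked above can be checked numerically : by a standard chernoff argument , P(X >= a*lambda) <= exp(-lambda*(a*log(a) - a + 1)) for a > 1 , so the probability of exceeding twice the mean decays exponentially in the mean . a small python check of the exact tail against this bound ( function names are illustrative ) :

```python
from math import exp, log, factorial

def poisson_tail(lam, m):
    """Exact P(X >= m) for X ~ Poisson(lam), via the complementary sum."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(m))

def poisson_chernoff(lam, a):
    """Chernoff bound P(X >= a*lam) <= exp(-lam*(a*log(a) - a + 1)), a > 1."""
    return exp(-lam * (a * log(a) - a + 1))
```

with mean 20 , the bound for exceeding 40 is exp(-20*(2*log(2)-1)) , already below one in a thousand , and the exact tail is smaller still .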
now , aside from rejecting any @xmath2-cycle that would create such a transition , the only possible transitions for @xmath317 are increases by 1 ( through the fragmentation of a component larger than @xmath491 ) and decreases by 1 ( through coagulation with a cycle larger than @xmath486 ) . the respective rates of these transitions are , as in ( [ lambda+ ] ) , at most @xmath492 , and at least @xmath493 as in ( [ mu+ ] ) . this can be compared to a queue where both the departure rate and the arrival rate are equal to @xmath222 , say @xmath358 . the difference between @xmath354 and @xmath358 is that some of the customers having left in @xmath358 might not have left yet in @xmath354 . excluding the initial customers , a total of @xmath494 customers arrive in the queue @xmath358 during the interval @xmath490 $ ] , so the probability that any one of those customers has not yet left by time @xmath43 in @xmath354 given that it did leave in @xmath495 is no more than @xmath496 , where the constants implicit in @xmath497 do not depend on @xmath116 or @xmath1 . thus with probability greater than @xmath498 , there is no difference between @xmath499 and @xmath500 . moreover , @xmath501 where @xmath502 is the total number of initial customers that have not departed yet by time @xmath43 . using lemma [ restprime - u ] , @xmath503 where @xmath504 is a collection of i.i.d . standard exponential random variables . using the independence of the queues @xmath358 , in combination with ( [ boundmj1 ] ) and ( [ boundrj ] ) as well as standard large deviations for poisson random variables , the proposition follows immediately . combining propositions [ p : step2 ] and [ rest ] , and using the notation introduced in the beginning of this section , we have proved the following . fix @xmath54 . then there is a @xmath122 such that with @xmath505 , and all large @xmath1 , @xmath506 we now deduce the following : [ prop - small ] fix @xmath54 .
then there is a @xmath122 such that with @xmath505 , and all large @xmath1 , @xmath507 where @xmath508 is the cycle distribution of a random permutation sampled according to the invariant distribution @xmath28 . by ( [ conclsmall0 ] ) and the triangle inequality , all that is needed is to show that @xmath509 . whenever @xmath2 is even , and thus @xmath28 is uniform on @xmath27 , ( [ conclsmall2 ] ) is a classical result of diaconis and pitman and of barbour , with an explicit upper bound of @xmath510 ( see @xcite or the discussions around @xcite , theorem 2 , and @xcite , theorem 4.18 ) . in case @xmath2 is odd , @xmath28 is uniform on @xmath30 . a sample @xmath24 from @xmath28 can be obtained from a sample @xmath25 of the uniform measure on @xmath27 using the following procedure . if @xmath25 is even , take @xmath511 , otherwise let @xmath512 where @xmath513 is some fixed transposition [ say @xmath514 ] . the probability that the collection of small cycles in @xmath24 differs from the corresponding one in @xmath25 is bounded above by @xmath515 , which completes the proof . fix @xmath54 and @xmath516 . recall that @xmath101 is the closest dyadic integer to @xmath517 and that a cycle is called small if its size is smaller than @xmath101 . for @xmath1 large , let @xmath518 . we know by the previous section ( see proposition [ prop - small ] ) that at this time , for @xmath1 large , the distribution of the small cycles of the permutation @xmath34 is arbitrarily close ( variational distance smaller than @xmath65 ) to that of a ( uniformly chosen ) random permutation @xmath519 .
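the parity trick just described ( sample uniformly and , if the sign is wrong , compose with one fixed transposition ) is easy to implement . a python sketch , with the sign computed from the cycle count ; helper names are illustrative :

```python
import random

def permutation_sign(perm):
    """Sign of a permutation of {0,...,n-1}: (-1)**(n - number of cycles)."""
    n, seen, cycles = len(perm), [False] * len(perm), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return 1 if (n - cycles) % 2 == 0 else -1

def sample_with_parity(n, sign, rng=None):
    """Uniform permutation of prescribed sign: sample uniformly and, if the
    sign is wrong, compose with the fixed transposition swapping slots 0, 1."""
    rng = rng or random.Random(1)
    perm = list(range(n))
    rng.shuffle(perm)
    if permutation_sign(perm) != sign:
        perm[0], perm[1] = perm[1], perm[0]  # flips the sign
    return perm
```

composing with a fixed transposition is a bijection between the two cosets , so the conditioned sample remains uniform on the coset of the prescribed parity .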
therefore we can find a coupling of @xmath520 and @xmath519 in such a way that @xmath521 we can now provide the following proof : proof of theorem [ t : mix ] we will construct an evolution of @xmath519 , denoted @xmath522 , that follows the random @xmath2-cycle dynamic ( and hence , @xmath522 has cycle structure whose law coincides with the law of the cycle structure of a uniformly chosen permutation , at all times ) . the idea is that with small cycles being the hardest to mix , coupling @xmath523 and @xmath522 will now take very little time . to prove this , we describe a modified version of the schramm coupling introduced in @xcite , which has the additional property that it is difficult to create small unmatched pieces . to describe this coupling , we will need some notation from @xcite . let @xmath524 be the set of discrete partitions of unity @xmath525 we identify the cycle count of @xmath34 with a vector @xmath526 . we thus want to describe a coupling between two processes @xmath527 and @xmath528 taking their values in @xmath524 and started from some arbitrary initial states . the coupling will be described by a joint markovian evolution of @xmath529 . we now begin by describing the construction of a random transposition . for @xmath530 , let @xmath531 denote the smallest element of @xmath532 not smaller than @xmath215 . let @xmath533 be two random points uniformly distributed in @xmath534 , set @xmath535 and condition them so that @xmath536 . note that @xmath537 are both uniformly distributed on @xmath532 . if we focus for one moment on the marginal evolution of @xmath538 , then applying one transposition to @xmath527 can be realized by associating to @xmath539 a tiling of the semi - open interval @xmath540 $ ] where each tile is equally semi - open and there is exactly one tile for each nonzero coordinate of @xmath527 . ( the order in which those tiles are put down may be chosen arbitrarily and does not matter for the moment . 
) if @xmath541 and @xmath542 fall in different tiles then we merge the two tiles together and get a new element of @xmath524 by sorting in decreasing order the size of the tiles . if @xmath541 and @xmath542 fall in the same tile then we use the location of @xmath542 to split that tile into two parts : one that is to the left of @xmath542 , and one that is to its right ( we keep the same semi - open convention for every tile ) . this procedure works because , conditionally on falling in the same tile @xmath136 as @xmath541 , then @xmath542 is equally likely to be at any point of @xmath543 distinct from @xmath541 , which is the same fragmenting rule as explained at the beginning of the proof of proposition [ p : step2 ] . we now explain how to construct one step of the joint evolution . if @xmath544 are two unit discrete partitions , then we can differentiate between the entries that are matched and those that are unmatched ; two entries from @xmath40 and @xmath545 are matched if they are of identical size . our goal will be to create as many matched parts as possible . let @xmath546 be the total mass of the unmatched parts . when putting down the tilings associated with @xmath40 and @xmath545 we will do so in such a way that all matched parts are at the right of the interval @xmath540 $ ] and the unmatched parts occupy the left part of the interval , as in figure [ fig : coupl1 ] . if @xmath541 falls into the matched parts , we do not change the coupling beyond that described in @xcite ; that is , if @xmath542 falls in the same component as @xmath541 we make the same fragmentation in both copies , while otherwise we make the corresponding coalescence . the difference occurs if @xmath541 falls in the unmatched parts . let @xmath547 and @xmath548 be the respective components of @xmath40 and @xmath545 where @xmath541 falls , and let @xmath549 be the reordering of @xmath550 in which these components have been put to the left of the interval @xmath540 $ ] .
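the tiling construction just described can be sketched directly in python for the marginal evolution of a single copy : lay the parts down as consecutive tiles of [ 0,1 ) , draw two uniform points , merge their tiles if they differ , and otherwise split at the second point . this reproduces only the marginal dynamic ; the two - copy coupling additionally keeps matched tiles aligned on the right , which is not reproduced here . names are illustrative :

```python
import random
from bisect import bisect_right

def transposition_step(parts, rng):
    """One coagulation-fragmentation step on a partition of unity.
    parts: tile sizes in decreasing order, summing to 1."""
    ends, acc = [], 0.0
    for s in parts:
        acc += s
        ends.append(acc)  # cumulative right endpoints of the tiles
    u1, u2 = rng.random(), rng.random()
    i = min(bisect_right(ends, u1), len(parts) - 1)  # tile containing u1
    j = min(bisect_right(ends, u2), len(parts) - 1)  # tile containing u2
    if i != j:  # different tiles: coagulation
        rest = [s for t, s in enumerate(parts) if t not in (i, j)]
        return sorted(rest + [parts[i] + parts[j]], reverse=True)
    # same tile: fragment tile i at the location of u2
    left_end = ends[i] - parts[i]
    rest = [s for t, s in enumerate(parts) if t != i]
    return sorted(rest + [u2 - left_end, ends[i] - u2], reverse=True)
```

each step preserves the total mass and returns the sizes in decreasing order , matching the sorting convention of @xmath524 .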
[ caption of figure [ fig : coupl1 ] : a point is uniformly chosen on @xmath534 and picks a part in @xmath40 and @xmath545 , which are then rearranged into @xmath549 . ] let @xmath551 and let @xmath552 be the respective lengths of the pieces selected with @xmath541 , and assume without loss of generality that @xmath553 . further rearrange , if needed , @xmath547 and @xmath548 so that after the rearrangement , . because @xmath554 , necessarily @xmath555 ( and is uniformly distributed on the set @xmath556 ) . the point @xmath542 designates a size - biased sample from the partition @xmath557 and we will construct another point @xmath558 , which will also be uniformly distributed on @xmath556 , to similarly select a size - biased sample from @xmath559 . however , while in the coupling of @xcite one takes @xmath560 , here we do not take them equal and apply to @xmath542 a measure - preserving map @xmath561 , defined as follows . define the function @xmath562 where @xmath563 . see figure [ fig : coupl2 ] for a description of @xmath561 . note that @xmath561 is a measure - preserving map and hence @xmath564 is uniformly distributed on @xmath534 . define @xmath565 . with @xmath537 and @xmath558 selected , the rest of the algorithm is unchanged , that is , we make the corresponding coagulations and fragmentations . [ caption of figure [ fig : coupl2 ] : a point is chosen uniformly in ( 0,1 ) and serves as a second size - biased pick for @xmath557 ; @xmath566 is mapped to @xmath567 , which gives a second size - biased pick for @xmath559 . ] this coupling has a number of remarkable properties which we summarize below .
essentially , the total number of unmatched entries can only decrease , and furthermore it is very difficult to create small unmatched entries , as the smallest unmatched entry can only become smaller by a factor of at most 2 . in what follows , we often speak of the `` unmatched entries '' between two permutations , meaning that we associate to these permutations elements of @xmath524 and identify matched parts in @xmath524 with matched cycles in the permutations . the translation between the two involves a factor @xmath1 concerning the size of the parts , and in all places it should be clear from the context whether we discuss parts in @xmath568 or cycles of permutations . [ l : coupling ] let @xmath569 be the size of the smallest unmatched entry in two partitions @xmath544 , let @xmath570 be the corresponding partitions after one transposition of the coupling and let @xmath571 be the size of the smallest unmatched entry in @xmath570 . assume that @xmath572 for some @xmath573 . then it is always the case that @xmath574 , and moreover , @xmath575 finally , the number of unmatched parts may only decrease . @xmath576 since @xmath577 , it holds in particular that @xmath578 . proof of lemma [ l : coupling ] that the number of unmatched entries can only decrease is similar to the proof of lemma 3.1 in @xcite . ( in fact it is simpler here , since that lemma requires looking at the total number of unmatched entries of size greater than @xmath65 . since in our discrete setup no entry can be smaller than @xmath579 we do not have to take this precaution . ) we continue to denote by @xmath317 the total number of parts in the range @xmath580 . the only case in which @xmath569 can decrease is if there is a fragmentation of an unmatched entry , since matched entries must fragment in exactly the same way .
now , note that the coupling is such that when an unmatched entry is selected and is fragmented , then all subsequent pieces are either greater than or equal to @xmath581 ( where @xmath133 is the size of the smaller of the two selected unmatched entries ) , or are matched . moreover , for such a fragmentation to occur , one must select the lowest unmatched entry ( this has probability at most @xmath582 , since there may be several unmatched entries with size @xmath569 ) , and then fragment it , which has probability at most @xmath583 , and thus @xmath584 . since @xmath585 , this completes the proof . we have described the basic step of a ( random ) transposition in the coupling . the step corresponding to a random @xmath2-cycle @xmath586 is obtained by taking @xmath587 , generating @xmath588 as in the coupling above ( corresponding to the choice of @xmath589 ) , rearranging and taking @xmath590 to correspond to the location of @xmath588 after the rearrangement , drawing a new @xmath588 ( corresponding to @xmath591 ) and so on . in doing so , we are disregarding the constraint that no repetitions are present in @xmath24 . however , as it turns out , we will be interested in an evolution lasting at most @xmath592 , and the expected number of times that a violation of this constraint occurs during this time is bounded by @xmath593 , which converges to @xmath160 as @xmath32 . hence , we can in what follows disregard this violation of the constraint . now , start with two configurations @xmath594 such that @xmath595 is the element of @xmath524 associated with a random uniform permutation . assume also that initially , the small parts of @xmath596 and @xmath595 ( i.e. , those that are smaller than @xmath101 , the closest dyadic integer to @xmath517 ) , are exactly identical , and that they have the same parity . as we will now see , at time @xmath597 , @xmath598 and @xmath599 will be coupled , with high probability .
note also that , since initially all the parts that are smaller than @xmath101 are matched , the initial number of unmatched entries can not exceed @xmath600 , and this may only decrease with time by lemma [ l : coupling ] . [ l : mass small ] in the next @xmath597 units of time , the random permutation @xmath522 never has more than a fraction @xmath601 of the total mass in parts smaller than @xmath92 , with high probability . the proof is the same as that of proposition [ rest ] , only simpler because the initial number of small clusters is within the required range . we omit further details . [ this can also be seen by computing the probability that a given uniform permutation @xmath519 has more than a fraction @xmath602 of the total mass in parts smaller than @xmath92 , and summing over @xmath603 steps . ] [ l : unmatched large ] in the next @xmath597 units of time , every unmatched part of the permutations is greater than or equal to @xmath604 , with high probability . recall that the total number of unmatched parts can never increase . suppose the smallest unmatched part at time @xmath605 is of scale @xmath116 ( i.e. , of size in @xmath242 ) , and let @xmath606 be this scale . then , when touching this part , the smallest scale it could go to is @xmath607 , by the properties of the coupling ( see lemma [ l : coupling ] ) . this happens with probability at most @xmath608 . on the other hand , with the complementary probability , this part experiences a coagulation . and with reasonable probability , what it coagulates with is larger than itself , so that it will jump to scale @xmath609 or larger . to compute this probability , note that since this is the smallest unmatched part , all smaller parts are matched and thus have a total mass controlled by lemma [ l : mass small ] . in particular , on an event of high probability , this fraction of the total mass is at most @xmath610 . 
it follows that with probability at least @xmath611 , the part jumps to scale at least @xmath609 , and with probability at most @xmath612 , to scale @xmath607 . now , when this part jumps to scale at least @xmath609 , this does not necessarily mean that the _ smallest _ unmatched part is in scale at least @xmath609 , since there may be several small unmatched parts in scale @xmath116 . however , there can never be more than @xmath613 such parts . if an unmatched piece in scale @xmath116 is touched , we declare it a success if it moves to scale @xmath609 ( which has probability at least @xmath611 , given that it is touched ) and a failure if it goes to scale @xmath607 ( which has probability at most @xmath614 ) . if @xmath613 successes occur before any failure occurs at scale @xmath116 , we say that a _ good success _ has occurred , and then we know that no unmatched cycle can exist at scale smaller than @xmath116 . call the complement of a good success a _ potential failure _ ( which thus includes the cases of both a real failure and a success which is not good ) . the probability of a potential failure at scale @xmath116 is at most @xmath615 , which is bounded above by @xmath616 . let @xmath617 be the times at which the smallest unmatched part changes scale , with @xmath618 being the first time the smallest unmatched part is of scale @xmath619 . let @xmath620 denote the scale of the smallest unmatched part at time @xmath621 , and let @xmath622 be such that @xmath623 . introduce a birth death chain on the integers , denoted @xmath624 , such that @xmath625 and @xmath626 and @xmath627 set @xmath628 , and an analysis of the birth death chain defined by ( [ may27 ] ) and ( [ may27a ] ) gives that @xmath629 ( see , e.g. , theorem ( 3.7 ) in chapter 5 of @xcite ) . thus @xmath630 decays as an exponential in @xmath631 . therefore , since @xmath632 , it follows that @xmath633 as @xmath32 .
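the hitting estimate for the birth death chain cited above follows from the classical gambler 's - ruin ratio formula , which is easy to evaluate numerically . a generic python sketch for a chain on { 0 , ... , top } with state - dependent up - probabilities ( the specific rates of ( [ may27 ] ) and ( [ may27a ] ) are not reproduced ; this is a generic illustration with hypothetical names ) :

```python
def hit_top_before_bottom(up_prob, start, top):
    """P( birth-death chain on {0,...,top} started at `start` hits `top`
    before 0 ), where up_prob[i] is the up-probability at state i
    (0 < i < top).  Classical ratio (gambler's-ruin) formula."""
    gammas = [1.0]  # gamma_j = prod_{i=1}^{j} (1 - up_prob[i]) / up_prob[i]
    for i in range(1, top):
        gammas.append(gammas[-1] * (1.0 - up_prob[i]) / up_prob[i])
    return sum(gammas[:start]) / sum(gammas)
```

for a chain with a uniform downward drift ( up - probability below one half at every state ) the result decays geometrically in the distance to the top , matching the exponential decay used in the argument above .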
on the other hand , between times @xmath37 and @xmath634 , the process @xmath635 may have made at most @xmath636 moves with overwhelming probability . this implies that @xmath637 with high probability throughout @xmath638 $ ] . _ end of the proof of theorem _ [ t : mix ] . we are now going to prove that , after @xmath639 steps , there are no more unmatched parts with high probability . the basic idea is that , on the one hand , the number of unmatched parts may never increase , and on the other hand , it does decrease frequently enough . since each unmatched part is greater than @xmath604 during this time , any given pair of unmatched parts is merging at rate roughly @xmath640 . there are initially no more than @xmath613 unmatched parts , so after @xmath641 steps , no unmatched parts remain with high probability . to be precise , assume that there are @xmath642 unmatched parts . let @xmath643 be the time to decrease the number of unmatched parts from @xmath642 to @xmath644 . observe that , for parity reasons ( @xmath513 and @xmath519 must have the same parity of the number of parts at all times ) , @xmath642 is always even . note also that @xmath645 is impossible , so @xmath642 is at least 4 . assume to start with that both copies have at least 2 unmatched parts . then , at rate greater than @xmath646 we pick an unmatched part in the first point @xmath647 for the @xmath2-cycle . since there are at least 2 unmatched parts in each copy , let @xmath648 be the interval of @xmath534 corresponding to a second unmatched part in the copy that contains the larger of the two selected ones . then @xmath649 , and moreover when @xmath542 falls in @xmath648 , we are guaranteed that a coagulation is going to occur in both copies . we interpret this event as a success , and declare every other possibility a failure . hence if @xmath418 is a geometric random variable with success probability @xmath646 , and @xmath650 are i.i.d .
exponentials with mean @xmath651 , the total amount of time before a success occurs is dominated by @xmath652 . if , however , one copy ( say @xmath513 ) has only one unmatched part , then one first has to break that component , which takes at most an exponential time with rate @xmath653 . note that the other copy must have had at least 3 unmatched parts , so after breaking the big one , both copies now have at least two unmatched parts and we are back to the preceding case . it follows from this analysis that in any case , @xmath643 is dominated by @xmath654 and so @xmath655 . now , let @xmath656 and let @xmath657 . then @xmath429 is the time to get rid of all unmatched parts . we obtain from the above that @xmath658 . by markov's inequality , it follows that @xmath659 with high probability . this concludes the proof of theorem [ t : mix ] . we thank hubert lacoin , rémi leblond and james martin for a careful reading of the first version of this manuscript and for constructive comments and corrections . n. berestycki is grateful to the weizmann institute , microsoft research's theory group and the technion for their invitations in june 2008 , august 2008 and april 2009 , respectively . some of this research was carried out during these visits .
let @xmath0 be the permutation group on @xmath1 elements , and consider a random walk on @xmath0 whose step distribution is uniform on @xmath2-cycles . we prove a well - known conjecture that the mixing time of this process is @xmath3 , with threshold of width linear in @xmath1 . our proofs are elementary and purely probabilistic , and do not appeal to the representation theory of @xmath0 .
the rise in the prevalence of childhood obesity has precipitated the need for simple but accurate methods for determining adiposity in paediatric populations . the adolescent years are a period of rapid growth in both the fat ( fm ) and fat - free mass ( ffm ) compartments . despite the recognized importance of measuring body composition in paediatric populations , there are a limited number of valid methods that can be used in both clinical and field settings . most of the simple methods used were developed using the two - compartment ( 2c ) model as the criterion method . the 2c model divides body weight into fm and ffm , relying on assumptions that ignore interindividual variability in the ffm composition , which is the more heterogeneous of the two compartments ( especially in growing children ) . consequently , measured values of fm and ffm are method dependent , making accuracy difficult to assess while hindering comparisons across different methods and studies . multicomponent models , such as 3c and 4c approaches , are robust to inter - individual variability in the composition of the ffm . the 4c model divides body weight into fat , water , mineral , and protein and allows evaluation of several assumed constant relations that are central to 2c models . although reference data exist for these constants in children from birth to 10 y of age , most values were predicted by extrapolating data between infants ( 6 months ) and the 9-year - old reference child [ 5 , 6 ] . the lack of accurate data on body composition further hinders the evaluation of simple field - based techniques such as bioelectrical impedance analysis ( bia ) and simple anthropometric measurements . collectively , these body composition tools are the most commonly used methods in children and adolescents . variables obtained from bia and anthropometry are often used as predictors during regression analysis aimed at developing fm and ffm equations based on criterion methods .
given the vast number of bia and anthropometric - based equations for body composition assessment in children and adolescents , it is difficult to select the most appropriate solution . therefore , clinicians and health - related professionals need specific and detailed criteria for the appropriate model to select , paying close attention to methodological- , biological- , and statistical - related issues that will impact the validity of the body composition value obtained . in 1992 , wang et al . proposed an interesting system to organize the human body composition , the five - level model . based on this approach , the human body was characterized in terms of five levels : atomic , molecular , cellular , tissue , and whole body . most of the methodological research in human body composition analysis has been conducted at the molecular level . some of the most widely used molecular level models divide body mass into two , three , or four components . as suggested by wang et al . , methods of quantifying these components in vivo can be organized using the following general formula : ( 1 ) c = f(q ) , where c represents an unknown component , q a measurable quantity , and f a mathematical function relating q to c . two types of mathematical function ( f ) are distinguished . the first is referred to as type i and was developed using a reference method and regression analysis of data to derive the predictive equation . in these cases , a reference method is typically used to measure the unknown component in a group of participants with certain characteristics . the measurable quantity ( q , i.e. , property and/or the known component ) , as defined in the general formula , is also estimated . regression analysis is then used to establish the mathematical function ( f ) and thus develop the equation that predicts the unknown component . the second type of mathematical function , known as type ii , is based on firmly founded models .
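the type i construction above ( calibrating a measurable quantity q against a reference - measured component c by regression ) can be illustrated with a closed - form least - squares fit . the data below are synthetic and purely illustrative :

```python
def fit_type1(q, c):
    """Ordinary least squares for c ~ slope*q + intercept: the 'type I'
    calibration of a measurable quantity q against a reference-measured
    component c."""
    n = len(q)
    mq, mc = sum(q) / n, sum(c) / n
    sxx = sum((x - mq) ** 2 for x in q)
    sxy = sum((x - mq) * (y - mc) for x, y in zip(q, c))
    slope = sxy / sxx
    return slope, mc - slope * mq

# synthetic, noise-free example: the 'reference' component obeys c = 0.8*q + 2
q = [10.0, 12.0, 15.0, 18.0, 20.0]
c = [0.8 * x + 2.0 for x in q]
slope, intercept = fit_type1(q, c)
```

in practice the reference values carry measurement noise , and the fitted coefficients ( and their population specificity ) are exactly what limits the transfer of a type i equation to a new group .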
these models usually represent proportions or ratios of measurable quantities to components that are assumed constant within and between subjects . indeed , type ii methods are based on assumptions required for their development , and several models have been published . generally , these models were developed from simultaneous equations , which may include two or more unknown components and/or the measurable property . the less complex type ii methods are based on a 2c model , where body mass is divided into fm and ffm , either from hydrometric or densitometric techniques .
( i ) two - compartment model : ( 2 ) body mass = fat + fat - free mass .
( ii ) three - compartment models : ( 3 ) body mass = fat + water + residual , that is , the sum of protein , minerals , and glycogen ; ( 4 ) body mass = fat + bone mineral + residual , that is , the sum of protein , water , and glycogen ; ( 5 ) body mass = fat + bone mineral + lean soft tissue .
( iii ) four - component models : ( 6 ) body mass = fat + water + bone mineral + residual , that is , the sum of protein , soft tissue minerals , and glycogen [ 12 , 13 ] ; ( 7 ) body mass = fat + water + bone mineral + protein [ 14 , 15 ] .
( iv ) five - component model : ( 8 ) body mass = fat + water + bone mineral + soft tissue mineral + residual , that is , the sum of protein and glycogen .
( v ) six - component model : ( 9 ) body mass = fat + water + bone mineral + soft tissue mineral + protein + glycogen .
the densitometric method requires the assessment of body volume ( bv ) , usually estimated by hydrostatic weighing or air displacement plethysmography , serving as the basis for the 2c model of body composition analysis . the addition of total - body water ( tbw ) allows for the development of 3c molecular models . the derived 3c model accounted for the variation in subject hydration by adding a tbw estimation using dilution techniques to behnke 's 2c model .
on the basis of data available at the time from five chemically analyzed human cadavers , siri assumed that ffm consisted of two molecular level components : tbw and a residual component combining protein and total mineral [ m , that is , the sum of soft tissue minerals and bone mineral ( mo ) ] . to complete the model , siri suggested a constant ratio between mineral and protein of 0.35 , as estimated from the five cadavers , with a corresponding density of 1.565 . dual energy x - ray absorptiometry ( dxa ) has the advantage of being a 3c model that quantifies total and regional fat mass , lean soft tissue , and bone mineral content . this method assumes that nonosseous tissue consists of two distinct components , fat and lean soft tissue . the lean soft tissue component is the difference between body weight and the sum of fat and bone mineral ash . typically , the energy source produces photons at two different energy levels , 40 and 70 kev , which pass through tissues and attenuate at rates related to their elemental composition . bone is rich in highly attenuating minerals , calcium and phosphorus , and is readily distinguished from soft tissues . the measured attenuation of dxa 's two main energy peaks is used to estimate each pixel 's fraction of fat and lean according to a series of physical models . overall , the dxa method for estimating three components first separates pixels into those with soft tissue only ( fat + lean soft tissue ) and those with soft tissue + bone mineral , based on the two different photon energies ( lower and higher energies , resp . ) . dxa quantifies fm and ffm with precision [ 18 - 21 ] and provides accurate measures when compared to multicomponent models [ 22 - 26 ] . indeed , scanning speed and minimal risk allowed its wide implementation and usage in large multicenter studies , including the national health and nutrition examination survey [ 27 , 28 ] .
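for concreteness , the classical siri two - compartment equation ( percent fat from body density ) and siri 's water - adjusted three - compartment equation can be written out . the coefficients below are the commonly cited ones and are stated here as an assumption , since the text does not reproduce the formulas :

```python
def siri_2c_percent_fat(body_density):
    """Siri 2C equation: %fat from body density (g/cm^3).
    Coefficients (4.95, 4.50) are the commonly cited values (assumption)."""
    return (4.95 / body_density - 4.50) * 100.0

def siri_3c_percent_fat(body_density, water_fraction):
    """Siri 3C equation: %fat from body density and the water fraction of
    body mass (TBW / body mass).  Coefficients as commonly cited (assumption)."""
    return (2.118 / body_density - 0.78 * water_fraction - 1.354) * 100.0
```

adjusting for measured hydration is what makes the 3c estimate robust to deviations from the assumed ffm hydration , which is the key concern in growing children .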
The 3C molecular model of Siri can then be extended to a 4C molecular model by adding an estimate of bone mineral from DXA. The 4C model provides the criterion measurements for body composition assessment, but its cost, time involvement, poor subject compliance in pediatric populations, and sophisticated technological analysis make it impractical for most, if not all, nonresearch settings. In fact, the 4C model, which divides body mass into FM, water, mineral, and protein (and/or residual), is considered the state-of-the-art method for assessing body composition, as it can accurately account for variability in FFM composition. This model involves measurements from different techniques, thus allowing the evaluation of several assumed-constant relations that are central to 2C models. However, one limitation of estimating body fatness from multicomponent models is that technical errors combine when each component is estimated separately. While higher validity is expected with the measurement of more components, there is an associated propagation of measurement error in the determination of body density (or volume), TBW, and bone mineral. Nevertheless, as long as the technical error in each of these components is relatively small, the cumulative error is also relatively small. Still, when one or more of these components is not precisely measured, the advantages of multicomponent analysis are diminished. Finally, the addition of in vivo neutron activation analysis is required to assess soft-tissue minerals and glycogen, extending FM estimation from a 4C model to 5C and 6C molecular models. There are many biological conditions in which the study of multiple components within the FFM composition is important.
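To make the 4C combination of measurements concrete, the sketch below uses one widely cited 4C formulation (a Fuller-type equation); the coefficients are quoted from the literature and should be verified against the original publication before any real use, and the input values are hypothetical.

```python
def fm_4c_fuller(bv_l: float, tbw_kg: float, bmc_kg: float, bm_kg: float) -> float:
    """Four-compartment fat mass estimate (Fuller-type coefficients, quoted
    from the literature -- verify against the original paper before use).
    bv_l: body volume (L), tbw_kg: total body water (kg),
    bmc_kg: bone mineral content (kg), bm_kg: body mass (kg)."""
    return 2.747 * bv_l - 0.710 * tbw_kg + 1.460 * bmc_kg - 2.050 * bm_kg

# Hypothetical child: BV = 28.5 L, TBW = 18.0 kg, BMC = 1.2 kg, BM = 30.0 kg
fm = fm_4c_fuller(28.5, 18.0, 1.2, 30.0)
pct_fat = 100.0 * fm / 30.0
```

The structure of the equation illustrates the point made above: four separately measured inputs (BV, TBW, BMC, BM) each contribute their own technical error to the final FM estimate.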
Measuring multiple components often reduces the errors of the assumptions in Type II methods, specifically in pediatric populations, in which the contributions of the main FFM components can vary substantially with growth and maturation. As previously stated, 2C models use either hydrometric or densitometric techniques and are based on constants derived from a few adult human cadaver dissections, animal data, and indirect estimates of FFM in human subjects [9, 31, 32]. This approach is less accurate in children because of potential changes during growth and maturation in the assumptions underlying 2C models, such as the density and hydration of the FFM. The 4C model, by contrast, is robust to interindividual variability in the FFM and is the gold standard in pediatric populations. However, multicomponent models are costly, time consuming, and impractical for most settings. For example, to assess FM, a typical 4C study requires many hours for completion, normally starting with isotope dilution for TBW and measurement of body mass; underwater weighing or air-displacement plethysmography and DXA are then needed for body volume and bone mineral assessment, respectively. Two measurable quantities, TBW and bone mineral, along with two properties, body volume and mass, are required to calculate FM. An alternative solution to the limited accuracy of the less complex techniques based on 2C models is the use of age- and sex-specific constants derived from pediatric populations. Hydrometry and densitometry are widely used to assess pediatric body composition because of their ease of application, but their validity depends on the accuracy of the age- and sex-specific constants for FFM hydration or density. Since 1980, these constants have relied on empirical data from Fomon et al.,
who published body composition values for a reference child from birth to 10 y, with most of the values extrapolated from other data. Lohman provided similar reference data for pediatric ages based on simultaneous measurements of TBW, body density, and forearm bone mineral density [34, 35]. Based on these studies and extrapolations, Table 1 presents sex- and age-specific constants for the conversion of body density, water, and mineral to percent fat in children and adolescents. Recently, Wells et al. reported reference data for the hydration and density of the FFM and developed prediction equations based on age, sex, and body mass index standard deviation scores, using 4C measures obtained in a large, healthy sample of children and adolescents aged 4-23 years. Table 2 presents the median values proposed by the authors for hydration, density, and the constants, derived using the LMS (lambda-mu-sigma) method. Using these values, it is possible to substitute the C1 and C2 constants in Siri's equation, thus improving the accuracy of densitometric techniques in estimating FM in a healthy pediatric population. In addition, the age- and sex-specific constants for FFM hydration presented in Table 2 can be used to improve the accuracy of hydrometric methods, which rely on the following stable relationship:

(10) FFM (kg) = TBW (kg) / FFM_TBW,

where FFM_TBW stands for the fat-free mass hydration fraction based on the age- and sex-specific constants and TBW for total body water. This can then be combined with

(11) %FM = (FM / BM) × 100,

where FM is obtained by subtracting FFM from body mass (BM). It is important to emphasize that if adult values are used rather than the proposed age- and sex-specific constants, an overestimation and an underestimation of adiposity are expected from densitometric and hydrometric methods, respectively.
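The hydrometric relationship in equations (10) and (11) can be sketched as follows. The pediatric hydration value used here (0.755) is only a plausible illustrative figure, not a value taken from the paper's Table 2, and the TBW and body mass numbers are hypothetical.

```python
def percent_fat_hydrometric(tbw_kg: float, body_mass_kg: float,
                            ffm_hydration: float) -> float:
    """Equations (10)-(11): FFM = TBW / hydration; %FM = 100 * (BM - FFM) / BM.
    ffm_hydration is the TBW/FFM fraction (adult reference ~0.732;
    pediatric values are higher and age- and sex-specific)."""
    ffm_kg = tbw_kg / ffm_hydration
    fm_kg = body_mass_kg - ffm_kg
    return 100.0 * fm_kg / body_mass_kg

# Hypothetical 8-year-old: TBW = 18 kg, BM = 30 kg,
# with an illustrative pediatric hydration of 0.755
pf_child = percent_fat_hydrometric(18.0, 30.0, 0.755)

# Same child computed with the adult constant 0.732: the lower hydration
# inflates FFM and so underestimates %FM, as the text describes
pf_adult_const = percent_fat_hydrometric(18.0, 30.0, 0.732)
```

Here the adult constant yields roughly 18.0% fat versus roughly 20.5% with the pediatric value, a worked instance of the underestimation from hydrometric methods noted above.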
In fact, Siri's 3C model, by including both TBW and density, is a valid model for determining FM during growth, overcoming the limitations of measuring total body density alone. Hence, the combination of body density and body water has become the most practical multicomponent approach to body composition assessment in growing children. With the development of improved body water procedures using deuterium dilution [34, 36, 37], this approach has offered better estimates of FM and FFM in this population. Although the use of age- and sex-specific constants improves the accuracy of 2C models for assessing FM and FFM in children, if the goal is to develop field-based techniques to predict body composition, multicomponent models should be used as the preferred criterion method. Accordingly, the accuracy of anthropometry- and BIA-based equations depends in part on the accuracy of the criterion variable for measuring FM and FFM, but also on the statistical procedures used to develop these Type I functions. In this section, we review the most common methods used to develop predictive models, that is, Type I functions for assessing body composition, with regression analysis being the most widely used method for their development. Briefly, the predictor variable showing the highest correlation with the response variable is chosen first, yielding the maximum R² (representing the proportion of the total variance in the response variable that is explained by the predictors in a given equation). A second significant variable is then added to the model, the amount of shared variance increasing the R². The procedure is repeated to achieve the best combination of predictor variables until the inclusion of an additional variable no longer significantly improves the R². Another concern when developing predictive equations is multicollinearity, a condition in which independent variables are strongly correlated with each other.
Thus, if too many variables are included as predictors in a given equation, the probability of multicollinearity increases. The variance inflation factor, defined as 1/(1 − R²), can be calculated to detect multicollinearity. To reduce the number of equations generated and the chance of multicollinearity, predictor variables with the lowest correlation with the reference method should be eliminated. Additionally, to ensure the appropriate number of predictors in a specific equation, Mallows' Cp statistic should be used. According to Sun and Chumlea, the equation with the minimum Cp will have the maximum R² and minimum root mean square error (RMSE) values and, as expected, reduced bias and multicollinearity. In developing the regression model, the larger the R², the better the equation fits the data, whereas the precision of the model is evaluated by the RMSE. The RMSE is calculated as the square root of the sum of squared differences between the predicted and the observed values divided by the total number of observations minus the number of parameters:

(12) RMSE = √[Σ(observed − predicted)² / (n − p − 1)],

where n is the number of observations and p is the number of predictor variables. Dividing the RMSE by the mean of the response variable yields the coefficient of variation (CV), a standardized value that is useful for comparing predictive equations with different response variables and different units. Generally, there are specific selection criteria that should be used for testing the accuracy of new predictive Type I functions. One of the first criteria is the validity of the reference method, because its inherent measurement error does not allow for perfect criterion scores. According to Sun and Chumlea, other performance indicators include sample size, the ratio of sample size to the number of predictor variables, the size of the coefficient of correlation (r), R², RMSE, and the CV for the equation.
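The fit statistics just described can be sketched in a few lines; the %FM values below are invented purely to exercise the formulas.

```python
import math

def rmse(observed, predicted, n_predictors):
    """Equation (12): sqrt(SSE / (n - p - 1))."""
    sse = sum((o - pr) ** 2 for o, pr in zip(observed, predicted))
    return math.sqrt(sse / (len(observed) - n_predictors - 1))

def cv_percent(rmse_value, observed):
    """Coefficient of variation: 100 * RMSE / mean of the response variable."""
    return 100.0 * rmse_value / (sum(observed) / len(observed))

def vif(r_squared):
    """Variance inflation factor, 1 / (1 - R^2), for detecting multicollinearity
    (R^2 here is from regressing one predictor on the remaining predictors)."""
    return 1.0 / (1.0 - r_squared)

# Hypothetical criterion vs. candidate-equation %FM values
obs = [20.0, 25.0, 30.0, 22.0, 28.0]
pred = [21.0, 24.0, 31.0, 21.5, 27.0]
e = rmse(obs, pred, n_predictors=2)
cv = cv_percent(e, obs)
v = vif(0.8)  # a predictor with R^2 = 0.8 against the others -> VIF = 5
```

Because the CV is unit-free, it allows the comparison across equations with different response variables that the text highlights.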
To measure the increase in sample size necessary to offset a loss of precision, the ratio between the variance of the prediction error and the variance of the criterion value should be calculated. For example, a sample of 100 participants is required to achieve a significant 1% increase in the R² (precision or accuracy) of a predictive equation with a statistical power of 90%. An additional procedure for assessing the generalizability of predictive equations is cross-validation of the developed models. To test the performance of a predictive equation in cross-validation studies, the pure error (PE) is calculated as the square root of the sum of squared differences between the observed and the predicted values divided by the number of subjects in the cross-validation sample:

(13) PE = √[Σ(ŷ − y)² / n],

where ŷ are the predicted values, y are the observed values, and n is the number of subjects. While smaller RMSE values indicate greater precision in the development of a predictive equation, a smaller PE indicates better accuracy of the equation when applied to an independent sample. The cross-validation procedure involves applying the developed model to another sample from the population. Usually, two-thirds of the sample is used for developing a prediction equation and one-third for cross-validating the model, though other procedures can be used, such as the jackknife method and the predicted residual sum of squares (PRESS) [41, 42]. To test the accuracy of an equation applied to the cross-validation sample, the following parameters should be analyzed: the size of the R², the PE, and the potential for bias (the mean difference between methods).
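Equation (13) and the mean-bias check can be sketched as follows; the FFM values are hypothetical and stand in for an independent cross-validation sample.

```python
import math

def pure_error(observed, predicted):
    """Equation (13): PE = sqrt(sum((yhat - y)^2) / n), computed on an
    independent cross-validation sample, not the development sample."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)

# Hypothetical cross-validation sample of FFM (kg): criterion vs. equation
y = [22.0, 30.5, 27.0, 35.0]
yhat = [23.0, 29.5, 28.0, 34.0]
pe = pure_error(y, yhat)
bias = sum(p - o for o, p in zip(y, yhat)) / len(y)  # mean difference
```

Note that in this toy example every residual is 1 kg in magnitude, so the PE is 1.0 kg even though the mean bias is exactly zero, which is precisely why the text recommends reporting PE alongside the mean difference between methods.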
Further, though less used, the concordance correlation coefficient (CCC) proposed by Lin should also be examined, as it combines a measure of accuracy, namely a bias correction factor quantifying how far the best-fit line deviates from the 45° line through the origin, with a measure of precision, specifying how far each observation deviates from the best-fit line. Also, when testing the performance of a newly developed equation in the cross-validation group, the agreement between methods should be examined by analyzing the 95% limits of agreement, as proposed by Bland and Altman, which test the potential for bias across the range of fatness or leanness. This is done by plotting the differences between the methods (y-axis) against the mean of the methods (x-axis). Alternatively, the residuals of the regression of the method against the criterion (on the abscissa) have also been reported. The presence of a trend between the differences and the mean of the methods is determined using the coefficient of correlation (or by inspecting the homoscedasticity of the residuals); that is, a significant correlation between the x- and y-axis values indicates bias across the range of fatness. The present study aims to review all available BIA- and/or anthropometry-based equations, published between 1985 and 2012, for body composition assessment developed using 3C and 4C models in the paediatric population. An extensive literature review was conducted, according to the guidelines of the PRISMA statement, to select predictive equations for body composition estimation in a paediatric population. The MEDLINE database (Ovid, PubMed) and the Thomson Reuters Web of Knowledge platform were searched for English-language articles published in peer-reviewed journals since 1985, with the last search run on December 11, 2012.
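The two agreement statistics just described can be sketched as below; the paired %FM values are invented for illustration, and the CCC is computed with Lin's standard moment formula.

```python
import math

def bland_altman_limits(a, b):
    """95% limits of agreement: mean difference +/- 1.96 * SD of the differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

def lin_ccc(a, b):
    """Lin's concordance correlation coefficient:
    2*cov(a,b) / (var(a) + var(b) + (mean(a) - mean(b))^2),
    using population (1/n) moments throughout."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((x - mb) ** 2 for x in b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    return 2.0 * cov / (va + vb + (ma - mb) ** 2)

# Hypothetical %FM pairs: criterion method vs. predictive equation
crit = [20.0, 25.0, 30.0, 22.0]
pred = [21.0, 24.0, 31.0, 21.0]
lo, hi = bland_altman_limits(crit, pred)
ccc = lin_ccc(crit, pred)
```

A CCC of 1 requires both perfect correlation and no shift from the identity line, which is why it captures accuracy and precision in a single number, as the text notes.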
The keyword search terms included: children, adolescent, childhood, adolescence, four-compartment model, three-compartment model, multicomponent model, equation, prediction, dual-energy X-ray absorptiometry, bioelectrical impedance analysis, resistance, anthropometry, skinfold, fat, and fat-free mass. The following inclusion criteria were used: (1) participants were healthy children and adolescents; (2) the predictor variables were based on BIA and/or anthropometry; (3) 3C or 4C models were used as the criterion method; (4) relative or absolute FM or FFM was assessed; and (5) a detailed description of the statistical methods used to formulate the equations was provided. Study identification included the following steps: screening of the identified records; examination of the full text of potentially relevant studies; and application of the eligibility criteria to select the included studies. For assessing eligibility, studies were screened independently in an unblinded, standardized manner by the primary author, and the secondary author examined a small sample of them. Our search yielded a total of 410 citations. Of these, 371 studies were discarded because, after review of the title and abstract, they clearly did not meet the criteria. A further 25 studies did not meet the inclusion criteria described in Section 2; therefore, a total of 14 studies involving 33 equations were included in this review. A flow diagram illustrating the number of studies screened, assessed for eligibility, and included, along with reasons for exclusion at each stage, is presented in Figure 1.
A detailed description of the selected equations is presented in Tables 3 and 4, including the characteristics of the study samples, the response and predictor variables, the criterion models, and the statistical methods used to validate and formulate the equations. The studies summarized in Table 3 presented R² values for relative and absolute FM ranging from 0.85 to 0.93 and from 0.55 to 0.96, respectively, with RMSEs ranging from 2.60 to 3.40% for %FM and from 0.94 to 4.29 kg for absolute FM. Values of R² > 0.94 and RMSEs ranging from 1.0 to 2.1 kg were found for FFM estimation. In Table 4, equations developed using a 4C model as the reference method yielded R² values ranging from 0.76 to 0.82, with RMSEs ranging from 3.6 to 3.8%. Overall, DXA was used as the reference method to estimate FM [48-53], %FM [54, 55], and FFM [56-58]. Among the 33 equations presented in Tables 3 and 4, only 7 were cross-validated [48, 52, 53, 56, 58, 59]. Only 2 studies examined PEs [56, 58] for estimating FFM, which ranged from 1.2 to 1.5 kg. In the cross-validation analyses, R² values ranged from 0.80 to 0.92 for absolute FM, with no information available for relative FM. Cross-validation of FFM reported in one study showed an R² value of 0.95, whereas another study provided CV values ranging from 5 to 6%. None of the above studies examined the CCC, and agreement between methods was included in only 3 studies [48, 53, 56]. The narrowest 95% limits of agreement for absolute FM were found for the Dezenberg equations (−0.3 to 0.1 kg), whereas the Huang equation ranged from −5.7 to 6.4 kg. For the Clasey equation, the FFM limits of agreement ranged from −2.4 to 2.5 kg. For all the cross-validated equations, the difference between the predictive and the reference methods was close to 0, indicating reduced bias in the cross-validation samples of the aforementioned studies.
A total of 33 BIA- and anthropometry-based equations for assessing body composition using multicomponent models as the reference method met the criteria and were selected and reviewed. Generally, BIA-based equations were developed for FFM estimates, whereas anthropometry-based models were developed for FM estimates. Several equations were developed for ages below 14 years, while few published equations covered a broader age range, namely 3 to 18 years [49, 50] and 6 to 17 y [55, 58]. The studies of Ellis et al. [49, 50] likewise presented the largest and most ethnically diverse samples, including Caucasians, Hispanics, and Blacks, though the male equations explained only ~60% of the variance in the reference method. Also of note is the absence of a multicollinearity analysis in the majority of the selected equations, with the exception of the predictive model proposed by Morrison et al. A limited number of studies included a standardized value (CV) for the RMSEs [49, 50, 58, 59], a useful parameter for comparing predictive equations with different response variables and units. Another important finding is the small number of studies that actually reported cross-validation of the newly proposed models [48, 52, 53, 56, 58, 59]. This is a major flaw in the ability to generalize a predictive model, as cross-validation establishes whether the equation is robust to sample-specific variation. In this regard, it is important to highlight the equation developed by Clasey et al. for FFM estimation using BIA in a large sample of Caucasian children aged 5-11.9 years. The cross-validation sample used by the authors, comprising ~80 children, explained 95% of the FFM variability from the criterion method.
The few studies that reported agreement between the proposed equation and the criterion method when applied to a cross-validation sample indicated relatively wide limits of agreement, which may limit the accuracy of the models at an individual level, even though the mean bias was small. Additionally, none of the studies that included a cross-validation sample analysed the concordance correlation coefficient (CCC) proposed by Lin, which provides in a single calculation a measure of both the accuracy and the precision of the proposed method relative to the reference technique. Most of the studies presented in Table 3 were developed using DXA as the criterion method, to estimate FM [48-53], %FM [54, 55], or FFM [56-58], using different instruments, models, and scan modes. The validity of the response variable, that is, the criterion method, is decisive for developing appropriate equations based on BIA and/or anthropometry. Therefore, the usefulness of DXA as the reference method for the development of several proposed equations needs to be addressed, in particular some advantages and shortcomings of this technique for assessing body composition in pediatric populations. It has been pointed out that technological advances have given DXA good precision, wide availability, and a low radiation dose, highlighting DXA as a convenient and useful diagnostic tool for body composition assessment. The same authors also concluded that DXA technology could be improved if the uncertainties associated with the trueness of DXA body composition measurements were addressed by conducting more validation studies testing different DXA systems against in vivo methods such as neutron activation analysis and the 4C model. Systematic variations between devices and software versions have been reported previously [61, 62].
Therefore, DXA systems are not interchangeable, and the generalizability of predictive equations generated with different densitometers, software versions, and/or scan modes is still unknown. Further research is required to address methodological issues related to the validity of this technique, especially if it is used as a criterion method for developing alternative approaches to body composition assessment. It is recognized that 4C models are the best approach in pediatric populations for developing and cross-validating new body composition methods. Though other studies [63, 64] included children and adolescents in the prediction of bedside techniques using a 4C model as the criterion method, only Slaughter et al. proposed solutions specifically developed for a healthy pediatric population varying in age, maturation status, gender, ethnicity, and adiposity level. That model included bone mineral assessment by single-photon absorptiometry, and the impact of this estimation on the accuracy of those models is still unknown. Sun et al. and Horlick et al. also developed BIA-based equations for assessing FFM using a 4C model as the criterion method. However, we did not include these equations, since a wide age range (12-94 years) was used for Sun et al.'s proposed models, whereas Horlick et al. included HIV-infected children along with healthy children during model development. It is important to note that multicomponent molecular models do not rely on major assumptions regarding FFM density or hydration, which are the cornerstone of 2C models. However, the use of 3C and 4C models is highly expensive and laborious, which prevents their implementation in most laboratories. Though the precision of multicomponent models may be affected by the propagation of measurement error from the several techniques required, the reliability of 3C and 4C models is not compromised as long as technical errors are relatively small.
In this paper, BIA- and anthropometry-based equations developed against multicomponent models for estimating FM and FFM in children and adolescents were examined. Very few equations included a cross-validation sample, and future research should include this procedure for newly proposed models, so as to eliminate the least accurate and precise equations rather than simply continuing to develop new ones. The predictive equations of Slaughter, developed against a 4C model, used a wide and diverse sample varying in age, maturation status, ethnicity, gender, and adiposity level and should therefore be recommended as a feasible and valid alternative for assessing body composition in paediatric populations. Multicomponent models, specifically the 4C model, can account for potential effects of age, sex, and ethnicity differences in FFM density and composition when used as the criterion method; nevertheless, residual differences can occur. Therefore, specific BIA and/or anthropometric models for clearly defined age, gender, and ethnic groups of children and adolescents are required, using a 4C model as the criterion method. Finally, future research should employ multicomponent models to accurately address the dynamic changes in paediatric body composition, using whole-body measures as predictors.
Simple methods to assess both fat mass (FM) and fat-free mass (FFM) are required in paediatric populations. Several bioelectrical impedance instruments (BIAs) and anthropometric equations have been developed using different criterion methods (multicomponent models) for assessing FM and FFM. Through childhood, FFM density increases while FFM hydration decreases until adult values are reached. Therefore, multicomponent models should be used as the gold standard method for developing simple techniques, because two-compartment (2C) models rely on the assumed adult values of FFM density and hydration (1.1 g/cm³ and 73.2%, respectively). This study reviews BIA- and/or anthropometry-based equations for assessing body composition in paediatric populations. We reviewed English-language articles from MEDLINE (1985-2012), selecting predictive equations developed for assessing FM and FFM using three-compartment (3C) and 4C models as the criterion. Search terms included children, adolescent, childhood, adolescence, 4C model, 3C model, multicomponent model, equation, prediction, DXA, BIA, resistance, anthropometry, skinfold, FM, and FFM. A total of 14 studies (33 equations) were selected, the majority developed using DXA as the criterion method, with a limited number of studies providing cross-validation results. Overall, the selected equations are useful for epidemiological studies, but some concerns remain at the individual level.
SECTION 1. SHORT TITLE. This Act may be cited as the ``UNRWA Humanitarian Accountability Act.'' SEC. 2. UNITED STATES CONTRIBUTIONS TO UNRWA. Section 301 of the Foreign Assistance Act of 1961 is amended by striking subsection (c) and inserting the following new subsection: ``(c)(1) Withholding.--Contributions by the United States to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA), to any successor or related entity, or to the regular budget of the United Nations for the support of UNRWA or a successor entity (through staff positions provided by the United Nations Secretariat, or otherwise), may be provided only during a period for which a certification described in paragraph (2) is in effect. ``(2) Certification.--A certification described in this paragraph is a written determination by the Secretary of State, based on all information available after diligent inquiry, and transmitted to the appropriate congressional committees along with a detailed description of the factual basis therefor, that-- ``(A) no official, employee, consultant, contractor, subcontractor, representative, or affiliate of UNRWA-- ``(i) is a member of a Foreign Terrorist Organization; ``(ii) has propagated, disseminated, or incited anti-American, anti-Israel, or anti-Semitic rhetoric or propaganda; or ``(iii) has used any UNRWA resources, including publications or Web sites, to propagate or disseminate political materials, including political rhetoric regarding the Israeli-Palestinian conflict; ``(B) no UNRWA school, hospital, clinic, other facility, or other infrastructure or resource is being used by a Foreign Terrorist Organization for operations, planning, training, recruitment, fundraising, indoctrination, communications, sanctuary, storage of weapons or other materials, or any other purposes; ``(C) UNRWA is subject to comprehensive financial audits by an internationally recognized third party independent auditing firm and has implemented an 
effective system of vetting and oversight to prevent the use, receipt, or diversion of any UNRWA resources by any foreign terrorist organization or members thereof; ``(D) no UNRWA-funded school or educational institution uses textbooks or other educational materials that propagate or disseminate anti-American, anti-Israel, or anti-Semitic rhetoric, propaganda or incitement; ``(E) no recipient of UNRWA funds or loans is a member of a Foreign Terrorist Organization; and ``(F) UNRWA holds no accounts or other affiliations with financial institutions that the United States deems or believes to be complicit in money laundering and terror financing. ``(3) Definition.--In this section: ``(A) Foreign terrorist organization.--The term `Foreign Terrorist Organization' means an organization designated as a Foreign Terrorist Organization by the Secretary of State in accordance with section 219(a) of the Immigration and Nationality Act (8 U.S.C. 1189(a)). ``(B) Appropriate congressional committees.--The term `appropriate congressional committees' means-- ``(i) the Committees on Foreign Affairs, Appropriations, and Oversight and Government Reform of the House; and ``(ii) the Committees on Foreign Relations, Appropriations, and Homeland Security and Governmental Affairs of the Senate. ``(4) Effective Duration of Certification.--The certification described in paragraph (2) shall be effective for a period of 180 days from the date of transmission to the appropriate congressional committees, or until the Secretary receives information rendering that certification factually inaccurate, whichever is earliest. In the event that a certification becomes ineffective, the Secretary shall promptly transmit to the appropriate congressional committees a description of any information that precludes the renewal or continuation of the certification. 
``(5) Limitation.--During a period for which a certification described in paragraph (2) is in effect, the United States may not contribute to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) or a successor entity an annual amount-- ``(A) greater than the highest annual contribution to UNRWA made by a member country of the League of Arab States; ``(B) that, as a proportion of the total UNRWA budget, exceeds the proportion of the total budget for the United Nations High Commissioner for Refugees (UNHCR) paid by the United States; or ``(C) that exceeds 22 percent of the total budget of UNRWA.''. SEC. 3. SENSE OF CONGRESS. It is the sense of Congress that-- (1) the President and the Secretary of State should lead a high-level diplomatic effort to encourage other responsible nations to withhold contributions to UNRWA, to any successor or related entity, or to the regular budget of the United Nations for the support of UNRWA or a successor entity (through staff positions provided by the United Nations Secretariat, or otherwise) until UNRWA has met the conditions listed in subparagraphs (A) through (F) of section 301(c)(2) of the Foreign Assistance Act of 1961 (as added by section 2 of this Act); (2) citizens of recognized states should be removed from UNRWA's jurisdiction; (3) UNRWA's definition of a ``Palestine refugee'' should be changed to that used for a refugee by the Office of the United Nations High Commissioner for Refugees; and (4) in order to alleviate the suffering of Palestinian refugees, responsibility for those refugees should be fully transferred to the Office of the United Nations High Commissioner for Refugees.
UNRWA Humanitarian Accountability Act - Amends the Foreign Assistance Act of 1961 to withhold U.S. contributions to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) or to any successor or related entity unless the Secretary of State certifies to Congress that: (1) no UNRWA official, employee, representative, or affiliate is a member of a foreign terrorist organization, has propagated anti-American, anti-Israel, or anti-Semitic rhetoric, or has used UNRWA resources to propagate political materials regarding the Israeli-Palestinian conflict; (2) no UNRWA facility is used by a foreign terrorist organization; (3) no UNRWA school uses educational materials that propagates anti-American, anti-Israel, or anti-Semitic rhetoric; (4) UNRWA is subject to auditing oversight; and (5) UNRWA holds no accounts or other affiliations with financial institutions deemed by the United States to be complicit in money laundering and terror financing. Limits, upon certification compliance, U.S. contributions to UNRWA.
CHARLESTON, S.C. (AP) — A white man accused of fatally shooting nine black parishioners at a Charleston church last year was allowed to act as his own attorney in his federal death penalty trial Monday. Dylann Roof's request came against his lawyers' advice, and U.S. District Judge Richard Gergel said he would reluctantly accept the 22-year-old's "unwise" decision. Death penalty attorney David Bruck then slid over and let Roof take the lead chair. The lawyers can stand by and help Roof if he asks. The development came the same day jury selection resumed in the case. The selection process was halted Nov. 7 after lawyers for Roof questioned his ability to understand the case against him. Judge Gergel's ruling last week cleared the way for Monday's process to begin anew. Roof, 22, is charged with counts including hate crimes and obstruction of religion in connection with the June 17, 2015, attack at Emanuel African Methodist Episcopal Church in Charleston. He faces a possible death sentence if convicted. Beginning Monday, 516 potential jurors were to report to the courthouse to be individually questioned by the judge. When 70 qualified jurors are picked, attorneys can use strikes to dismiss those they don't want, until 12 jurors and six alternates are seated. The judge delayed the process of narrowing the jury pool when Roof's lawyers suggested that their client either didn't understand the charges against him or couldn't properly help with his defense. The lawyers didn't say what led them to question Roof's fitness for trial. The decision came after Gergel wrapped up a hastily called two-day hearing to determine if Roof is mentally fit to stand trial, hearing testimony from psychologist James Ballenger and four other unnamed witnesses and reviewed sworn statements from three others. 
The judge said he took the rare step of closing the hearing to the public and media because Roof made statements to a psychologist that might not be legal to use at his trial and could taint potential jurors. On Friday, the judge said he refrained from releasing a transcript of the hearing for the same reason, reversing an earlier pledge to release a redacted transcript. Victims' relatives complained about the secrecy surrounding the proceedings, but Gergel maintains the steps he has taken are meant to ensure Roof receives a fair trial and that pre-trial exposure doesn't provide grounds for an appeal. Roof has also already been found competent in state court, where prosecutors plan a second death penalty trial on nine counts of murder. According to police, Roof sat through nearly an hour of prayer and Bible study at the church with its pastor and 11 others before pulling a gun from his fanny pack and firing dozens of shots. Roof shouted racial insults at the six women and three men he is charged with killing and at the three people left alive, authorities said. Roof said he left the three unharmed so they could tell the world the shootings were because he hated black people. ___ Kinnard can be reached at http://twitter.com/MegKinnardAP . Read more of her work at http://bigstory.ap.org/content/meg-kinnard/ . ||||| Six women and one man were the only ones among 20 prospective jurors to advance to the next round of jury selection in Dylann Roof's federal hate crimes trial. U.S. District Court Judge Richard Gergel asked potential jurors about their written statements concerning their views on the death penalty and then verified they had not sought out information on the case. But the day began with the defendant asking the court to dismiss his appointed attorneys so he could represent himself. It opens the door to a potentially wild scene inside the federal courtroom, where Roof could end up questioning the survivors and victims' family members who take the stand.
After approving Roof's request, Gergel also appointed Roof's former defense team of David Bruck and Sarah Gannett to serve as his standby counsel and offer advice as the 22-year-old Eastover man moves forward with the case. Bruck is one of the country's most respected death penalty attorneys, having worked on high-profile cases such as those of Susan Smith in the Upstate and the Boston Marathon bomber. The approval, however, moved Roof to the lead counsel chair while Bruck moved down a seat. He joins a long line of high-profile defendants who acted as their own attorneys, often with poor results. Serial killer Ted Bundy, Beltway sniper John Allen Muhammad, and Army Major Nidal Hasan, who killed 13 people at the Fort Hood military base in Texas, all ended up with death sentences. After firing their lawyers, Long Island Rail Road shooter Colin Ferguson was sentenced to 200 years in prison, and 9/11 conspirator Zacarias Moussaoui was sent away for life. Defendants who act as their own lawyers generally want to bring attention to their causes and publicize their actions. That almost always runs counter to the advice of lawyers, who urge them not to incriminate themselves. "They think they have a message and that's unfortunately what leads to these crimes in the first place," said New York attorney Tiffany Frigenti, author of an article called "Flying Solo Without a License: The Right of Pro Se Defendants to Crash and Burn" for her law school journal. Pro se representation can also lead to uncomfortable courtroom encounters between defendants and their victims or those victims' families if they are questioned by the very person who is accused of shattering their lives. "It can seem beneficial. Nobody believes in your cause and case more than you," Frigenti said. "But it only works that way in very rare cases — usually appeals."
Former state Attorney General Charlie Condon says Roof's decision also raises new issues for prosecutors, because Roof's lack of knowledge of the law can be seen as both a strength and a weakness. "If you're prosecuting a defendant who is representing themselves, you have to be very careful about not seeming overbearing or taking advantage of lack of legal training because jurors do feel sympathetic," Condon said. Because it's a high-profile death penalty case, Condon said the appellate record will be under scrutiny for years to come. "I prosecuted several capital cases and this case seems like something that will result in a death penalty verdict and will be successful," Condon said. "But the appeals that go on, they can last literally decades." The motion came as prospective jurors returned to the courthouse after a three-week delay while the court dealt with competency issues surrounding Roof. During the individual voir dire phase of jury selection, Bruck was often seen leaning over to offer advice to Roof on what questions to ask. But most times, after hearing U.S. Attorney Jay Richardson raise a litany of questions and complaints about prospective jurors, Roof seemed to lose his train of thought and would instead say he had no objections or follow-up questions. Often, he agreed with whatever complaints Gergel lodged. The decorum order on the voir dire process calls for 20 jurors per day to be called to the courthouse in two groups of 10. At the end of the second session Monday, Gergel said he was increasing that number to 24 jurors per day after seeing only seven make it through. The court will go through some 500 jurors until a group of 70 has been amassed. At that point, Roof and the U.S. attorneys will use alternating strikes to dismiss jurors until the 12 jurors and 6 alternates are seated. Roof faces nearly three dozen charges of hate crimes and crimes against religious practice for the shooting at Emanuel AME Church on June 17, 2015.
Investigators say Roof sat in a Bible study group for an hour before standing and opening fire on the people in the church, killing nine and leaving three alive to tell the story. ||||| Jennifer Berry Hawes is a member of the Watchdog and Public Service team who worked on the newspaper's Pulitzer Prize-winning investigation, "Till Death Do Us Part."
– Dylann Roof may have the chance to question the very people he is accused of shooting when he stands trial for the murders of nine black worshippers at a South Carolina church. Judge Richard Gergel on Monday granted a surprise request from Roof to defend himself against 33 federal charges, with his lawyers serving as "stand-by counsel," reports ABC4. That means his lawyers can offer assistance if Roof asks for it, reports the AP. Roof, who faces the death penalty if convicted, told Gergel he understood the consequences of his decision—which came against the advice of his lawyers—and could file objections and motions and question witnesses on his own. "I do find defendant has the personal capacity to self-representation," Gergel said, per the Charleston Post and Courier, though he called the move "strategically unwise." Gergel's approval means Roof, 22, may now be able to question the three survivors of the Charleston shooting and their family members. The trial could begin within weeks. Jury selection—interrupted by a competency hearing regarding a new psychiatric evaluation of Roof—resumed Monday. Twenty prospective jurors will be questioned daily from a pool of 512 until 70 remain. That number will then be whittled down to 12 jurors and six alternates. A separate case in state court is to begin in January.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Transparent Review of the Affordability and Cost of Electricity (TRACE) Renewable Energy Act of 2010''. SEC. 2. FINDINGS. Congress finds the following: (1) Federal energy-specific subsidies and support to all forms of energy were estimated to be $16.6 billion in 2007, indicating that total Federal energy subsidies have more than doubled over the previous ten years, according to the Federal Financial Interventions and Subsidies in Energy Markets 2007 report by the Energy Information Administration. (2) Research, development, and installation of renewable and other low-emission technologies for electric power generation have been a high priority for the 110th and 111th Congresses. (3) There is a growing need for accurate reporting on the costs associated with each form of alternative energy generation technology because of the significant Federal action and investment in such technology. (4) The costs associated with alternative energy generation technology should be analyzed and made available to assess the ability of each new technology to compete in the marketplace, without Federal subsidy or support, and to optimize the deployment of such technology. (5) The Energy Information Administration has previously created forms and guidelines to collect the necessary information for such reporting and has collected information for several years; however, the program ended due to funding constraints and the lack of an authorization from Congress. SEC. 3. ELECTRIC PRODUCTION COST REPORT. Title II of the Public Utility Regulatory Policies Act of 1978 is amended by adding after section 214 (16 U.S.C. 824 note) the following: ``SEC. 215. ELECTRIC PRODUCTION COST REPORT. 
``(a) Electricity Report.--The Secretary, acting through the Administrator of the Energy Information Administration (in this section referred to as the `Administrator'), shall prepare and publish an annual report, at the times specified in subsection (c), setting forth the costs of electricity production per kilowatt hour, by sector and energy source, for each type of electric energy generation. The report shall include each of the following for the period covered by the report: ``(1) The quantity of carbon dioxide emitted per kilowatt hour. ``(2) The cost of electricity generation in cents per kilowatt hour, or dollars per megawatt hour, for each type of electric energy generation in the United States. ``(3) The factors used to levelize costs, including amortized capital costs, current and projected fuel costs, regular operation and maintenance, projected equipment, and hardware lifetimes. ``(4) The costs for constructing new electric transmission lines dedicated to, or intended specifically to benefit, electric generation facilities in each sector and for each energy source, to the extent practicable. ``(b) Collection and Use of Data.-- ``(1) Data collection.--The Administrator shall collect data and use all currently available data necessary to complete the report under subsection (a). Such data may be collected from any electric utility, including public utilities, independent power producers, cogenerating and qualified facilities, and all State, local, and federally owned power producers. ``(2) Cooperation of other agencies.--The heads of other Federal departments, agencies, and instrumentalities of the United States shall assist with the collection of data as necessary to complete the report under subsection (a), including the Chairman of the Federal Energy Regulatory Commission, the Administrator of the Rural Utilities Service, the Director of the Minerals Management Service, and the Administrator of the Environmental Protection Agency. 
``(c) Issuance of Reports.-- ``(1) Reports for data previously collected.--As soon as practicable, the Administrator shall prepare and publish reports containing the information specified in subsection (a) for each year for which the data was collected before the date of the enactment of this section. ``(2) Annual reports.-- ``(A) First report.--For the year 2012, the Administrator shall collect all necessary data for the completion of the report under subsection (a) by January 31, 2013, and shall issue the report based on that data by June 30, 2013. ``(B) Subsequent annual reports.--For each year after 2012, the Administrator shall collect all necessary data for the completion of the report under subsection (a) by January 31 of the year following the year for which the data was collected, and shall issue the report based on that data not later than April 30 of the year following the year for which the data was collected. ``(d) Review of Electricity Report.--Following the completion of each report under subsection (a), the Administrator may review the findings with organizations that have expertise in the energy industry and demonstrated experience generating similar industry reports, for the purpose of improving the utility, accuracy, and timeliness of future reports.''.
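The "factors used to levelize costs" listed in subsection (a)(3) correspond to the standard levelized-cost-of-electricity calculation: discounted lifetime costs divided by discounted lifetime generation. The following sketch is illustrative only; the figures and the function name are hypothetical and are not drawn from the bill or from EIA methodology documents.

```python
def levelized_cost_per_kwh(capital_cost, annual_om, annual_fuel,
                           annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs
    divided by discounted lifetime output, in dollars per kWh."""
    costs = float(capital_cost)          # year-0 capital outlay
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1.0 + discount_rate) ** year
        costs += (annual_om + annual_fuel) / discount   # O&M and fuel costs
        energy += annual_kwh / discount                 # discounted output
    return costs / energy

# Hypothetical plant: $2M capital, $50k/yr O&M, $30k/yr fuel,
# 10 GWh/yr output, 30-year lifetime, 7% discount rate.
print(round(levelized_cost_per_kwh(2e6, 5e4, 3e4, 1e7, 30, 0.07), 4))  # ≈ 0.0241
```

Reporting such a per-kWh figure alongside the subsidy data described in section 2 is what would let the costs of different generation technologies be compared on a common basis.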
Transparent Review of the Affordability and Cost of Electricity (TRACE) Renewable Energy Act of 2010 - Amends the Public Utility Regulatory Policies Act of 1978 to direct the Secretary of Energy, acting through the Administrator of the Energy Information Administration, to prepare and publish an annual report setting forth the costs of electricity production per kilowatt hour, by sector and energy source, for each type of electric energy generation.
A marine biologist at Cal State Long Beach has set out to identify the species and provenance of all the shells on the Watts Towers in South L.A. ||||| More than 100,000 Egyptians from all walks of life gathered on Saturday at the central square in Cairo, as military officers stationed in the area embraced the protesters, chanting "the army and the people are one – hand in hand." [Photo: An Egyptian Army officer shouts slogans as he is carried by protesters in Cairo, January 29, 2011. Reuters] The military officers removed their helmets as they were hoisted up by the ecstatic crowd. The masses gathered at the square singing, praying, and chanting that they will not cease their protest until Egyptian President Hosni Mubarak resigns. The Egyptian government announced earlier in the day that the curfew would begin earlier, at 16:00, but no one heeded the warning that authorities would act "firmly" if it was broken. Since the early morning, police have not been seen in the streets, and the army has not enforced the curfew. Military forces have been stationed outside several government buildings, television stations, and the national museum to secure them from looters. Asalam Aziz, a 37-year-old accountant who joined the protests, told Haaretz that he was "filled with happiness, on the one hand, because my people are acting in a peaceful manner to change the situation, and on the other hand I am filled with anger over a government which does not listen to the desires of the people." Aziz believed that "we have already crossed the point of no return."
Habba Azli, a 25-year-old physiotherapist, said that "the president is just an evil man, if after all that has happened he continues to remain enclosed in his palace and doesn't resign." The protesters in the square carried signs saying "Game over Mr. Mubarak." Mubarak's speech and the cabinet's decision to resign were not enough for the masses that flooded the streets of Cairo and other major cities in the country, and the riots gradually intensified toward the afternoon. So far there have been reports of dozens of casualties in the protests. Egypt's medical sources have reported over 45 dead, 38 of whom were killed during the last two days. Al Jazeera reported over 2,000 injured during the days of protest. Meanwhile, reports of looting have revealed that mummies have been destroyed in Egypt's national museum. ||||| Saudi Arabia strongly criticized Egyptian protesters and voiced support for beleaguered Egyptian President Hosni Mubarak on Saturday, appearing to underscore growing concern across the Arab world over possible spill-over from popular protests that have ousted Tunisia's long-time strongman and now threaten Mr. Mubarak's grip on power. In a statement carried by Saudi Arabia's state news agency, King Abdullah bin Abdulaziz al-Saud said the protests rocking Egypt were instigated by "infiltrators," who "in the name of ...
– Protesters in Cairo once again defied curfew today, but this time the police largely stood by and let them do it, the LA Times reports. The military meanwhile seems to have switched sides; as protesters swarmed over Cairo’s central square, the officers stationed there threw off their helmets and joined them, crying, “The army and the people are one—hand in hand,” according to Haaretz. Protesters hoisted the officers on their shoulders in triumph. While the mood on the streets was generally joyous, some residents and businesses complained of looting. Some even theorized that the government was intentionally trying to foster anarchy. “The government is trying to transform the people’s revolution into looting mobs so they can justify cracking down,” said one professor. President Mubarak still has allies; Saudi Arabia today condemned the protests, with King Abdullah calling the ringleaders “infiltrators” who were trying to destabilize the country, the Wall Street Journal reports.
The World Health Organization predicts that by the year 2020, depression will be the leading cause of disability in the world, followed by cardiovascular disease. Although various psychological and pharmacological treatments exist for depression (for a review, see ), difficult access to psychotherapy due to monetary or transportation issues and/or low acceptance of antidepressant treatments has led to the development of other forms of treatment. In recent years, there has been a rise in the use of self-help treatments that provide users with information on how to self-identify their problems and propose methods to overcome them. The most frequent forms of delivery include books (bibliotherapy) and the internet (internet-based therapy; for a review, see [3, 4]). Guided self-help treatment implies that some form of support from a therapist is delivered to the patient, either through self-help booklets developed by health professionals or scientists, or via support provided directly by a therapist in addition to use of the self-help material. In contrast, unguided self-help refers to the use of self-help books available in bookstores with no additional support from a health professional [4, 6]. Unguided self-help books are books written by recognized or unrecognized specialists in the field that provide guidance on how to live a better life, be happy, and so forth [4, 7]. Two dimensions are generally proposed for unguided self-help books [4, 6]: problem-focused and growth-oriented. Problem-focused self-help books extensively discuss the nature of problems one can encounter and how to recognize and circumvent them (this category of self-help books has also been named "victimization" books).
In contrast, growth-oriented books present inspirational messages about life and happiness and propose various methods of coping and of developing new skills (this category of self-help books has also been given its own name in the literature). Meta-analyses have shown that guided self-help interventions for depression are more effective than no treatment, and that guided self-help interventions present efficacy similar to that of psychotherapies and/or antidepressants [2, 8, 9]. Moreover, guided self-help interventions are now recommended by the National Institute for Health and Clinical Excellence. Although guided self-help interventions presented in books or via the internet have been extensively studied [2, 11, 12], unguided self-help books have received very little attention. Some studies suggest that reading problem-focused self-help books can have positive effects in the treatment of some problems, such as marital conflict and general emotional disorders, and others suggest that unguided self-help books could be used to prevent the incidence of depression in high-risk groups. However, at this point, there is a lot of cynicism about the potentially positive effects of unguided self-help books, with some authors claiming that self-help books are fraudulent, and others suggesting that buying self-help books may be part of a "false hope syndrome." For many authors, the major limitation of unguided self-help books is their one-size-fits-all approach, in which advice is given without taking into account the personality and/or diagnosis and/or personal circumstances of the reader [16-18]. This latter point brings attention to the lack of information that exists on the type of readership of unguided self-help books.
The few studies performed to date showed that consumers of self-help books come from all levels of educational background, socioeconomic status, and position, although women tend to consume more self-help books than men. Notwithstanding, the literature is inconsistent on whether consumers of self-help books differ from nonconsumers in terms of personality. One study showed that consumers of self-help books present higher neuroticism than nonconsumers, a second study did not find such a difference [4, 21], and a third reported that reading self-help books is associated with an increase in self-actualization. Although these data are interesting, they do not inform us about the characteristics of the self-help book readership. Indeed, studies assessing why certain people are attracted to self-help books propose that many adults are active consumers of self-help books as a way of self-diagnosing and/or treating their own psychological distress, and that this would mainly result from the stigma surrounding depression in adults [22, 23]. In this sense, the active proliferation of the self-help book industry would mainly reflect the underlying depressive symptomatology of individuals, and this industry would be highly successful because individuals need some sort of self-treatment to alleviate their depressive mood and/or disorder. If this is the case, one could predict that active consumers of self-help books might present increased stress physiology and increased depressive symptomatology when compared to nonconsumers of self-help books. Impairment in the regulation of the hypothalamic-pituitary-adrenal (HPA) system has been reported in acute and/or chronic episodes of depression [24, 25].
The impaired negative feedback of the HPA system ultimately leads to hypersecretion of CRF, shifting the activity of the HPA axis toward greater production of glucocorticoids (cortisol in humans; for a review, see ). In this first pilot study, we assessed whether consumers of unguided self-help books present differences in diurnal cortisol levels, stress-reactive cortisol levels, depressive symptoms, and personality traits in comparison to nonconsumers. Personality and depressive symptomatology are important factors to measure in consumers of self-help books, as they could potentially be important predictors of increased stress reactivity and/or depressive symptomatology. In line with the goals of pilot studies (for a review on pilot studies, see ), we performed this first small-scale preliminary study in order to evaluate the feasibility of studying self-help book consumers and potential adverse events related to these types of studies. Most importantly, to guide future research, we aimed to generate effect sizes for our dependent variables (cortisol levels, depressive symptomatology, and personality factors) in order to determine the appropriate sample size needed for a larger experimental study on this issue, which has so far received no empirical attention. The definition of self-help books used in this research project is that given by the neuropsychologist Paul Pearsall, who defines self-help books as books that give advice on how to change your life, attain happiness, find true love, lose weight, and more. We defined consumers of self-help books as individuals who had bought or browsed a minimum of four self-help books in the previous year. We felt that including only individuals who had bought (and not browsed) four self-help books might bias the sample toward people of higher socioeconomic status, which could then have a significant impact on the results.
Questions about the number and types of books bought and/or browsed were asked during a recruitment phone interview. Participants defined as consumers of self-help books were asked to provide the names of these books during the phone interview in order to ascertain whether they fell into our category of consumers of self-help books. Online recruitment was performed using advertisements posted on general or university websites. Since the purpose of the study was to compare two different populations (self-help book consumers and nonconsumers), two different types of advertisement were used. Nonconsumers were recruited via an advertisement featuring a study on personality traits and stress, without any mention of self-help book consumption. This procedure was used to ensure that the nonconsumer group was not composed of people opposed to this type of literature, but only of people not attracted to it. These potential nonconsumer participants were then screened on the phone, and additional questions were asked to validate that they had never read or browsed that kind of self-help literature and that they were not attracted to it. Only those individuals who did not read self-help books and were not attracted to them were included. Self-help book consumers were recruited via an advertisement stating that we were looking for adults who were active consumers of self-help books, for a study on personality and stress. During their visit to the lab, participants from that group were evaluated on their preference for problem-focused versus growth-oriented self-help books using a classification task that we developed. In this task, we presented the consumer group with 10 books and, after giving them 10 minutes to browse the various books, asked them to sort out the five books that they would buy given the opportunity. Five of the ten books proposed a growth-oriented approach (e.g., The Power of Positive Thinking), while five proposed a problem-focused approach (e.g., How Can I Forgive You?: The Courage to Forgive, the Freedom Not To). The books used to assess preference for growth-oriented (books #1 to #5) versus problem-focused (books #6 to #10) self-help books were the following. Growth-oriented self-help books: (1) The Power of Positive Thinking by Norman Vincent Peale, 1952; (2) How to Stop Worrying and Start Living by Dale Carnegie, 1990; (3) You're Stronger Than You Think by Peter Ubel, 2006; (4) You Can Be Happy No Matter What by Richard Carlson, 2006; (5) Choices That Change Lives: 15 Ways to Find More Purpose, Meaning, and Joy by Hal Urban, 2006. Problem-focused self-help books: (6) Why Is It Always About You?: Saving Yourself from the Narcissists in Your Life by Sandy Hotchkiss, 2003; (7) I'm OK, You're My Parents by Dale Atkins, 2004; (8) Shame and Guilt by Jane Middelton-Moz, 1990; (9) Self-Nurture: Learning to Care for Yourself as Effectively as You Care for Everyone Else by Alice D. Domar and Henry Dreher, 2001; (10) How Can I Forgive You?: The Courage to Forgive, the Freedom Not To by Janis A. Spring, 2005. A ratio of growth-oriented to problem-focused preference was calculated from the number of books from each pole that fell within the category of books to buy. For example, if a participant stated that they would buy three growth-oriented books and two problem-focused books, that participant received a ratio of 3/2 = 1.5. With this ratio, the larger the number, the greater the attraction to growth-oriented books, and vice versa for problem-focused books. Participants displaying a ratio of 4 or above were classified in the growth-oriented group, as scores lower than 4 were closer to chance level for preference assessment. When presented with the books, participants were not aware that the goal of this task was to determine their attraction to growth-oriented versus problem-focused books. The reason for this is that it can be predicted that most people would choose not to select problem-focused books if told about the two poles (growth-oriented versus problem-focused), given the negative social value that may be attached to problem-focused self-help books. Participants from both groups were screened over the phone prior to recruitment in order to make sure that they fulfilled our inclusion criteria. Exclusion criteria included presence or history of neurological or psychiatric conditions, diabetes, respiratory disease, asthma, infectious illness, thyroid or adrenal dysfunctions, obesity (body mass index > 30), use of glucocorticoid-altering or cardiovascular medications (e.g., antidepressants, diuretics, antiasthmatics, and beta-blockers), and excessive use of drugs or alcohol. Thirty-two healthy men and women aged between 18 and 65 (M = 36.03 ± 16.09) participated in this study. Eighteen self-help consumers (75% female) and 14 nonconsumers (75% female) were recruited. The average age of the consumers was 38.33 years (± 3.5) and that of the nonconsumers 33.07 years (± 4.72).
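The preference-ratio scoring described above is simple arithmetic and can be sketched as follows; the book numbering follows the list given earlier, and the function name is ours for illustration, not the study's.

```python
# Books #1-#5 are growth-oriented; books #6-#10 are problem-focused.
GROWTH = {1, 2, 3, 4, 5}
PROBLEM = {6, 7, 8, 9, 10}

def preference_ratio(selected_books):
    """Ratio of growth-oriented to problem-focused picks among the
    five books a participant says they would buy."""
    growth = len(GROWTH & set(selected_books))
    problem = len(PROBLEM & set(selected_books))
    if problem == 0:
        return float("inf")   # all five picks growth-oriented
    return growth / problem

# Worked example from the text: 3 growth-oriented + 2 problem-focused picks.
print(preference_ratio([1, 2, 3, 6, 7]))   # 1.5
# A ratio of 4 or above (e.g., 4 growth / 1 problem) puts the participant
# in the growth-oriented group.
print(preference_ratio([1, 2, 3, 4, 6]))   # 4.0
```

Note that with five picks from ten books the ratio can only take a handful of values (0/5, 1/4, 2/3, 3/2, 4/1, 5/0), which is why the 4-and-above cutoff marks a clear preference rather than a near-chance split.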
within the group of self - help books consumers , 11 individuals were classified as having a preference for problem - focused books ( hereon referred to as the problem - focused group ) and 7 were classified as having a preference for growth - oriented books ( hereon referred to as the growth - oriented group ) . three women were menopausal ( one in each condition ) and all others were tested in the follicular phase of their menstrual cycle . all participants provided written informed consent and were compensated for their participation in the study . this study was approved by the research ethics board of the mental health university institute respecting the canadian tri - council 's policy statement for the ethical conduct for experimentation using humans , guided by the world medical association 's declaration of helsinki . we measured personality traits in order to determine whether preference for problem - focused or growth - oriented books would be associated with personality traits that could predict cortisol levels . personality traits were measured using the neo five factor inventory ( neo - ffi ) . this 60-item personality inventory was developed as a short form of the neo - pi . extraversion , openness to new experiences , agreeableness , and conscientiousness . participants are asked to respond on a likert - scale with the extent to which they agree with each item ( strongly agree to strongly disagree ) . the mean coefficient alpha for the revised inventory scale was 0.77 . since low sense of control is linked to the cortisol stress response , locus of control was measured in order to explain any potential physiological stress response differences between groups . using a six - point likert scale ( not at all true to very true ) , the bcc yields four scales including self - concept of own competence , control expectancy : internality , control expectancy : externality , and control expectancy : chance control . 
the mean alpha for this questionnaire is 0.82 for young students and 0.83 for the elderly . self - esteem was measured using the 10-item rosenberg self - esteem scale ( res ) , which is a unidimensional scale that measures personal worth , self - confidence , self - respect , and self - depreciation . participants are asked to respond on a four - point scale with the degree to which they agree with each item ( strongly agree to strongly disagree ) . the scale shows good reliability ( α = 0.80 ) and is a valid test of global self - worth . self - reported depressive symptoms were assessed using the 21-item beck depression inventory ii ( bdi ii ) , which is a unidimensional scale that assesses diverse psychological and physiological symptoms related to depression on a four - point scale . the bdi 's total score ranges from 0 to 63 , displaying a continuum of depression - related symptoms . participants were asked to provide saliva samples at home on two different days , separated by 3 days , with the first day of sampling starting 3 days after their visit to our laboratory . saliva was collected using passive drool at the time of awakening and 30 minutes after awakening in order to calculate the cortisol awakening response ( car ) . it has been reported that during the first hour after awakening , cortisol levels show an acute increase . cortisol determination during this time of day appears to represent a response of the hpa axis to an endogenous stimulation and is a reliable indicator of diurnal hpa activity . participants were also asked to provide three additional samples , at 14:00 , at 16:00 , and before bedtime . these sampling times have been shown in previous studies to be reliable markers of the diurnal cycle of cortisol secretion [ 35 , 36 ] .
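the car computation described above can be sketched in a few lines . this is a minimal illustration ( not the authors ' code ) , assuming the car is expressed as the rise in cortisol from awakening to 30 minutes after awakening , averaged over the two home - sampling days ; the concentration values are invented for the example .

```python
# Minimal sketch of the cortisol awakening response (CAR) described above.
# Sample values (nmol/L) are invented for illustration; the CAR is taken
# here as the rise from awakening to +30 min, averaged over two days.

def car(awakening: float, plus_30: float) -> float:
    """Return the cortisol awakening response for one sampling day."""
    return plus_30 - awakening

# two home-sampling days, separated by 3 days in the protocol above
day1 = car(awakening=12.0, plus_30=19.5)
day2 = car(awakening=11.0, plus_30=18.5)
mean_car = (day1 + day2) / 2
print(mean_car)  # 7.5
```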
as nonadherence to saliva sampling in ambulatory settings has been shown to exert a significant impact on the resulting cortisol profile , individuals were asked to record the exact time of each saliva sample so that participants ' compliance could be assessed . the tsst is an established and highly effective psychosocial stress paradigm used to provoke activation of the hpa axis . the version of the tsst that we used in the current study was somewhat different from the original version , as we used a panel - out condition ( with the judges behind a one - way mirror rather than in the room ) . the reason why we made the decision to use the panel - out condition in the current study is that the research assistants who acted as judges in our experiment were younger than the participants . we have shown in previous studies that environmental factors such as the age of research assistants can lead to a spurious stress response in some individuals [ 39 , 40 ] and , consequently , we wanted to limit contact between our participants and the judges . the panel - out version of the tsst has been used in many of our studies . while one study reported no significant differences between the panel - in and panel - out conditions in men , another study reported higher cortisol reactivity in the panel - in compared to the panel - out condition in women . our laboratory found no significant differences in terms of cortisol reactivity between the panel - in and panel - out conditions when comparing 140 men , women in the luteal phase of their menstrual cycle , and women taking oral contraceptives ; therefore , both conditions induce a stress response ( article in preparation ) . in summary , the tsst involves an anticipation phase ( 10 minutes ) and a test phase that comprises 10 minutes of public speaking . the test phase is divided into a mock job interview ( 5 minutes ) followed by mental arithmetic ( 5 minutes ) . throughout their performance , participants face a one - way mirror and a camera .
behind this mirror , two confederates act as judges and pretend to be experts in behavioral analysis while observing the participants and communicating with them via an intercommunication system . a total of eight saliva samples for cortisol determination were obtained at -20 min and -10 min ( baseline ) , immediately before the tsst , as well as +10 , +20 , +30 , +40 , and +50 min after the tsst began . during recruitment , potential participants were told on the phone that the study consisted of one testing day lasting two hours and that two days of saliva sampling at home were required following the testing session . participants were tested in the afternoon in order to obtain adequate cortisol reactivity to the psychosocial stressor and to control for possible differential effects of the circadian cortisol patterns . upon arrival at the laboratory , participants were asked to read and sign an informed consent form . thereafter , they were asked to answer the psychological questionnaires , which took approximately 15 minutes . participants provided saliva samples by filling a small plastic vial with 1 ml of pure saliva ( i.e. , passive drool ) . participants were instructed about the tsst and prepared their mock job interview speech during a 10-minute anticipation phase . participants then had to do the verbal ( 5 minutes ) and mental arithmetic ( 5 minutes ) tasks . after the recovery period , they were debriefed with regard to the goal of the public speaking task . participants were debriefed about the general hypothesis of the study when they brought the home saliva kit back to the lab . salivary cortisol levels were determined in dominique walker 's laboratory at the douglas institute research center by radioimmunoassay using a kit from dsl ( diagnostic system laboratories , inc . , texas , usa ) . the intra - assay and interassay coefficients of variation for these studies are 4.6% and 5% , respectively . the limit of detection of the assay is 0.01 µg / dl , and all samples were assayed in duplicates .
the first set of analyses was done with group ( 2 levels : consumer versus nonconsumer ) as the independent variable to test whether , as a group , consumers of self - help books present different psychoneuroendocrine profiles when compared to nonconsumers of self - help books . in the second set of analyses , consumers of self - help books were split as a function of their preference for growth - oriented or problem - focused books and compared with the nonconsumer group , using group ( 3 levels : growth - oriented , problem - focused , and nonconsumer ) as the independent variable . for each analysis , personality traits ( as measured by the five neo subscales neuroticism , extroversion , openness , agreeableness , and conscientiousness ) , locus of control , self - esteem , and depressive symptoms were included in univariate anovas . both diurnal and reactive cortisol values followed a normal distribution and , for this reason , raw cortisol data were used for all analyses . for each salivary cortisol analysis , sex and body mass index ( bmi ) were entered as covariates as these are factors associated with cortisol production . diurnal cortisol secretion was calculated using the mean concentration of cortisol for each sample on both days of saliva sampling , resulting in five cortisol means . in order to determine whether self - help book use was related to diurnal cortisol secretion , we calculated the car and used the trapezoidal method to calculate the area under the curve with respect to ground ( aucg ; basal cortisol ) . in order to determine whether self - help book use was related to reactive cortisol secretion , we calculated the area under the curve relative to increase ( auci ; reactive cortisol ) . these analyses were performed in order to determine whether there were significant differences in basal and reactive cortisol levels between groups .
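the trapezoidal auc measures described above can be sketched as follows . this is a minimal illustration , not the authors ' code : aucg sums trapezoids over the sampling period , and auci subtracts the rectangle defined by the baseline ( first ) sample , so it reflects change relative to the first measurement . the sample times and concentrations below are invented .

```python
# Trapezoidal AUC computations for cortisol, as used in the analyses above.
# A sketch, not the authors' code.

def auc_ground(times, values):
    """Area under the curve with respect to ground (AUCg), trapezoidal rule."""
    return sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(values) - 1))

def auc_increase(times, values):
    """Area under the curve with respect to increase (AUCi):
    AUCg minus the rectangle defined by the baseline sample."""
    return auc_ground(times, values) - values[0] * (times[-1] - times[0])

# hypothetical reactive-cortisol samples (minutes relative to TSST onset)
times = [-20, -10, 0, 10, 20, 30, 40, 50]
values = [5.0, 5.2, 5.1, 7.0, 9.5, 8.0, 6.5, 5.5]
print(auc_ground(times, values))
print(auc_increase(times, values))
```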
to ascertain participants ' compliance regarding the diurnal saliva sampling , the times at which saliva samples were taken were averaged within each group and anovas were used to test whether there were significant group differences . finally , we calculated the effect sizes for the comparison between consumers and nonconsumers and for the comparison between preference for growth - oriented and problem - focused books in order to determine ( 1 ) the statistical power of the significant differences observed and ( 2 ) the appropriate sample size for a larger full - scale study . in terms of feasibility , we found it quite easy to recruit consumers of self - help books , as no differences were observed in terms of time and cost of recruitment of this population compared to other populations we have tested in the past . recruitment of nonconsumers was more time consuming because we had to validate a posteriori the nonconsumption of self - help books in the individuals calling us to participate in the research but , overall , the burden on recruitment was not high . no adverse events were reported during recruitment and testing , although the research assistants working on this project reported that testing of consumers of self - help books generally took longer than testing of nonconsumers because consumers were generally more verbal and interacted more with the assistants during testing . figure 1 shows that participants displayed a normal diurnal cortisol rhythm as well as an increase in cortisol in response to the tsst . preliminary analysis also revealed that groups did not differ in terms of time of saliva sampling ( all p values > 0.763 ) and that groups did not differ in terms of age , bmi , years of education , or sex of the participants ( all p values > 0.165 ) .
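the univariate anovas used throughout the analyses above can be sketched compactly . this is an illustrative implementation , not the authors ' code : the f statistic is the ratio of the between - group to the within - group mean squares .

```python
# Compact sketch of a one-way ANOVA (not the authors' code):
# F = (between-group mean square) / (within-group mean square).

def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# e.g., three groups of invented sampling-time deviations (minutes)
f_stat, df_b, df_w = one_way_anova([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [1.5, 2.5, 3.5])
print(f_stat, df_b, df_w)
```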
also , no group differences were observed for personality traits ( all p values > 0.112 ) , locus of control ( all p values > 0.162 ) , and self - esteem ( all p values > 0.295 ) when we contrasted the consumers to the nonconsumers , and when we split the consumers into those individuals with a preference for growth - oriented or problem - focused books . we first contrasted consumers and nonconsumers on basal / reactive cortisol levels and depressive symptomatology . we found no differences between consumers and nonconsumers on diurnal cortisol levels aucg ( f(1,30 ) = 0.080 , p = 0.780 ; see figure 2(a ) ) , car ( f(1,30 ) = 0.31 , p = 0.862 ; see figure 2(b ) ) , and reactive cortisol auci ( f(1,30 ) = 2.172 , p = 0.151 ; see figure 2(c ) ) . for depressive symptomatology , the analysis showed a significant between - group effect ( f(1,31 ) = 6.186 , p = 0.019 ) , with the consumer group displaying a higher mean depressive score ( 7.28 ± 1.01 versus 4.14 ± 0.57 ) when compared to nonconsumers ( see figure 2(d ) ) . in a second set of analyses splitting the consumer group into those individuals with a preference for growth - oriented or problem - focused books , we found no group differences in aucg diurnal cortisol levels ( f(2,29 ) = 0.789 , p = 0.464 ; see figure 3(a ) ) or car ( f(2,29 ) = 0.015 , p = 0.985 ; see figure 3(b ) ) . we did , however , find a significant group difference in reactive cortisol levels auci ( f(2,29 ) = 4.079 , p = 0.028 ) . post hoc analyses showed that the growth - oriented group presented a significantly greater auci when compared to the nonconsumer group ( p = 0.040 ; see figure 3(c ) ) . no differences were found between the problem - focused group and the nonconsumer group ( p = 1.00 ) or between the problem - focused group and the growth - oriented group ( p = 0.10 ) .
strikingly , when one looks at cortisol levels in response to the tsst in the group of nonconsumers ( see figures 1 and 3(d ) ) , one can see that the cortisol response appears to be quite low compared to that of consumers of self - help books . this could represent either a hyporesponse to the tsst in the nonconsumers of self - help books , or a hyperresponse to the tsst in the consumers of growth - oriented self - help books ( see figure 1 ) . in order to contextualize the cortisol response to the tsst in the group of nonconsumers , we extracted data from our compiled databases on reactive cortisol in response to the tsst ( we have more than a thousand participants tested with the same protocol on the tsst in our databases ) . we extracted data for sex- and age - matched controls and compared their response to the tsst to that of the nonconsumers . we found no significant differences in cortisol levels in response to the tsst between participants from our previous studies and nonconsumers of self - help books . this suggests that the group of nonconsumers presents a typical cortisol response to the tsst but that the effect seems blunted given the hyperreactivity observed in the group of consumers of growth - oriented self - help books . when we compared groups on depressive symptomatology , we found a group difference in depressive scores ( f(2,29 ) = 5.876 , p = 0.008 ) . post hoc analyses showed that the problem - focused group presented a significantly higher score on the bdi than the nonconsumer group ( p = 0.006 ; see figure 3(d ) ) . no differences were found between the growth - oriented group and the nonconsumer group ( p = 0.795 ) or between the problem - focused group and the growth - oriented group ( p = 0.095 ) .
cohen 's f effect sizes for group differences on depressive symptomatology were large for both the comparison between consumers and nonconsumers of self - help books ( f = 0.454 ) and between growth - oriented and problem - focused groups when compared to nonconsumers ( f = 0.63 ) . we found a similar large effect size for the group difference on reactive cortisol levels when comparing the growth - oriented and problem - focused groups to the nonconsumer group ( f = 0.507 ) . table 1 presents the effect size for all the comparisons performed in the present study . we also calculated the number of participants that would be needed in a future larger scale study in order to have sufficient statistical power to find group differences on the variables tested . this analysis showed that between 150 and 1000 participants would be needed to find any significant differences in basal cortisol levels as a function of self - help book consumption . by contrast , a much smaller sample size would be needed for reactive cortisol levels ( n = 40 ) and depressive symptoms ( n = 30 ) , based on the medium / large effect sizes found in this small pilot study . the first goal of this pilot study was to determine whether consumers and nonconsumers of self - help books differ in physiological and/or psychological markers of stress . we found no differences in basal and reactive cortisol levels but reported that consumers of self - help books present increased depressive symptomatology when compared to nonconsumers of self - help books . although this difference was obtained with a small sample size , the effect size of the difference was large ( f = 0.454 ) . this first result confirms previous suggestions stating that individuals may buy self - help books in order to self - diagnose and/or treat their psychological distress . 
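cohen 's f for a one - way design can be recovered from a reported f statistic and its degrees of freedom via eta - squared . the sketch below is not the authors ' computation ; applied to the reactive - cortisol anova reported above , it yields a value in the same range as the reported f = 0.507 ( small discrepancies are expected because the published analysis included covariates ) .

```python
import math

# Converting a reported F statistic into Cohen's f via eta-squared.
# A sketch, not the authors' computation.

def eta_squared(f_stat, df_between, df_within):
    """Proportion of variance explained, from an F statistic and its dfs."""
    return (f_stat * df_between) / (f_stat * df_between + df_within)

def cohens_f(eta_sq):
    """Cohen's f effect size from eta-squared."""
    return math.sqrt(eta_sq / (1 - eta_sq))

# reactive cortisol AUCi ANOVA reported above: F(2,29) = 4.079
eta = eta_squared(4.079, 2, 29)
print(round(cohens_f(eta), 2))  # 0.53
```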
the second goal of this pilot study was to assess whether the type of self - help books one has a preference for is a better marker of physiological and psychological markers of stress than general interest in self - help books as a whole . first , we found that consumers of problem - focused self - help books presented significantly more depressive symptoms than nonconsumers of self - help books . this latter result shows that the group difference observed between consumers and nonconsumers of self - help books on depressive symptoms is mainly driven by consumers of problem - focused self - help books . the increased depressive symptoms found in consumers of problem - focused self - help books converge with the literature on depressive symptomatology suggesting that these symptoms are associated with higher self - victimization . future studies on self - help book consumers should therefore measure self - victimization in order to verify whether it mediates the association between preference for problem - focused self - help books and depressive symptomatology . while we cannot ascertain that consumers of this literature chose to read these kinds of books because they show higher depressive symptoms , it is also possible that using this literature leads to higher depressive symptomatology . since our cross - sectional design does not allow us to determine the directionality of the association found , a longitudinal study would be necessary to test this . given the large effect size obtained for this group difference in depressive symptomatology , sample sizes in the range of 20 to 30 participants per group would provide sufficient statistical power to confirm group differences . in future studies of these populations , it could be interesting to assess potential cognitive behavioral tendencies that have been linked to depression .
for example , rumination , guilt , mind wandering , and worries [ 39 , 49 , 50 ] are behavioral tendencies among individuals with depressive symptomatology that may be more prominent among consumers of problem - focused books . indeed , these cognitions and/or behaviors have been shown to be linked to both depressive symptomatology and stress physiology and could act as mediators in the association between problem - focused self - help books consumption and presence of higher depressive symptomatology . measuring them in future studies could therefore strengthen our understanding of the psychoneuroendocrine profile of consumers of problem - focused self - help books . the groups did not differ on diurnal cortisol levels , when consumers were compared to nonconsumers and when the consumer group was split as a function of preference for problem - focused or growth - oriented self - help books . also , the effect sizes for these differences were very low and we calculated that sample sizes between 150 and > 1000 individuals would be necessary to find any statistical differences in diurnal cortisol levels between groups . it is important to note that diurnal cortisol rhythm has been shown to be very stable in healthy populations and that most differences observed in basal cortisol secretion are observed in clinical populations [ 34 , 51 ] . therefore , the fact that we recruited healthy consumers of self - help books and that we excluded participants presenting psychopathologies might explain why we were not able to detect any differences in terms of diurnal cortisol levels . therefore , in future studies , it would be interesting to compare the diurnal cortisol profile of clinically depressed individuals who consume self - help books and clinically depressed nonconsumers if one is interested in measuring diurnal cortisol levels as a function of consumption of self - help books . 
when we compared groups on reactive cortisol levels , we found that consumers of growth - oriented self - help books are significantly more reactive to a laboratory psychosocial stressor when compared to nonconsumers of self - help books , and the effect size for this group difference was large ( f = 0.507 ) . this is an important finding , as we had found no significant difference between consumers and nonconsumers of self - help books as a whole on reactive cortisol levels . this result suggests that it is the preference for a particular type of self - help book ( here , growth - oriented self - help books ) that is associated with increased production of cortisol in response to a psychosocial stressor , and not general attraction toward self - help books . this suggests that the increased stress reactivity to the tsst that we observed in consumers of growth - oriented self - help books cannot be explained by one 's belief that one has control over the situation , as suggested by this type of self - help book . one mechanism that could explain this higher reactivity might be some other personality trait inherent to people who are attracted by this type of literature . even though we measured basic personality traits using the neo - ffi and did not find any differences for the five factors measured , it is still possible that some other personality traits that elude measurement with the neo - ffi could explain the greater cortisol reactivity reported in individuals having a preference for growth - oriented self - help books . on the other hand , we do know that hpa axis reactivity to stressors plays a critical role in providing energy resources to face the environment and is therefore both adaptive and necessary .
therefore , another possible mechanism that could explain the higher stress reactivity observed in individuals having a preference for growth - oriented self - help books is that coping mechanisms taught in this literature allow these consumers to react in a more effective way to their environment as required by the situation . this suggestion goes along with studies performed in depressed patients and normal individuals showing that greater use of escape - avoidance coping ( unhealthy coping mechanism ) is associated with less cortisol reactivity . the present pilot study is characterized by a number of limitations , including a small sample size , a cross - sectional protocol , and an underrepresentation of men . although we made sure that our groups were equivalent in a number of factors that are known to have effects on the physiological stress response ( such as sex , sex hormones , socioeconomic status , age , and bmi ) , it is still possible that some of the negative findings reported here are due to a type ii error due to small sample size . additionally , while the current pilot study relied on the use of a daily sampling questionnaire in order to assess participant 's compliance when collecting diurnal cortisol saliva samples , this method has been shown to be less reliable than the use of electronic devices . however , a recent study suggests that multiday sampling somewhat tempers this effect in comparison to only one day of sampling [ 42 , 55 ] . future studies on consumers of self - help books should consider using electronic devices in the assessment of diurnal cortisol as this method was shown to be more reliable . 
furthermore , even though locus of control did not explain the intergroup differences in terms of stress reactivity and depressive symptoms , other factors such as coping strategies that were not measured in the present study may have predictive value for cortisol secretion in consumers of problem - focused versus growth - oriented self - help books . future studies assessing psychological and/or physiological markers in consumers of self - help books should therefore consider measuring coping strategies , which may explain some of the observed associations between variables . also , as mentioned earlier , the cross - sectional design prevents us from determining any directionality between variables and , consequently , a longitudinal design measuring stress hormones before and after utilization of self - help books could help disentangle the cause - effect relationship of the self - help book industry on physiological and psychological markers of stress . finally , given the differences in psychological and biological markers of stress observed in consumers of problem - focused versus growth - oriented self - help books , it would be important in future studies to determine whether one group of consumers benefits more from a particular type of unguided self - help literature when compared to the other group . although we found no general difference in cortisol levels when comparing consumers and nonconsumers of self - help books , we found that consumers of growth - oriented self - help books are more stress reactive when facing a social evaluative threat , while consumers of problem - focused self - help books show higher depressive symptomatology when compared to nonconsumers of self - help books . our results therefore suggest that preference for a particular genre of self - help book ( problem - focused versus growth - oriented ) may be associated with increased stress and/or mental burden in consumers of self - help books .
every year , the self - help industry generates billions of dollars in the us and canada making it one of the most lucrative businesses in north america . clinicians are now using guided bibliotherapy to help patients deal with various life conditions and we know that unguided self - help books differ greatly in terms of quality of valid scientific information provided . it is predicted that the self - help book industry will only grow in future years . consequently , it is essential to understand the impact of different types of self - help books on individuals ' physical and mental health .
the self - help industry generates billions of dollars yearly in north america . despite the popularity of this movement , there has been surprisingly little research assessing the characteristics of self - help book consumers and whether this consumption is associated with physiological and/or psychological markers of stress . the goal of this pilot study was to perform the first psychoneuroendocrine analysis of consumers of self - help books in comparison to nonconsumers . we tested diurnal and reactive salivary cortisol levels , personality , and depressive symptoms in 32 consumers and nonconsumers of self - help books . in an explorative secondary analysis , we also split consumers of self - help books as a function of their preference for problem - focused versus growth - oriented self - help books . the results showed that while consumers of growth - oriented self - help books presented increased cortisol reactivity to a psychosocial stressor compared to other groups , consumers of problem - focused self - help books presented higher depressive symptomatology . the results of this pilot study show that consumers with a preference for either problem - focused or growth - oriented self - help books present different physiological and psychological markers of stress when compared to nonconsumers of self - help books . this preliminary study underlines the need for additional research on this issue in order to determine the impact the self - help book industry may have on consumers ' stress .
in order to recapitulate the lytic immunological synapse ( is ) in an alignment suitable for super - resolution imaging , we utilized glass coated with antibodies directed against the nk cell activating receptor nkp30 and the adhesion receptor cd18 , as described previously . the human nk cell line , nk92 , was prepared in single cell suspension and adhered to antibody - coated glass for 20 min and then fixed . after fixation , cells were permeabilized and stained for f - actin using phalloidin alexa fluor 488 and for the lytic granule component perforin using a pacific orange - conjugated anti - perforin antibody . using sequential scanning , we evaluated actin via phalloidin alexa fluor 488 in sted mode and perforin via the pacific orange antibody in both sted and confocal imaging modes . images were acquired using leica as af software , then exported to volocity software ( perkin elmer ) and thresholded using the same settings in all cases to allow for quantitative comparison of the images . as we had previously identified , both f - actin and lytic granules were present throughout the synapse . there was a qualitative improvement in the resolution of the lytic granules imaged using the sted modality ( fig . 1a , red ) when compared with confocal ( fig . 1b , red ) . to quantitatively compare resolution , we measured a single granule in both sted and confocal and determined the full width at half maximum ( fwhm ) using leica as af software ( fig . 1c ) . fwhm measures the width of the fluorescence intensity peak and thus reflects the ability to separate or resolve objects . as suggested by our observations , analysis confirmed greater resolution in sted , with a fwhm value of 90 nm , whereas fwhm in confocal was 210 nm . figure 1 caption : visualization of lytic granules imaged by cw - sted and confocal on f - actin . nk92 cells were adhered to glass coated with antibody to activating ( nkp30 ) and adhesion ( cd18 ) receptors , then fixed , permeabilized , and stained for perforin and actin . cells were imaged using cw - sted ( actin , green ) and either cw - sted or confocal ( perforin , red ) . shown is the same cell with granules detected by sted ( a ) or confocal ( b ) . a region of interest is enlarged to show greater resolution of granules ( center panel ) . ( c ) full width half maximum ( fwhm ) measurements of confocal ( green line ) and sted images ( red line ) . horizontal dashed lines show half maxima , vertical dashed lines show width at half maxima . ( d ) representative line profile of pixel intensities of actin ( green line ) and perforin ( red line ) taken from a line bisecting a single granule ( shown in white in the sted image enlargement ) . the increased resolution we were able to achieve using sted resulted in an ability to define lytic granules of an apparently smaller size . thus , with this improved ability to distinguish lytic granules , we sought to confirm our earlier finding that lytic granules , while located in areas of f - actin hypodensity , were either in minimally sized clearances in contact with or atop f - actin filaments . in order to accomplish this , we measured line profiles of fluorescence intensity for perforin and f - actin staining . consistent with our earlier findings , we found an intersection of the line profiles ( fig . 1d ) . this indicates that lytic granules are closely associated with f - actin and thus secreted through minimally sized clearances . actin reorganization at the is is a critical prerequisite for cytotoxicity in both adaptive and innate immune effector cells . it is required for cell surface receptor rearrangements , cell activation signaling , and the subsequent polarization of lytic granules to the is . previous studies performed using 3d reconstruction of confocal images , however , have resulted in a model for secretion in which lytic granules are expelled through a central clearance of actin in both nk cells and their adaptive counterpart , the cytotoxic t lymphocyte .
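the fwhm comparison above ( 90 nm for sted versus 210 nm for confocal ) can be reproduced from a line profile of pixel intensities . the sketch below is not the leica software 's algorithm ; it finds the two half - maximum crossings of a single - peaked profile by linear interpolation , and the profile values are invented .

```python
def fwhm(xs, ys):
    """Full width at half maximum of a single-peaked intensity profile,
    using linear interpolation between samples for the two crossings."""
    half = max(ys) / 2.0
    p = ys.index(max(ys))
    # walk left from the peak to the last sample still at or above half max
    i = p
    while i > 0 and ys[i - 1] >= half:
        i -= 1
    if i == 0:
        left = xs[0]
    else:
        t = (half - ys[i - 1]) / (ys[i] - ys[i - 1])
        left = xs[i - 1] + t * (xs[i] - xs[i - 1])
    # walk right from the peak the same way
    j = p
    while j < len(ys) - 1 and ys[j + 1] >= half:
        j += 1
    if j == len(ys) - 1:
        right = xs[-1]
    else:
        t = (ys[j] - half) / (ys[j] - ys[j + 1])
        right = xs[j] + t * (xs[j + 1] - xs[j])
    return right - left

# a symmetric triangular profile: peak 2.0, half-max crossings at x = 1 and 3
print(fwhm([0, 1, 2, 3, 4], [0.0, 1.0, 2.0, 1.0, 0.0]))  # 2.0
```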
detailed super - resolution analysis of the nk cell lytic synapse by two independent laboratories suggests a new paradigm for cytotoxicity in which a pervasive actin network is present and acts not as a barrier but as a facilitator of secretion . with this new understanding , it will be interesting to see if this model extends to other immune cells undergoing directed secretion of both specialized secretory lysosomes and cytokines . alternatively , it may represent an additional checkpoint utilized by cells of the innate immune system as they access pre - armed functions . using dual color sted nanoscopy of lytic granules on actin filaments in nk cells , we have shown the details of this new paradigm at unprecedented resolution .
natural killer ( nk ) cells are innate immune effectors that eliminate diseased and tumorigenic targets through the directed secretion of specialized secretory lysosomes , termed lytic granules . this directed secretion is triggered following the formation of an immunological synapse ( is ) , which is characterized by actin re - modeling and receptor organization at the interface between the nk cell and its susceptible target . actin at the is has been described to be permissive to secretion by forming a large central clearance through which lytic granules are released . however we , and others , have recently shown that the actin network in nk cells at the is is dynamic yet pervasive . these efforts used multiple high resolution imaging techniques to demonstrate that the actin network does not act as a barrier to secretion , but instead enables the secretion of lytic granules through minimally sized clearances . in our recent publication we visualized actin using continuous wave stimulated emission depletion ( cw - sted ) and lytic granules using the confocal modality . here we report for the first time dual channel sted nanoscopy of nk cell lytic granules on actin filaments .
During the first Toronto SARS outbreak in March 2003, 69 healthcare workers at risk for SARS were interviewed a median of 1.2 months (range 1 to 1.5 months) after exposure (3). Five months (range 4.8 to 5.3 months) after participating in this initial study, 30 of these healthcare workers were asked to participate in another study. These workers were eligible for participation in this second investigation because they had entered the index patient's room from 24 hours before intubation to 4 hours after intubation. Both investigations involved telephone or face-to-face interviews to determine the amount of time the worker had spent in contact with the patient, the activities that had occurred while the worker was in the patient's room, and the personal protective equipment used by the worker. The second questionnaire was more detailed than the first but contained a substantial number of questions that were identical to those in the first questionnaire. Responses to identical questions in the initial and follow-up interviews were compared and expressed as proportions. Responses obtained during the initial interview were considered the reference standard for comparison with follow-up interview responses. Agreement between the initial and follow-up responses was quantified by using the kappa statistic and confidence intervals. The kappa statistic (κ) is a commonly used measurement of agreement or repeatability in epidemiologic studies. Kappa values from 0.20 to 0.39 indicated fair agreement, values from 0.40 to 0.59 indicated moderate agreement, values from 0.60 to 0.79 indicated good agreement, and values > 0.80 indicated excellent agreement (5). Twenty-seven of the 30 eligible healthcare workers agreed to the second interview (Table 1).
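The agreement statistic used here, Cohen's kappa for paired yes/no responses, corrects the raw proportion of identical answers for the agreement expected by chance alone. A minimal sketch (the response data are invented for illustration, not taken from the study):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two paired sequences of binary (0/1) responses."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n            # marginal "yes" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical initial vs. follow-up answers to one exposure question:
initial  = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
followup = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(initial, followup), 3))  # 0.583
```

Here 8 of 10 answers match (80% raw agreement), but after the chance correction the kappa of 0.583 falls in the "moderate" band of the scale quoted above, which is why the paper reports kappa alongside the raw proportions.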
The proportion of healthcare workers who reported the same exposure in the follow-up interview as during the initial interview was > 80% for most respiratory and airway management activities and > 90% for procedures such as vascular catheter insertion. However, the proportion of similar responses was lower for routine patient care activities such as bedding change (67%) and nebulizer treatments (70%) (Table 2). (Table notes: other occupation categories include service assistants, residents, 1 physician, and 1 pharmacist. SARS, severe acute respiratory syndrome; CI, confidence interval; BiPAP, bi-level positive air pressure.) Agreement between initial and follow-up responses was high for most respiratory and airway management activities, including suctioning after intubation (κ = 0.63), manipulation of oxygen face mask or tubing (κ = 0.70), manual ventilation (κ = 0.63), and mechanical ventilation (κ = 0.70). Agreement was fair to moderate for the following respiratory procedures: intubation (κ = 0.46), suctioning before intubation (κ = 0.34), and patient coughing while the healthcare worker was in the room (κ = 0.38). Agreement was high for routine patient care activities, including emptying the urinary catheter collection bag or collecting a urine sample (κ = 0.63), bathing the patient (κ = 0.87), and performing oral care or obtaining nasal swabs (κ = 0.71). Agreement was also high for inserting an arterial line (κ = 0.75) and for cleaning the patient's room (κ = 0.65). Healthcare workers were asked during both interviews to estimate whether they had spent > 10 minutes, > 30 minutes, or > 4 hours in the patient's room. Twenty-two (88%) of the 25 healthcare workers who participated in both interviews provided the same estimates of exposure duration. Two healthcare workers overestimated and 1 underestimated the time spent in the patient's room. Kappa values (κ = 0.52) did not vary according to the duration of exposure.
Relative to their initial responses, on follow-up, healthcare workers tended to overestimate their presence in the patient's room during respiratory and airway management activities, particularly nebulization therapy. However, during the second interview, they were less likely to report being in the room while a bi-level positive air pressure unit was being used or while bedding was being changed. The rates of overestimated versus underestimated responses for other patient care activities were similar (Table 2). Healthcare workers who subsequently developed cases of laboratory-confirmed SARS were no more or less likely to remember their presence or absence during patient care activities (data not shown). In the hospital, use of additional precautions (gown, gloves, and surgical masks for room entry) for methicillin-resistant Staphylococcus aureus was practiced by the healthcare workers (6). Compliance varied among healthcare workers, but the proportion of workers with the same response during the follow-up interview was > 80% for all infection control precautions except wearing a gown (76%, data not shown). In general, responses in the 2 interviews showed little variation in infection control precautions. Our results indicate that healthcare workers in this study reliably recalled contact practices, patient care activities, and infection control precautions 5 months after their initial interview and 6 months after exposure to a patient with SARS. The proportion of identical follow-up responses averaged > 85% for contact practices, patient care activities, and infection control precautions. Agreement between initial and follow-up responses was good to excellent for most respiratory practices and airway management activities, routine patient care activities, and other medical procedures.
The lowest proportion of identical responses observed on the initial and follow-up interviews was for being in the patient's room while the patient was coughing or spitting (59%), with a kappa value (0.38) indicating fair agreement. The risk of droplet and airborne spread of communicable diseases is assumed to be greater if a patient is frequently coughing. Hence, different infection control precautions have been recommended when caring for patients who are coughing (7). However, our results suggest that recollection of contact during this activity may not be reliable, whether this poor reliability is related to the effect of time on memory or to the intermittent nature of coughing. The inferences that can be drawn from this study are limited by the relatively small size of our cohort. Caring for patients with SARS can be a memorable and frightening event (8,9), and recall reliability in our study may not be generalizable to other clinical situations. Furthermore, the similarities among questions during the 2 interviews may have resulted in the potential for recall bias, causing an overestimation of reliability within respondents (10). Finally, our study measured the reliability rather than the validity of healthcare worker recall for determining exposure risk. Nonetheless, our finding that healthcare workers reliably recalled exposures several months after the event should be reassuring to investigators studying risk factors for SARS transmission in hospitals and to infection control practitioners assessing exposure to communicable diseases.
We reinterviewed healthcare workers who had been exposed to a patient with severe acute respiratory syndrome (SARS) in an intensive care unit to evaluate the effect of time on recall reliability and willingness to report contact activities and infection control precautions. Healthcare workers reliably recalled events 6 months after exposure.
The lunar Cherenkov (LC) technique, in which radio telescopes search for pulses of microwave-radio radiation produced via the Askaryan effect @xcite from UHE particle interactions in the lunar regolith, is a promising method for detecting the highest energy cosmic rays (CR) and neutrinos. Proposed by Dagkesamanskii and Zheleznykh @xcite and first attempted by Hankins, Ekers & O'Sullivan @xcite using the Parkes radio telescope, subsequent experiments at Goldstone (GLUE) @xcite, Kalyazin @xcite, and Westerbork @xcite have placed limits on an isotropic flux of UHE neutrinos. The Square Kilometre Array (SKA; @xcite), a giant radio array of total collecting area @xmath0 km@xmath1 to be completed by @xmath22020, will offer unprecedented sensitivity, and has the potential to observe both a cosmogenic neutrino flux from GZK interactions of UHE CR and the UHE CR themselves @xcite. One aim of the LUNASKA project (Lunar UHE Neutrino Astrophysics with the SKA) is to develop experimental methods scalable to giant, broad-bandwidth radio arrays such as the SKA. For this purpose, we have been using the Australia Telescope Compact Array (ATCA), a radio interferometer of six @xmath3-m dishes along a @xmath4 km E-W baseline located in New South Wales, Australia. Here we report on our techniques, which have allowed us to achieve a lower detection threshold than other LC experiments, and have given us the greatest exposure of any detection experiment to @xmath5 eV neutrinos coming from the vicinity of Centaurus A and the Galactic Centre. We have implemented the hardware described below on three of the six ATCA antennas, CA01, CA03, and CA05, with a maximum baseline of @xmath6 m. In each, we installed specialised pulse de-dispersion and detection hardware, with the full signal path at each antenna shown in Fig. [block_diagram].
We have been utilising an FPGA-based back-end, the ``CABB digitiser board'', installed as part of the ongoing Compact Array Broadband upgrade, which allows us to process the full @xmath7 MHz (@xmath8-@xmath9 GHz) bandwidth provided by the standard ATCA L-band signal path. This is split into dual linear polarisations, passed through an analogue de-dispersion filter to correct for the effects of the Earth's ionosphere, and then @xmath10-bit-sampled at @xmath11 GHz. A simple threshold trigger algorithm is then applied, which sends back a @xmath12-sample (@xmath13 ns) buffer of both polarisations to the control room, along with antenna-specific clock times accurate to @xmath14 ns, should the voltage on either polarisation exceed an adjustable threshold. To coherently dedisperse our full @xmath7 MHz bandwidth and recover our full sensitivity, we have made use of innovative new analogue dedispersion filters. The filters consist of @xmath0 m of tapered microwave waveguide wrapped for ease of storage into a spiral pattern, with the continuous sum of reflections along their length producing a frequency-dependent delay varying contrary to (and thus correcting for) the delay induced by the Earth's ionosphere. While the filters can only correct for a fixed delay, the ionosphere over the antenna during night-time hours (at least near solar cycle minimum) is relatively stable, producing a typical @xmath15 ns of dispersion across our bandwidth at zenith. Therefore we set the filters to correct for @xmath16 ns of dispersion, i.e. the expected value when the Moon is at @xmath17 elevation. For the observations reported here, the full continuous bandwidth could not be returned to a central location for coincident triggering, though this will not be the case for future observations.
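The few-nanosecond dispersion the filters correct for can be estimated from the standard cold-plasma ionospheric group delay, Δt ≈ 40.3·TEC/(c·f²). A rough sketch (the TEC value below is an assumed quiet night-time figure, not a number from this paper):

```python
# Ionospheric group delay: dt = 40.3 * TEC / (c * f^2), TEC in electrons/m^2.
C = 299_792_458.0          # speed of light, m/s

def iono_delay(f_hz, tec):
    return 40.3 * tec / (C * f_hz**2)

tec = 1e17                 # ~10 TECU, an assumed typical night-time value
f_lo, f_hi = 1.2e9, 1.8e9  # edges of the 600 MHz ATCA L-band
disp = iono_delay(f_lo, tec) - iono_delay(f_hi, tec)
print(f"dispersion across the band: {disp * 1e9:.1f} ns")
```

Because the delay falls off as 1/f², the low edge of the band lags the high edge by a few nanoseconds, which is the differential delay the tapered-waveguide filters are built to undo.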
Hence our use of only three antennas. Until we combine information from all antennas in real time, we are limited by the sensitivity of each, which we can only partially recover by increasing our trigger rate. We currently set the thresholds so that each polarisation channel triggers at @xmath18 Hz of a maximum @xmath19 Hz, for an effective dead-time on a three-fold trigger of approximately @xmath4%. A summary of our observation runs is given in Tbl. [obstbl], covering a trial period in May 2007 and our main observing runs in February and May 2008. The 2007 and February 2008 runs were tailored to `target' a broad (@xmath20) region of the sky near the Galactic Centre, which harbours the closest supermassive black hole and a potential accelerator of UHE CR. Therefore, for these runs we pointed the antennas towards the lunar centre, since this mode maximises coverage of the lunar limb (from which we expect to see the majority of pulses) and hence achieves the greatest total effective aperture. Our May 2008 observing period targeted Centaurus A only, a nearby active galaxy which could potentially account for two of the UHE CR events observed by the Pierre Auger Observatory @xcite. Regardless of their source, this suggests the likelihood of an accompanying excess of UHE neutrinos, and we do not exclude the possibility of seeing the UHE CR themselves. We therefore pointed the antennas at the part of the lunar limb closest to Cen A in order to maximise sensitivity to UHE particles from this region @xcite. [Table: observation dates (nights thereof) and total time @xmath21 spent observing the Moon in detection mode for each of our observing runs.] Our LUNASKA lunar observations are continuously improving, both as new hardware becomes available and as we become more experienced at observing in a ns radio environment.
We have already achieved the greatest sensitivity of any lunar Cherenkov experiment, and accumulated the greatest exposure to UHE neutrinos in the @xmath22 eV range from the vicinity of Sgr A* and Cen A. We have also demonstrated that an array of antennas observing over a broad bandwidth is extremely efficient for discriminating between terrestrial RFI and true lunar pulses. The next stage is to implement real-time coincidence logic between antennas and improve the RFI filtering, so we expect our @xmath23 observations, for which time has been allocated, to be the most sensitive yet. This research was supported by the Australian Research Council's Discovery Project funding scheme (project numbers DP0559991 and DP0881006). The Australia Telescope Compact Array is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. J.A-M. thanks Xunta de Galicia (PGIDIT 06 PXIB 206184 PR) and Consellería de Educación (Grupos de Referencia Competitivos, Consolider Xunta de Galicia 2006/51).
LUNASKA (Lunar UHE Neutrino Astrophysics with the Square Kilometre Array) is a theoretical and experimental project developing the lunar Cherenkov technique for the next generation of giant radio-telescope arrays. Here we report on a series of observations with the ATCA (Australia Telescope Compact Array). Our current observations use three of the six 22-m ATCA antennas with a 600 MHz bandwidth at 1.2-1.8 GHz, analogue dedispersion filters to correct for the typical night-time ionospheric dispersion, and state-of-the-art 2 GHz FPGA-based digital pulse detection hardware. We have observed so as to maximise the UHE neutrino sensitivity in the region surrounding the Galactic Centre and towards Centaurus A, for which current limits on the highest-energy neutrinos are relatively weak. Keywords: UHE neutrino detection, coherent radio emission, lunar Cherenkov technique, UHE neutrino flux limits, detectors, telescopes.
University of Westminster suspends all student union events deemed sensitive just days after ‘Jihadi John’ was unmasked as a former student and amid controversy over attendance of controversial preacher

The university attended by Mohammed Emwazi, the Islamic State extremist known as “Jihadi John”, has suspended any student union event deemed “sensitive” a day after his identity was revealed. The University of Westminster’s decision came amid confusion over when and if an event entitled Who is Muhammad? – originally scheduled for Thursday night and due to feature a controversial Islamic preacher – would go ahead. A campaign to ban Sheikh Haitham al-Haddad from speaking was launched after allegations were made that he has described homosexuality as a “scourge” and “criminal act”. More than 3,000 people signed a petition to stop him speaking at the event, but he insisted he should be allowed to on the grounds the event was not focused on sexuality. The university’s Islamic society was forced to postpone the event over security concerns on Thursday afternoon after Emwazi, 26, was identified. On Friday sources close to the university’s Islamic Society suggested it had been rescheduled for Monday. However, the university released a statement late on Friday contradicting that and clarifying that “any events that have been deemed sensitive have been suspended”. Controversy over the event came as the university confirmed that Emwazi graduated from a three-year course in information systems and business management in 2009. Haddad said the campaign against him, led by the university’s LGBTI society, was “completely misplaced” because his views on sexuality were not the focus of the discussion. “In the religion of Islam, it is clear-cut that homosexual acts are a sin and are unlawful in sharia. Trying to censor lawful speech does not change this fact,” he said.
Haddad said his views were similar to “those of orthodox Christian or Jewish religious leaders” and that denying him a platform was to deny him his right to free speech. The university’s Islamic society defended him, reiterating that sexuality was not the focus of the debate. In a statement, the society said that inviting Haddad was not intended to cause any offence, but was motivated solely by his standing among religious leaders. Peter Tatchell, the prominent gay rights activist, said members of University of Westminster’s LGBTI society and women’s rights campaigners had for years been targeted by hardline Islamist students. “The atmosphere is intimidatory towards gay and women’s rights campaigners and towards fellow Muslims who don’t share their hardline interpretation of Islam,” he told the Guardian. [Image: Sheikh Haitham al-Haddad (right). Photograph: Marcel Antonisse/AFP/Getty Images] Tatchell, who has given talks at the university and has close links to its LGBTI society, said the group’s posters had been torn down and defaced as recently as last year. “Gay and women students have also told me that they are too frightened to challenge Islamists on campus because they fear retribution,” he said. A senior lecturer, who declined to be named publicly, said the overwhelming majority of Muslim students at the university were moderate in their beliefs and were upset that preachers such as Haddad were invited to talk at the Islamic society event. He said the society was run by extreme figures between 2008 and 2011 but that they had all graduated. There was now only a small number of hardline Wahabi Muslims who could cause problems, he said, citing the example of a Saudi Arabian student complaining six months ago that she was being “shouted at by other Muslim students simply because she was Saudi”. Security on the university’s four campuses has been increased in recent years, with spot checks on students and restrictions on visitors.
A sign at the university warns that the security alert status has been raised to amber due to a “heightened state of awareness of potential security problems or threats”. The university has attempted to distance itself from Emwazi, saying in a statement that it was “shocked and sickened” by the news. Asked about the allegations concerning LGBT students being targeted, a university spokeswoman said it “condemns the promotion of radicalisation, terrorism and violence or threats against any member of our community” and that any student found to be engaging in radicalised activity or intimidating others would be referred to disciplinary procedures. In a separate statement, the university’s Islamic society said it has “nothing to do with” Emwazi and added: “It is not associated with any extremist organisations and that should be obvious and not need stating, but given the climate, it has become necessary to clarify such things in statements such as this.” Since March 2012, the university’s Islamic society is estimated to have hosted 22 events featuring speakers with a history of radical Islamist views, according to the Henry Jackson Society thinktank. Previous speakers have included Anwar al-Awlaki, an al-Qaida leader killed by a US drone strike in Yemen in September 2011; Hizb ut-Tahrir member Jamal Harwood; and Dr Khalid Fikry, who has given speeches in which he appears to suggest that Shia Muslims believe “raping a Sunni woman is a matter that pleases Allah”. Former Westminster university student Yassin Nassari was jailed in 2007 for carrying blueprints for a rocket in his luggage when stopped by police at Luton airport.
It is not known whether Nassari was radicalised at the university, but his Old Bailey trial heard that, after taking a break from his studies, he reappeared wearing long robes and referring to himself as “emir” of the student’s Islamic society. In 2011, the university was in the spotlight after it emerged that the then-president and a vice-president of its students’ union had links to the extremist group Hizb ut-Tahrir, which has long called for the establishment of an Islamic state. In 2012, a series of jihadist videos were posted on the Islamic society’s Facebook page in support of al-Shabaab, the Somali Islamist terrorist group that on 21 February this year called for attacks on US, UK and Canadian shopping malls. Several students spoken to by the Guardian on Friday insisted that the university was not a hotbed of radical Islamism. “I don’t want to use the word ‘extreme’: it’s a volatile word. There’s different scales. I’m a Muslim myself and there are liberal moderates and more conservative,” said Naj, 20, a second-year law student. “The general thing is it’s not a crazy extremist university. Not at all. Everyone I know is condemning this Jihadi John. I don’t like how the media is painting this uni to be a hub of extremism.” Recent graduate Haleema Abdullahi, 22, said the university stood out due to its “large Muslim population” and because many students chose to wear traditional Islamic dress. “People say we’re extremist – the University of Westminster is very active with lots of events open to everyone and many sisters there are active. People say they are conservative as many wear abaya and hijab,” she said. Abdullahi said she was sickened by Emwazi’s actions, but that it was not fair to link him to the university because he graduated six years ago.
She added: “Other extremists went to other unis – it happens.” Speaking at the university’s Regent Street campus, a language masters student who declined to be named said the university may have a hardline reputation because some campuses had many Muslims. “I’ve been here for four years and I haven’t seen any radicalisation and the university Islamic society has made efforts to include people and invite non-Muslims to events, too,” she said. However, another student said the university was segregated between Muslims and non-Muslims. “If you’re not a Muslim, you won’t know what happens in that separate community,” she said.

The first known photograph has emerged of Mohammed Emwazi - the Islamic State militant known as "Jihadi John" - as an adult. Showing him with a goatee beard and wearing a Pittsburgh Pirates baseball cap, the image is revealed in student records from his time at the University of Westminster. It comes after the 26-year-old who became the masked face of the notorious terror organisation was identified as the figure seen in several videos of hostages being beheaded. Sky News can also exclusively reveal details of Emwazi's academic achievements during his stint at the university in London's Cavendish campus, between 2006 and 2009. According to the document, he passed all but two of the modules in his Information Systems with Business Management degree, for which he was awarded a lower second honours (2:2). [Gallery: Jihadi John's University Academic Record. Mohammed Emwazi, aka Jihadi John, studied a computing course at university; Sky News has exclusively obtained his student record.] The record shows he was awarded a "condoned credit; retake" status for modules in Business Information Systems and Managing Business Organisations. Earlier, a photograph had emerged showing the smiling face of an eight-year-old Emwazi sitting with classmates at St Magdalene's Church of England School in west London.
An unnamed classmate told The Sun newspaper that Emwazi, who reportedly came from a devout family, was the only Muslim in the class and would demonstrate Arabic writing to the class. [Video: Boris On 'Jihadi Nonsense'] After completing his degree he went on to become a computer programmer before travelling to Syria in 2013 and later joining IS. Advocacy group CAGE said the Kuwaiti-born Briton was "extremely kind" and "extremely gentle" but had been harassed by the UK security services. Research director Asim Qureshi said Emwazi's family was "in utter shock" that the "beautiful young man" had joined the militant group. [Video: PM Defends Security Services] Prime Minister David Cameron defended the security services, insisting Britain will do "everything we can" to bring terrorists to justice. "They are having to make incredibly difficult judgements, and I think basically they make very good judgements on our behalf," he said.

[Image: “Jihadi John,” the English-speaking militant shown beheading hostages on Islamic State videos, was identified this past week as Mohammed Emwazi. (Associated Press)] Avinash Tharoor is studying for a master’s degree in international public policy at University College London. He’s the editor of the Prohibition Post, a drug policy news site. Before traveling to Syria and becoming “Jihadi John,” the masked English-speaker who beheads Islamic State captives on video, Mohammed Emwazi graduated with a computer programming degree from the University of Westminster. I studied international relations there, and although I never met Emwazi, I wasn’t surprised he had attended my alma mater. Despite boasting an inspiring academic staff and vibrant student life, the university has a dark side to its campus culture.
The ideological climate feels conducive for radicalization; even though the university never intended this, it seems to be a place where extremism can fester. I don’t know if that climate is what turned Mohammed Emwazi into Jihadi John, but Westminster was probably a factor in his radicalization. When I enrolled there in 2010, the year after Emwazi graduated, my classmates were from every corner of the world, including Britons from a range of ethnic, religious and economic backgrounds. It was a welcome change from my homogenous secondary school in the London suburbs. And during the three years I spent at the university, I found the academic life to be intellectually stimulating; the teaching staff was insightful and knowledgeable. My first impression was that the university was an example of multiculturalism’s success. But the longer I spent on campus, the more I noticed strange occurrences and remarks that seemed to fit with an Islamist ideology. Eventually, I realized, these ideas were deeply ingrained at Westminster, allowing individuals to feel comfortable advocating dangerous and discriminatory beliefs. [Video: Foreign policy reporter Adam Goldman explains who Mohammed Emwazi is and how The Washington Post discovered his identity. (Gillian Brockell and Alice Li/The Washington Post)] I recall a seminar discussion about Immanuel Kant’s “democratic peace theory,” in which a student wearing a niqab opposed the idea on the grounds that “as a Muslim, I don’t believe in democracy.” Our instructor seemed astonished but did not question the basis of her argument; he simply moved on. I was perplexed, though. Why attend university if you have such a strict belief system that you are unwilling to consider new ideas? And why hadn’t the instructor challenged her? At the time, I dismissed her statement as one person’s outlandish opinion. Later, I realized that her extreme religious views were prevalent within the institution.
During my second year at the university, our elected student union chose to close the only bar on our central London campus. The ostensible reason for doing so, the representatives argued, was that the establishment was unprofitable. But the closure occurred soon after the British media reported that the vice president of the university’s student union was promoting Hizb ut-Tahrir — an Islamic organization that advocates for the creation of an Islamic caliphate, which would surely outlaw alcohol — on social media. The student union’s president at the time had also used social media for dubious purposes. He posted a video on Facebook in which he performed a rap he’d written titled “Khilafah’s Coming Back,” using the Arabic word for “caliphate” and lyrics that included a derogatory term for non-Muslims. The university’s response to these findings left something to be desired: “If our students have concerns that the actions of fellow students step beyond acceptable behaviour or statutory regulations, then we have appropriate mechanisms in place to deal with these concerns.” As a vociferous critic of the Israeli government, I have participated in demonstrations and activities supporting Palestine for many years. Yet in a discussion about the conflict, I was horrified to hear a fellow student, supposedly a scholar of international relations and politics, complaining about “the f---ing Jews.” What bothered me even more than such bigoted rhetoric was that the individuals who voiced these extreme positions appeared to do so with impunity. The entrenched nature of Islamist extremism on campus became most apparent during my final year at Westminster. Gay friends told me of derisive comments they overheard from individuals who made no attempt to hide their flagrant homophobia. A Christian friend who campaigned to be student union president faced intimidation and harassment regarding his beliefs and his appearance. 
He has a beard, and supporters of his opponent alleged that he was growing facial hair to trick voters into thinking he was Muslim. A female friend of South Asian ancestry told me how she was intimidated in the university library by a group of men who deemed her a “non-Muslim b----” after she declared her support for my Christian friend. These are just a few examples of what I believe to be a more widespread phenomenon of religiously motivated intimidation experienced by students at Westminster. Several of my peers — the targets of these comments — filed complaints with the student union, but they were met with indifference and vague assurances that the issue would be dealt with. In my frustration at the time, I wrote an article about this discrimination for our university’s newspaper, but I received no response from the university or the student union. From my experiences, I believe that the university is unwittingly complicit in perpetuating such radicalization, as it has often allowed Islamist extremism to go unchallenged. I don’t think the university itself is advocating extremism, but by failing to prevent the advocacy of such ideas, the institution is attracting students who are sympathetic to them. Students who do not identify with extreme Islamist ideology are being put at risk of discrimination, intimidation and potentially radicalization by the university’s failure to properly handle the situation. [Image: Mohammed Emwazi, also known as “Jihadi John,” studied computer programming at the University of Westminster in London. (Reuters)] Before the news broke that Emwazi was a University of Westminster graduate, the institution was already embroiled in controversy.
The Islamic Society was set to host a lecture by a homophobic preacher, Haitham al-Haddad, this past Thursday; the speech was postponed because of what the university described as “increased sensitivity and security concerns.”

I hope that the humiliation of having Jihadi John among its alumni leads Westminster to implement big changes to quell extremism. If it does not, I fear for how many new recruits the Islamic State might garner from the graduating class of 2015.

Twitter: @AvinashTharoor
– After the Islamic State terrorist known as Jihadi John was unmasked as Mohammed Emwazi, Sky News unearthed a photo of him from his days at the University of Westminster in London. Much to the dismay of the Pittsburgh Pirates, Emwazi is wearing a team hat. "It is absolutely sickening to everyone within the Pirates organization, and to our great fans, to see this murderer wearing a Pirates cap in this old photo," says a team statement. Westminster, meanwhile, is taking criticism for fostering an atmosphere that seems to embrace extremism. In the Washington Post, for example, former student Avinash Tharoor writes this: "Despite boasting an inspiring academic staff and vibrant student life, the university has a dark side to its campus culture. The ideological climate feels conducive for radicalization; even though the university never intended this, it seems to be a place where extremism can fester. I don’t know if that climate is what turned Mohammed Emwazi into Jihadi John, but Westminster was probably a factor in his radicalization." The school has now postponed all "sensitive" events on campus, including a speech by an Islamic preacher who has described homosexuality as a "scourge," reports the Guardian. (Click for more on Emwazi's background.)
SECTION 1. SHORT TITLE.

This Act may be cited as the ``Phantom Fuel Reform Act''.

SEC. 2. CELLULOSIC BIOFUEL REQUIREMENT.

(a) Provision of Estimate of Volumes of Cellulosic Biofuel.--Section 211(o)(3)(A) of the Clean Air Act (42 U.S.C. 7545(o)(3)(A)) is amended--

(1) by striking ``Not later than'' and inserting the following: ``(i) In general.--Not later than''; and

(2) by adding at the end the following:

``(ii) Estimation method.--

``(I) In general.--In determining any estimate under clause (i), with respect to the following calendar year, of the projected volume of cellulosic biofuel production (as described in paragraph (7)(D)(i)), the Administrator of the Energy Information Administration shall--

``(aa) for each cellulosic biofuel production facility that is producing (and continues to produce) cellulosic biofuel during the period of January 1 through October 31 of the calendar year in which the estimate is made (in this clause referred to as the `current calendar year')--

``(AA) determine the average monthly volume of cellulosic biofuel produced by such facility, based on the actual volume produced by such facility during such period; and

``(BB) based on such average monthly volume of production, determine the estimated annualized volume of cellulosic biofuel production for such facility for the current calendar year; and

``(bb) for each cellulosic biofuel production facility that begins initial production of (and continues to produce) cellulosic biofuel after January 1 of the current calendar year--

``(AA) determine the average monthly volume of cellulosic biofuel produced by such facility, based on the actual volume produced by such facility during the period beginning on the date of initial production of cellulosic biofuel by the facility and ending on October 31 of the current calendar year; and

``(BB) based on such average monthly volume of production, determine the estimated annualized volume of cellulosic biofuel production for such facility for the current calendar year.

``(II) Total production.--An estimate under clause (i) with respect to the following calendar year of the projected volume of cellulosic biofuel production (as described in paragraph (7)(D)(i)), shall be equal to the total of the estimated annual volumes of cellulosic biofuel production for all cellulosic biofuel production facilities described in subclause (I) for the current calendar year.''.

(b) Reduction in Applicable Volume.--Section 211(o)(7)(D)(i) of the Clean Air Act (42 U.S.C. 7545(o)(7)(D)(i)) is amended--

(1) in the first sentence, by striking ``based on the'' and inserting ``using the exact''; and

(2) in the second sentence--

(A) by striking ``may'' and inserting ``shall''; and

(B) by striking ``same or a lesser volume'' and inserting ``same volume''.

(c) Definition of Cellulosic Biofuel.--Section 211(o)(1)(E) of the Clean Air Act (42 U.S.C. 7545(o)(1)(E)) is amended--

(1) by striking ``The term'' and inserting the following: ``(i) In general.--The term''; and

(2) by adding at the end the following:

``(ii) Exclusions.--The term `cellulosic biofuel' does not include any compressed natural gas, liquefied natural gas, or electricity used to power electric vehicles that is produced from biogas from--

``(I) a landfill;

``(II) a municipal wastewater treatment facility digester;

``(III) an agricultural digester; or

``(IV) a separated municipal solid waste digester.''.

(d) Regulation of Cellulosic and Advanced Fuel Pathways.--

(1) In general.--Those provisions of the final rule of the Administrator of the Environmental Protection Agency entitled ``Regulation of Fuels and Fuel Additives: RFS Pathways II, and Technical Amendments to the RFS Standards and E15 Misfueling Mitigation Requirements'' (79 Fed. Reg. 42128 (July 18, 2014)) relating to existing and new cellulosic biofuel pathways under the renewable fuel standard under section 211(o) of the Clean Air Act (42 U.S.C. 7545(o)) and that conflict with the amendments made by subsection (c) shall have no force or effect.

(2) Reissuance.--The Administrator of the Environmental Protection Agency shall reissue the rule described in paragraph (1) to conform the rule to the amendments made by subsection (c).

(e) Cellulosic Biofuel Mandate.--In section 211(o)(2)(B)(i) of the Clean Air Act (42 U.S.C. 7545(o)(2)(B)(i)), in the table following subclause (III), strike the applicable volume of cellulosic biofuel (in billions of gallons) relating to calendar year 2014.
Phantom Fuel Reform Act

This bill amends the Clean Air Act to revise the renewable fuel standard program. Beginning on January 1, 2015, the renewable fuel that is required to be blended into gasoline must be advanced biofuel, which cannot be ethanol derived from corn starch. The bill revises the renewable fuel standards by decreasing the total volume of renewable fuel that must be contained in gasoline sold or introduced into commerce for years 2015 through 2022.

The Environmental Protection Agency (EPA) must determine the target amount of cellulosic biofuel to be blended into transportation fuel based on the actual volume of cellulosic biofuel produced in the current year. The EPA must reduce the required volume of renewable fuel in transportation fuel by the same volume of cellulosic biofuel in the fuel.

Cellulosic biofuel does not include any compressed natural gas, liquefied natural gas, or electricity used to power electric vehicles that is produced from biogas from a landfill, a municipal wastewater treatment facility digester, an agricultural digester, or a separated municipal solid waste digester.